Meta has apologised for an error that resulted in some Instagram users seeing violent and graphic content on their Reels feed, stating that the issue has been resolved.
According to CNBC, Meta acknowledged that the error caused some users to see content in their Instagram Reels feed that should not have been recommended, and apologised for the mistake.
The statement followed numerous complaints from Instagram users across social media platforms about an influx of violent and 'not safe for work' content recommendations. Some users reported encountering such material despite enabling Instagram's 'Sensitive Content Control' at its highest moderation setting.
Instagram's policy states that the platform seeks to protect users from disturbing imagery by removing content that is particularly violent or graphic. Prohibited content includes 'videos depicting dismemberment, visible innards or charred bodies,' along with 'sadistic remarks towards imagery depicting the suffering of humans and animals.'
However, the platform allows some graphic content if it serves to condemn or raise awareness about human rights abuses, armed conflicts or acts of terrorism. Such content may be accompanied by warning labels.
On Wednesday night in the US, CNBC observed several Instagram Reels posts showing dead bodies, graphic injuries and violent assaults, marked as 'Sensitive Content.'
According to Meta's website, the company uses internal technology, including artificial intelligence and machine learning tools, alongside a team of over 15,000 reviewers to detect and remove the majority of violating content before users report it. The platform also aims to avoid recommending content that is 'low-quality, objectionable, sensitive or inappropriate for younger viewers.'
The incident comes amid the platform's recent shift in content moderation policies. On 7 January, the company announced plans to adjust how it enforces content rules, aiming to reduce errors that have led to user censorship.
The platform stated it would shift automated systems from scanning for 'all policy violations' to focusing on 'illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams.' For less severe violations, the company said it would rely on user reports before taking action.
Additionally, the platform said it was scaling back content demotions based on predictions of potential violations, with CEO Mark Zuckerberg announcing that the platform would allow more political content. The company also plans to replace its third-party fact-checking programme with a 'Community Notes' model, similar to the one used on Elon Musk's platform X.
The changes have been widely viewed as an effort by Zuckerberg to repair relations with US President Donald Trump, who has previously criticised Meta's moderation practices.
Earlier this month, Zuckerberg visited the White House to discuss how Meta could support the Trump administration in promoting American technological leadership abroad, according to a Meta spokesperson on X.
These developments follow a series of layoffs at Meta in 2022 and 2023, in which the company cut 21,000 employees, nearly a quarter of its workforce, affecting its civic integrity and trust and safety teams.