Two months ago, Meta, the parent company of Facebook and Instagram, announced significant changes to its content moderation policies. It positioned the shift as a step toward giving users "more speech and fewer mistakes," a message that seemed to align with the political landscape of the moment.
The company noted that its platforms are built ‘to be places where people can express themselves freely’ as it relaxed restrictions on political speech, dialled back third-party fact-checking, and altered its hate speech policies, an approach that caters to the Trump administration’s broader push for deregulated digital discourse.
Ironically, just weeks after this announcement, Instagram was hit by a glitch that flooded user feeds with violent and graphic content. Disturbing videos, including fatal shootings and graphic accidents, appeared in people’s Reels, even for users who had set their content sensitivity controls to the strictest level. Meta scrambled to address the problem, issuing an apology and claiming to have resolved the issue.
However, users continued reporting that such content lingered on their feeds. The incident raises the question: Has Meta’s shift in policy created an environment where harmful content is more likely to thrive?
In place of third-party fact-checking, Meta introduced an X-style Community Notes programme, which allows users to decide when posts are misleading and need more context. However, a recent study revealed that 85% of Community Notes on X remain invisible to users, with only 8.3% of proposed notes visible on average. Community Notes generally require consensus among users with differing political views before a note is shown; without that consensus, most notes never surface.
Meta’s latest mishap raises questions about user and brand safety. For brands, this is not just a technical glitch but a reputational nightmare. Advertisers pour millions into Meta’s platforms, carefully curating brand-safe campaigns. Since 2019, advertisers have spent ₹5,998,234,033 on its platforms, according to the Meta Ad Library Report.
If reduced content moderation means a higher likelihood of ads appearing alongside explicit and dangerous material, advertisers may reconsider their investments, much as they did with X. Elon Musk’s controversial takes led to an advertiser exodus, and the microblogging platform is reportedly pressuring advertising giants like Interpublic Group (IPG) to increase client spending.
With Mark Zuckerberg gradually taking a ‘manlier’ approach to his platforms and following in Musk’s footsteps, Meta could be heading toward a similar result. The irony is that Meta has long justified strict content moderation as necessary to keep its advertising business healthy. If its new approach alienates brands, it could undermine Meta’s revenue model.
Meta’s history of content controversies
This is not the first time Meta has found itself in a storm over content management. In 2017, Facebook was implicated in spreading hate speech that fueled violence against the Rohingya minority in Myanmar. In response, it admitted its shortcomings and took several measures, including conducting a human rights impact assessment to understand its role in the crisis.
It enhanced its content moderation efforts by hiring more local-language moderators, updating its policies to better address hate speech, and collaborating with international bodies such as the UN's Independent Investigative Mechanism for Myanmar.
More recently, the platform has also been criticised for its role in the spread of misinformation during global elections, the COVID-19 pandemic, and geopolitical conflicts. Donald Trump was particularly critical of Meta’s content moderation during his first term as President of the United States.
Despite these criticisms, the company has typically responded to such crises by tightening policies, investing in third-party fact-checking and pledging increased accountability. This time, however, it appears to be moving in the opposite direction.
While it has apologised for the mishap, no long-term structural changes, policy measures or reviews have been announced. The company appears to be in damage-control mode without addressing the root cause. Simply put, Meta seems to be shifting its priorities.
Meta’s internal shake-up
This shift extends beyond content moderation and into Meta’s internal policies. In recent months, the company has moved to dismantle much of its Diversity, Equity, and Inclusion (DEI) initiatives. Reports have also surfaced that Meta engaged in gender discrimination through its job advertisement algorithm, which reinforced gender stereotypes by showing "typically female professions" to women and vice versa.
This raises concerns that the company’s commitment to inclusivity is eroding. Similarly, it has recently carried out aggressive layoffs, cutting 3,600 employees while raising executive bonuses to as much as 200% of base salary.
If that wasn’t enough, Meta is deepening its fixation on artificial intelligence (AI). It is ramping up AI development and significantly increasing capital expenditure in the area, and is reportedly considering building a $200 billion data centre as part of that push. It is also expanding its presence in India, setting up a new site in Bengaluru and hiring for 41 positions.
While Meta deemed the laid-off personnel low performers, it apparently believes that AI can stand in for human intervention.
It also plans to launch a separate Meta AI app and a dedicated app for Reels, suggesting that its vision is increasingly centred on automation, machine learning, and content generation rather than user-driven organic engagement.
But at what cost is Meta pursuing these changes? While AI development and market expansion are essential for keeping up with competitors like Google and OpenAI, they appear to come at the expense of the platform’s ethical framework. By deprioritising content moderation, eliminating DEI efforts, and pushing forward with AI-driven expansion, Meta seems to be shedding the very values it once claimed to champion.
While these changes may stem from Zuckerberg’s push for a ‘masculine work environment’, the transformation also speaks to a concerning trend in the tech industry: the rise of techno-fascism. As platforms gain more control over digital discourse, they are slowly absolving themselves of responsibility for content moderation. We are inching closer to an environment where technology is used not just to connect but to control. By shifting the burden of content policing away from themselves and onto users, platforms are creating a digital landscape where misinformation, extremism and harmful content can spread unchecked.
While social media platforms and their leaders are changing their colours, it is unclear whether these changes will make the internet a better or worse place. Right now, the signs are far from reassuring.