
Advertisers question Meta’s commitment to protecting brands

As Meta revises its content moderation policies, advertisers worry about the impact on brand safety and increased exposure to objectionable content.

Social Samosa

Meta Platforms, under the leadership of CEO Mark Zuckerberg, has started revising its content moderation policies, a shift that has sparked concerns among advertisers about potential risks to brand safety. The change was detailed in an exclusive report by the Wall Street Journal that highlights the evolving relationship between the tech giant and its advertisers. Earlier this month, Zuckerberg criticised 'legacy media' for supporting content censorship and argued that fact-checkers had undermined public trust.

In response, the company has replaced U.S.-based fact-checkers with ‘Community Notes,’ a crowdsourced content moderation system similar to the one implemented by Elon Musk’s X (formerly Twitter). The International Fact-Checking Network (IFCN), which includes organisations like AFP, challenged Zuckerberg’s claim that the fact-checking programme promotes censorship.

Zuckerberg defended the overhaul, stating that fact-checking had become 'too politically biased' and resulted in excessive censorship. Despite Meta’s assertion that the changes aim to reduce restrictive moderation, advertisers are wary about how these adjustments might affect the visibility and placement of their campaigns.

A tense pivot on brand safety

The company's new policies mark a departure from its prior commitments to remove potentially harmful content. Monika Bickert, the company's vice president of content policy, reportedly stated during a recent advertiser call that the platform will now prioritise removing content posing safety risks while granting users more freedom to discuss news and broader issues.

Monika Bickert, Vice President of Content Policy, Meta

One notable change involves reclassifying 'hate speech' as 'hateful conduct,' which Meta claims will allow for greater context in moderating content. For instance, statements like 'women should not be allowed to serve in combat,' previously banned as discriminatory, are now permitted. This shift means brands must fine-tune their ad placement settings to avoid proximity to such posts, exacerbating existing concerns over brand safety.

Nicola Mendelsohn, Head of Global Business, Meta

Advertisers have expressed unease about their campaigns potentially appearing next to objectionable material. While Nicola Mendelsohn, head of global business for the company, reassured marketers that the company remains committed to providing tools for transparency and brand suitability, some fear the changes may lead to an uptick in offensive or misleading content on Meta’s platforms.

A new era of moderation

The WSJ report highlights a pivotal cultural moment at the company. After years of tightening restrictions in response to advertiser and societal pressure, its current approach shifts more responsibility to users. The ‘Community Notes’ system will initially apply to organic posts but could eventually extend to paid ads. Content flagged by users may face demotion rather than removal, except for high-severity offences like racial slurs, which Meta’s AI will still monitor and remove automatically.

This hands-off approach comes amid rising political scrutiny around content moderation. In recent years, 'brand safety,' once a term confined to marketing circles, has become politically charged. High-profile cases, such as Elon Musk’s lawsuit against an ad trade group over alleged anti-conservative bias, have brought the issue into sharp focus.

Musk filed the lawsuit following a detailed report issued in July by the House Judiciary Committee, chaired by Ohio Republican Jim Jordan. The report suggested that the ad trade group and its members may have violated antitrust laws by withholding ad spending from social media platforms and conservative media outlets.

The balancing act for advertisers

Ad executives have been cautious about speaking out on brand safety, fearing it could make them targets. Some agencies are now hesitant to send clients ‘point-of-view’ memos on the issue when online controversies emerge. Doug Rozen, former CEO of Dentsu’s media-buying unit in the U.S., reportedly said that brand safety is under attack at a time when it is needed more than ever, given the large audiences on platforms such as Instagram and X and their increasingly hands-off approach to monitoring posts.

The company's strategy poses a challenge for advertisers who rely on the platform’s vast reach and advanced targeting capabilities. While some large brands, including Procter & Gamble and General Motors, have raised concerns, smaller businesses remain Meta’s financial backbone. The tech giant’s $131.9 billion ad revenue in 2023 underscores its dominance, despite past advertiser boycotts over hate speech.

In a note to top marketers and agency executives, Mendelsohn reportedly sought to reassure advertisers that Meta remains committed to providing brands with tools to ensure their ads run in safe places.

Even if their ads don’t run directly alongside objectionable content, some advertisers worry the changes could lead to an explosion of toxic or misleading posts on Meta’s platforms, making the overall environment less suitable for ads. Many advertisers also publish organic posts on the company’s platforms, and some are asking Meta to provide tools so those posts, too, can avoid appearing near controversial content.

For advertisers, the question remains: can Meta strike the right balance between free speech and a safe environment for brands? As this policy shift unfolds, it signals a broader recalibration of tech platforms’ roles in moderating content and their accountability to stakeholders.
