Facebook has shared the steps it is taking to improve the transparency of its content monetization policies, brand safety tools, and practices.
The updates cover which aspects of the partner and content monetization policies and brand safety controls will be evaluated, how Community Standards are enforced, and Facebook's collaborations with the industry.
Content Monetization, Brand Safety Tools and Practices
The evaluation of the partner and content monetization policies, and of the brand safety controls made available to advertisers, will be conducted by the Media Rating Council (MRC) and will include (but not be limited to):
- Evaluation of the development and enforcement of Partner Monetization Policies
- Evaluation of the development and enforcement of Content Monetization Policies, including how these policies apply the 4A’s/Global Alliance for Responsible Media (GARM) Brand Suitability Framework and comply with MRC’s Standards for Brand Safety
- Assessment of Facebook’s ability to apply brand safety controls to ads shown within publisher content, such as in-stream videos, Instant Articles, or Audience Network
- A determination of the accuracy of available reporting in these areas
Facebook plans to report progress on the overall process by mid-August and will communicate a plan for further marketplace updates thereafter.
The next Community Standards Enforcement Report (CSER) will be released in August, now that the report has moved to a quarterly cadence.
The report shows how well Facebook is removing content that violates its Community Standards. Facebook expects to share prevalence data for hate speech in the November CSER, barring additional COVID-19 challenges.
Facebook is also exploring opening up its content moderation systems to external audits and is reaching out to civil society groups, the advertising industry, and other organizations and individuals.
Collaborating With The Industry
Facebook highlights its work with partners:
- Participation in the World Federation of Advertiser’s Global Alliance for Responsible Media to align on brand safety standards and definitions, scaling education, common tools and systems, and independent oversight for the industry
- Holding sessions with industry bodies to provide further insight into how the teams work to review content and enforce Community Standards
- Certification from independent groups, like the Digital Trading Standards Group which examines advertising processes against JICWEBS’ Good Practice Principles
- Continued work with advertising industry bodies, such as the Global Alliance for Responsible Media and the Media Rating Council, to audit Facebook’s brand safety tools and practices
The updates come after an estimated 1,000 advertisers boycotted advertising on Facebook over its handling of hate speech. Unilever, Verizon, Starbucks, Coca-Cola, and Ben & Jerry’s are among the names that joined the boycott.
Some paused advertising for at least 30 days, while others paused it indefinitely. Jonathan Greenblatt, CEO of the Anti-Defamation League, called the July 7 meeting between civil rights leaders and Facebook on this issue “disappointing at best and sort of exasperating at worst”.
In a tweet, he added: “Our meeting with Mark Zuckerberg reaffirmed that Facebook cares more about profit than it does about protecting its users from #hate & disinformation. That’s why we are continuing to push hard for advertisers to tell them it is time to #StopHateForProfit”.