Do Meta’s Brand Safety Controls Go Far Enough?

Dan Meier, 16 May 2023

Meta has rolled out tools to give advertisers better controls over where their ads appear on Facebook and Instagram, in an effort to assuage brand safety concerns among digital marketers. The development came as welcome news to advertisers who have long called for greater scrutiny of ad placements on the social media platform, even if the announcement was somewhat overshadowed by an advertising glitch that ate through millions of dollars' worth of ad budgets.

Concerns over brand safety on Facebook have been building since the 2016 US election (the company later acknowledged its role in the spread of disinformation), coming to a head during Black Lives Matter protests in 2020, when more than 1,000 advertisers boycotted Facebook for its inaction on hate speech. Three years, one rebrand and the biggest stock drop in Facebook history later, Meta has introduced tools enabling advertisers to avoid appearing next to certain content on Facebook and Instagram Feeds.

Using the Global Alliance for Responsible Media (GARM) Suitability Framework, the new system classifies content as high, medium or low risk. Advertisers can then select the level of risk they are comfortable with, choosing to appear alongside the following types of content:

  • Expanded inventory: the default setting shows ads next to any content that Meta deems eligible for monetisation, regardless of risk factor.
  • Moderate inventory: the filter Meta calls “moderately conservative” excludes high-risk content only.
  • Limited inventory: the “most conservative” option rules out all high- and medium-risk content.
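To make the tiers concrete, the sketch below is a hypothetical illustration of how a GARM-style suitability filter could map risk labels to inventory settings; the labels and tier names come from the article, but the logic and function names are assumptions, not Meta's actual implementation.

```python
# Hypothetical illustration of GARM-style suitability filtering.
# Risk labels and inventory tiers mirror the article; the placement logic
# is an assumption, not Meta's actual system.

ELIGIBLE_RISK = {
    "expanded": {"low", "medium", "high"},  # default: any monetisable content
    "moderate": {"low", "medium"},          # "moderately conservative": excludes high risk
    "limited":  {"low"},                    # "most conservative": excludes high and medium risk
}

def can_place_ad(content_risk: str, inventory_setting: str) -> bool:
    """Return True if an ad using the given inventory setting may appear
    next to content carrying the given risk label."""
    return content_risk in ELIGIBLE_RISK[inventory_setting]

# Example: a "limited inventory" campaign avoids medium-risk content,
# while the default "expanded" setting would still serve against it.
print(can_place_ad("medium", "limited"))   # False
print(can_place_ad("medium", "expanded"))  # True
```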

Brand suitability firm Zefr uses AI to analyse video, image, text and audio on the Facebook and Instagram Feed, and labels the content according to the GARM standards. Through this third-party verification, Meta found that less than one percent of Facebook Feed content falls into the high-risk category. “The solution allows advertisers to measure, verify and understand the suitability of content near their ads to help them make informed decisions in order to reach their marketing goals,” Meta said in its announcement.

Risky business

The use of AI for contextual analysis is nothing new; in 2021 Meta built AI technology to spot harmful content, and claims the tool has reduced the prevalence of hate speech on the platform. But the implementation of third-party verification allows for independent assessment of the content deemed fit for ad adjacency. “The introduction of Zefr is just another layer saying, we’re not going to 100 percent take Meta or anybody else at their word,” says Neal Thurman, Co-Founder of the Brand Safety Institute.

That said, AI is still in its infancy, and there is no guarantee of catching every piece of nefarious content – especially with the nuanced approach required to keep funding quality journalism on the topics in question. “I don’t know if we will ever get to the point, whether it’s Meta or any of the other platforms, where people feel like we’re in a zero-risk environment,” adds Thurman. “It’s a big milestone, but I don’t think it’s the end of the journey – and I don’t think anybody from Meta would suggest that it is either.”

But there are advertisers who argue that the focus on “brand safety” misses the point of their concerns and boycotts, namely that harmful content should not be there in the first place. “What Facebook has done with this new tool is good because it protects the brand from being seen immediately around something that they don’t necessarily agree with or want to be seen next to,” argues Dino Myers-Lamptey, Founder of strategy agency The Barber Shop, and Co-Chair of The Conscious Advertising Network GSD Advisory Group. “However, what it doesn’t do, is it doesn’t stop the funding going to those sites or communities or areas that are publishing that bad content.”

And since the default setting still serves ads on all eligible content (i.e. content that adheres to Meta’s Community Standards and monetisation criteria), the onus remains on advertisers to opt out, rather than on Meta to take responsibility for the content it monetises and allows to be funded. “It seems to be a strange way round to do it,” comments Myers-Lamptey. “If you are really able to clearly define what is that un-brand-safe content, then why aren’t you taking it off?”

Wider contextual analysis highlights the limitations of a brand suitability tool: it protects brands, but not the people at risk from the kind of content brands don’t want to be associated with. Nonetheless, advertisers should be assured that Meta does not allow content that violates its policies. “Instead of brand safety, think more about world safety, people safety,” advises Myers-Lamptey. “If your funding can accidentally fund and allow those things to exist, then you’ve got to be concerned about stopping that from happening. And this tool doesn’t do that.”

