Meta’s Samantha Stetson Discusses Brand Safety Vs. Brand Suitability

Dan Meier, 29 January 2024

As we head into a major election year, the role and social impact of social media return to the spotlight. Civil rights advocates and democracy groups continue to monitor the spread of misinformation, while brands and advertisers seek assurances that their ads will not appear alongside, or fund, harmful content.

Last year, Meta introduced new brand suitability tools to grant advertisers controls over where their ads appear on Facebook and Instagram, starting with controls for Feed environments before expanding the tools to Reels in October. The roll-out was welcomed by advertisers seeking reassurances over their social media placements, but criticised by online safety advocates who argued the tools do not prevent ad spend from funding harmful content.

But Samantha Stetson, VP Client Council and Industry Trade Relations at Meta, argues that those criticisms stem from confusion over the terms ‘brand safety’ and ‘brand suitability’. “We’ve got to slow down and really make sure that the industry is very clear on what we mean by safety, versus what we mean by suitability,” she tells VideoWeek.

Drawing the line

Meta and the wider industry define brand safety as a consensus on the type of content that no advertiser wants to fund; a set of principles outlined by the Global Alliance for Responsible Media (GARM) in its Brand Safety Floor. The company says it does not allow this content on its platforms, and removes it through its content moderation practices. “Our community standards, which govern what’s allowed on our platforms, are actually above the floor in what’s defined by GARM,” says Stetson.

Brand suitability, on the other hand, is a “grey area”, according to Stetson, as the type of content deemed suitable to advertise around varies from brand to brand. “It isn’t necessarily that violative that it needs to be removed, but it could be considered sensitive to brands,” notes Stetson. Meta’s brand suitability controls, built around GARM’s adjacency standards and brand suitability framework, allow advertisers to select the level of risk they are comfortable with, divided into Expanded, Moderate or Limited inventory.

“This concept of safety and suitability gets conflated a lot in the industry right now,” comments Stetson. “I think a lot of people thought, ‘Oh my god this is awesome, Meta finally built controls for Feed, we’ve been waiting for this for so long. Now we’re going to have no more hate speech on the platform.’ And we were like, ‘Whoa no, hate speech doesn’t belong on the platform to begin with because it violates our community standards.'”

And advertisers using the tools have responded positively to having more control over ad adjacency, particularly since Meta extended the tools to the Reels format on Instagram and Facebook.

“The brand suitability controls are giving our teams more confidence to deliver plans and recommendations for clients that have particular concerns around brand safety and suitability,” says Jason Cotrina-Vasquez, SVP, Global Head of Social at Kinesso, an IPG Mediabrands agency. “The expansion of the controls for Reels across Facebook and Instagram has been key in supporting the adoption of the new format, especially as the type of content on Reels is much more entertainment focused and creative in nature, which can be seen as a suitability risk. But, it is very important for brands to work with the Reels format, given the shift in user behaviour.”

Reel talk

But recent developments have cast doubt over Meta’s ability to weed out harmful content from its brand-safe inventory on Reels. In November, the Wall Street Journal conducted experiments by setting up Instagram accounts that exclusively followed young gymnasts, cheerleaders and preteen influencers. The researchers found that Instagram served ads for major brands alongside sexualised child content. Two of the brands, Bumble and Match, suspended advertising on the platform as a result.

Meta countered that the Wall Street Journal’s findings were based on a “manufactured experience that does not represent what billions of people around the world see every single day when they use our products and services.” Stetson adds that Reels was tested for “nearly a year” before releasing the product with robust safety measures in place. “In 2023, we actioned over 4 million Reels per month across Facebook and Instagram globally for violating our policies,” says Stetson.

“We don’t want this kind of content on our platforms and brands don’t want their ads to appear next to it,” she states. “We continue to invest aggressively to stop it, and report every quarter on the prevalence of such content, which remains very low. Our systems are effective at reducing harmful content, and we’ve invested billions in safety, security and brand suitability solutions.”

Then there are wider harms that fall outside the GARM framework but are covered by Meta’s community standards. Social media has been linked to mental health issues and depression in young people, prompting the US surgeon general to issue a warning over its “profound risk of harm” to children and teenagers. In October, 33 US states filed a lawsuit against Meta for fuelling a “mental health crisis among American youth” through the addictive nature of its platforms.

And Meta has conducted its own studies into these risks, as reported by the Wall Street Journal in 2021. An internal presentation from March 2020 included the finding: “Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse.” The company confirmed the research but noted that it was not a representative sample size, and that other findings had been omitted from the report – for example, respondents experiencing loneliness and anxiety found that Instagram made them feel better about those issues.

“We want our platforms to be a healthy environment for teens and young adults and youth,” says Stetson. “We’ve been working really, really hard over the last several years, with several outside experts across the board, from eating disorders to child exploitation to mental health groups. There’s a huge team that just spends all the time taking the signals to work out what we can do to make this the best experience possible for teens.”

Breaking the habit

Stetson argues that the effects of social media on teenagers’ self-esteem are not unique to Meta’s platforms, and that if teenagers were not using Instagram, they would “fill the void” with another social media app. “This is an ecosystem, industry challenge, not a specific platform challenge, so how do we collectively address this?”

Meta’s moves to limit the risk to teenagers include the introduction of a ‘Take a Break’ feature, which alerts users when they have reached a daily time limit they have set themselves, alongside ‘nighttime nudges’ that prompt teen users to close the app once they have spent more than 10 minutes on Instagram late at night.

There are also Parental Supervision controls on Instagram that allow parents to see which accounts their children follow and are followed by. And this year the company has introduced measures to make content relating to suicide, self-harm and eating disorders harder to find, by hiding related results and directing users to expert resources for help.

However, the efficacy of these measures is still up for debate. Again, it is not necessarily the GARM-defined ‘harmful content’ that poses risks to mental health, but the act of comparing oneself to aspirational content, combined with the addictive nature of social media. Studies suggest that even brief exposure to social media can trigger what psychologists call ‘social comparison’, highlighting the shortcomings of ‘Take a Break’-style features – although efforts to limit time spent scrolling could at least be beneficial.

Meanwhile Meta has publicly called for federal legislation requiring app stores to obtain parental consent before children under 16 download apps. “We are working with our industry peers and lawmakers directly to advocate for this concept, and to ease the burden on parents,” Antigone Davis, VP Global Head of Safety at Meta, wrote in a blog post in November.

But while teen safety is a cross-industry challenge, Meta’s position could appear to shift responsibility onto app stores, rather than taking ownership of the content on its own apps.

The AI factor

While Meta is keen to avoid content that “crosses the line” when it comes to topics such as body image, Stetson argues that certain types of content can have a positive impact on teenagers with eating disorders; posts around recovery, for example, could encourage healthy patterns of behaviour. “Learning to distinguish those types of content and making sure that we’re able to separate them, that’s all the stuff that we’re working really hard on.”

But it’s not just people who have to learn to distinguish between types of content; AI is playing an increasing role in content moderation, which poses challenges around bias, privacy and the technology’s ability to make the kind of fine distinctions Stetson describes. On the other hand, AI could potentially be more effective than humans at identifying particular types of content, at least in terms of processing power. The company claims that AI has helped reduce hate speech on its platforms by 80 percent.

Using AI in this way has obvious economic benefits for Meta, whose CEO Mark Zuckerberg declared 2023 the “Year of Efficiency”, a drive that included staff layoffs and a reallocation of investment towards AI. The wider tech sector has also seen heavy job losses over the past year, and Meta has cut more than 20,000 jobs since November 2022.

Stetson notes that humans will continue to play a role in content moderation, but the nature of that role will shift as AI takes over repetitive tasks. “I think people think, ‘Oh my God, AI is going to get rid of all these people, and everything will be run by machines.’ And I think that couldn’t actually be further from the truth,” she says. “What we want the machines to do is do the jobs that are taking a lot of people’s time, that now can elevate and change where they can spend their brainpower and their time.”

She stresses the need to ensure the safety of AI models before they reach consumers. And given the ongoing impact of social media on misinformation, hate speech and mental health, it can feel as though society is still disentangling the last mess as a new one looms into view.

“The conversation’s happening earlier than it did for social media,” comments Stetson. “We’ve been working on AI for as long as I’ve been at the company, which is over 11 years. We’re very committed to doing everything in open sourced models and putting it out to researchers and academics, so that they can pressure test a lot of these things before we start to build them into the product.”


