Meta has come under fire for surfacing salacious content to adults who “might have a prurient interest in children”, according to the Wall Street Journal. Testing by the publication revealed that Instagram Reels recommends “risqué footage of children” to accounts that demonstrate such an interest.
The researchers set up test accounts on Instagram to exclusively follow young gymnasts, cheerleaders and preteen influencers. They found that the Reels algorithm served them sexualised child content and adult videos – alongside ads for major brands.
These included a Bumble ad next to a video of a young girl lifting her shirt, and a Pizza Hut ad following a video of a man lying in bed with a 10-year-old girl (according to the caption). Bumble, the dating app, has since announced it is suspending advertising on Meta.
The tech giant responded that the tests produced a manufactured experience that does not reflect what real users see. “Our systems are effective at reducing harmful content, and we’ve invested billions in safety, security and brand suitability solutions,” said Samantha Stetson, VP of Client Council and Industry Trade Relations at Meta.
But the research, as well as separate testing by the Canadian Centre for Child Protection, found the advertising included adult livestreams, “massage parlors” and cybersex chatbots – all of which are meant to be prohibited by Meta. The child-protection group also revealed that Instagram was regularly serving videos of children who appear in the database of the National Center for Missing & Exploited Children. These images are often used to advertise paedophilic content in dark-web forums, the charity said.
Sexualisation and monetisation
This is not the first time Meta’s algorithms have faced criticism; the company has been scrutinised most publicly for spreading election misinformation. But the latest findings suggest brands advertising on Meta could be funding more serious crimes, and cast further doubt over the company’s willingness to disengage from an algorithm trained to keep users – and therefore advertisers – engaged.
According to brands whose ads appeared in the testing, Meta has said it will investigate the situation and pay for external brand-safety audits. The tech firm also runs a task force for detecting users who behave suspiciously, which it says takes down tens of thousands of accounts each month.
Still, the failure to detect sexualised child material implies, at the very least, severe gaps in Meta’s content moderation systems. The report claims that before launching Reels, safety staff flagged the risk that the product would chain together videos of children and inappropriate content. Vaishnavi J., former Head of Youth Policy at Meta, said the company did not adopt their recommendations to ramp up content detection capabilities.
But Meta’s Stetson responded that the product had been safety-checked. “We tested Reels for nearly a year before releasing it widely, with a robust set of safety controls and measures,” she said.