How Facebook helps predators find each other
A new report on CSAM shows how platform dynamics are bringing bad actors together, years after a similar scandal unfolded at YouTube
Programming note: To accommodate some news, the next edition of Platformer will arrive Wednesday morning, instead of at 5PM PT Tuesday as usual.
In 2019, YouTube realized that it had a problem. Parents who had uploaded seemingly innocuous footage of their children playing in swimsuits were surprised to find that some of the videos were getting hundreds of thousands of views.
It turned out that the company’s recommendation algorithms had unwittingly created a catalog of videos of young children in various states of undress and were serving them up to an audience of pedophiles. “YouTube never set out to serve users with sexual interests in children,” Max Fisher and Amanda Taub wrote in the New York Times, “but in the end… its automated system managed to keep them watching with recommendations that [one researcher] called ‘disturbingly on point.’”
I thought about YouTube’s predatory playlists over the weekend while reading about how Meta’s systems have been found to operate in a similar way. In a new investigation, the Wall Street Journal examined how automated systems on Facebook and Instagram continue to recommend content related to pedophilia and child sexual abuse material.
Here are Jeff Horwitz and Katherine Blunt:
The company has taken down hashtags related to pedophilia, but its systems sometimes recommend new ones with minor variations. Even when Meta is alerted to problem accounts and user groups, it has been spotty in removing them.
During the past five months, for Journal test accounts that viewed public Facebook groups containing disturbing discussions about children, Facebook’s algorithms recommended other groups with names such as “Little Girls,” “Beautiful Boys” and “Young Teens Only.” Users in those groups discuss children in a sexual context, post links to content purported to be about abuse and organize private chats, often via Meta’s own Messenger and WhatsApp platforms.
The Journal’s report follows an earlier investigation in June that documented how Instagram is used to connect buyers and sellers of CSAM. That report found that viewing even a single account in a criminal network was enough to get “suggested for you” recommendations for buyers and sellers on the service, and that “following just a handful of these recommendations was enough to flood a test account with content that sexualizes children.”
A follow-up report in September by the Stanford Internet Observatory, which aided in the Journal’s investigations, found that Meta had made some progress — but that obvious gaps in enforcement remain.
Let’s stipulate that CSAM is an industry-wide issue, and that Meta has invested more in building child-safety features than many other social networks. This year Stanford found equally disturbing CSAM issues at X, Telegram, and Mastodon, and across decentralized networks generally. That Meta is subject to more scrutiny on this subject is less a commentary on the company’s unique negligence than on its vast scale and the responsibilities that come with it.
Let’s also stipulate that this is what tech policy folks call an adversarial problem: a cat-and-mouse game in which predators continuously change tactics to evade each new enforcement measure the platforms roll out. Even when platforms manage to close off one vector for harm, motivated actors will work diligently to find new ones. It’s exhausting, essential work.
Even if you’re willing to grant Meta all that, though, the Journal’s report makes for some deeply worrisome reading.
I’ve written in the past about the difference between internet problems and platform problems: “Internet problems arise from the existence of a free and open network that connects most of the world; platform problems arise from features native to the platform.”
Internet problems reflect the world we live in and can’t be blamed on any one actor. Platform problems, though, are the responsibility of the people who design and oversee them.
Viewed through this lens, the fact that pedophiles create accounts on social networks and attempt to organize there is an internet problem. The fact that algorithmic recommendations work to connect these people and create a market for CSAM and other harms is a platform problem — Meta’s platform problem.
The Journal documents several instances of surprisingly large Facebook groups that appear to be devoted to promoting problematic content — and are themselves promoted by the company’s “groups you should join” recommendations. “In one public group celebrating incest,” the authors write, “200,000 users discussed topics such as whether a man’s niece was ‘ready’ at the age of 9, and they arranged to swap purported sex content featuring their own children. In another user group numbering 800,000, administrators shared images of schoolgirls as a way to promote a Spanish-language website with a name referring to women’s underwear.”
In another case, the reporters write, “Facebook’s ‘groups you should join’ feature has suggested topics such as kidnapping, ‘dating’ children as young as 11 and even chloroforming women.”
Meta’s enforcement systems have also been easily defeated by predators who lightly modify problematic hashtags after the company bans more straightforward versions. And in other cases — incredibly — the company’s recommendation systems appear to be working directly on behalf of the pedophiles:
On a Journal Instagram test account, Meta wouldn’t allow search results for the phrase “Child Links,” but the system suggested an alternative: “Child Pornography Links.” After Meta blocked that term following a query by the Journal, the system began recommending new phrases such as “child lingerie” and “childp links.”
The design problems are so pervasive — and the violating content so easily found by outsiders — that it is difficult to believe that the teams at Meta who are charged with policing this material are adequately staffed. Thousands of layoffs in the company’s “year of efficiency” did not spare content moderation teams. The Journal reports that “hundreds” of safety staffers have been cut, though Meta says most did not work primarily on child safety issues.
The first and most important reason to address these issues, of course, is to protect the victims and the children using Meta’s platforms. But there are also compelling business reasons for Meta to do a better job here, which is the main reason I find myself surprised that it isn’t.
In March, Utah became the first state to ban children under the age of 18 from using social media apps without parental permission. Arkansas followed with a similar law, though its implementation was blocked by a judge. Meanwhile, with the release of the surgeon general’s advisory about kids and social media this spring, it has now become bipartisan conventional wisdom that social media is not safe for children.
Against this backdrop, the last thing Meta needs is a series of regular, detailed reports about its apparently unmanageable child predator problem.
For its part, Meta responded with a 1,300-word blog post about all the steps it has taken since June, including setting up a child safety task force with as many as 100 employees and participating in Lantern, a coalition of tech companies that shares signals about predators across platforms so that offending accounts can be removed more quickly.
The company says it has removed hundreds of thousands of accounts that violated its child safety policies, along with thousands of groups and dozens of networks dedicated to abuse.
“Child exploitation is a horrific crime and online predators are determined criminals,” Meta told me in a statement. “We work hard to stay ahead. That’s why we hire specialists dedicated to online child safety, develop new technology that roots out predators, and we share what we learn with other companies and law enforcement. We are actively continuing to implement changes identified by the task force we set up earlier this year.”
But after reading the Journal and Stanford reports, it’s worth asking whether those task force-approved changes will be enough to address the problem. These stories offer more than the usual lists of isolated incidents — they reflect ongoing, systemic problems created in part by the company’s own machinery.
At the very least, Meta should be rethinking how it recommends groups and hashtags. It has already taken steps to prevent what it calls “potentially suspicious adults” from seeing each other’s comments or being suggested to one another as accounts to follow.
The truly tragic thing, given YouTube’s highly publicized experience described above, is that Meta could have taken these steps at any point in the past few years. The platform dynamics of this terrible abuse are very well known — the only question has been when the company would at long last get around to addressing them.
Talk about this edition with us in Discord: This link will get you in for the next week.
Governing
Epic v. Google: Judge James Donato said he would pursue an independent investigation into Google for intentionally destroying evidence by automatically deleting internal chat messages. (Sean Hollister / The Verge)
Google reached a $27 million settlement in a lawsuit by employees alleging unfair labor practices. (Reed Albergotti / Semafor)
Google is asking the UK’s Competition and Markets Authority to take action against Microsoft for its cloud computing dominance, saying that its business practices put competitors at a disadvantage. I wonder what it would say about the market for web search? (Martin Coulter / Reuters)
Thousands of fake social media accounts were created by someone in China in an attempt to spread polarizing US political content ahead of the elections. (David Klepper / Associated Press)
Meta is facing a $600 million lawsuit from a group of 83 Spanish media companies, which accuse it of unfair competition in the advertising market. (Inti Landauro / Reuters)
Meta received another request for information under the EU’s Digital Services Act about child safety on Instagram, including how it handles self-generated child sexual abuse material. (Natasha Lomas / TechCrunch)
Misinformation researcher Joan Donovan is accusing Harvard of ending her research project after complaints by former Meta executives linked to the school. (Joseph Menn / Washington Post)
Generative AI is reportedly causing conflict among European lawmakers negotiating the AI Act, with disagreements over how systems like ChatGPT should be regulated. (Supantha Mukherjee, Foo Yun Chee and Martin Coulter / Reuters)
French prime minister Élisabeth Borne signed a directive for all government employees to uninstall foreign messaging apps like Signal, WhatsApp and Telegram in favor of French app Olvid. (Bill Toulas / Bleeping Computer)
Albuquerque Public Schools employs web filters that are misguided and inappropriate, blocking students from accessing online resources about suicide prevention, race, ethnicity and LGBTQ topics, an investigation found. (Todd Feathers and Dhruv Mehrotra / WIRED)
Industry
Spotify is cutting about 1,500 jobs, about 17 percent of its workforce, in its third round of layoffs this year, citing slow economic growth and rising capital costs. Also it’s canceling the wonderful podcast Heavyweight, which is insane. (Manish Singh / TechCrunch)
ByteDance is reportedly working on a platform that will allow users to create their own AI chatbots — it will be launched in beta by the end of the month. (Coco Feng / South China Morning Post)
OpenAI reportedly agreed to spend $51 million on AI chips from startup Rain AI, a company in which CEO Sam Altman has personally invested more than $1 million. (Paresh Dave / WIRED)
Microsoft’s partnership with OpenAI propelled it ahead in the AI race. But while the company had ambitious plans to balance AI safety with innovation, its cautious approach was turned upside down by OpenAI CEO Sam Altman’s firing. (Charles Duhigg / The New Yorker)
OpenAI is delaying the launch of its recently announced GPT store until early next year. (Ina Fried / Axios)
Asking ChatGPT to repeat words forever, which can be used to get the model to leak its training data, is now considered a terms of service violation. (Jason Koebler / 404Media)
Brad Lightcap, OpenAI’s COO, discusses the unexpected success of ChatGPT, how Altman thinks, and where he expects the technology to advance over the next year. (Hayden Field / CNBC)
X is now turning towards small and medium-sized businesses for advertising revenue, following an exodus of big advertisers after Elon Musk’s antisemitic post. Good luck with that!! (Hannah Murphy and Daniel Thomas / Financial Times)
Meta executives say there are no commercial downsides to openly sharing its AI technology, as the models could potentially improve faster with more developers working on them. (Aisha Counts / Bloomberg)
Meta AI researchers announced Seamless Communication, a new suite of AI models developed to bridge language barriers and enable more natural and authentic communication. (Michael Nuñez / VentureBeat)
Threads won’t be getting chronological search results anytime soon, as Instagram chief Adam Mosseri says they would create a “substantial safety loophole.” (Jay Peters / The Verge)
Mark Zuckerberg sold Meta shares for the first time in two years, netting about $185 million after a 172 percent surge in the company’s stock price. (Benjamin Stupples / Bloomberg)
Meta is starting to disconnect cross-app chats between Facebook Messenger and Instagram. I assume this is related to encryption? (Kyle Bradshaw / 9to5Google)
AI-powered stars using the likeness and voices of celebrities are on the rise on platforms like Facebook, Instagram, and YouTube, despite the complexities around licensing and intellectual property. (Alex Weprin / The Hollywood Reporter)
YouTube is cracking down on ad blockers, but ad blocker developers are devising new strategies to counter the crackdown. (Anthony Ha / Engadget)
Telegram added new features to channels, including improved discovery of other channels, emoji customizations for reactions, and stats for stories. (Ivan Mehta / TechCrunch)
Bluesky is rolling out automated tools for content moderation designed to flag content that violates its community guidelines. (Sarah Perez / TechCrunch)
Kick, a live streaming platform that emerged as a Twitch competitor, cut lucrative deals with top gaming creators to lure their audiences onto the site. But criticisms of its ties to online gambling and its lack of content moderation remain. (Kellen Browning / The New York Times)
Business Insider promoted an article on a subreddit known for harassment, highlighting a dilemma lots of news outlets are facing — how to drive traffic as X continues to decline. (Taylor Lorenz / Washington Post)
Guillaume Verdon, a former Google engineer and founder of AI startup Extropic, is behind the @BasedBeffJezos account on X, which promotes “effective accelerationism” — advancing technology and capitalism at all costs. (Emily Baker-White / Forbes)
Those good posts
Talk to us
Send us tips, comments, questions, and potentially suspicious adults: casey@platformer.news and zoe@platformer.news.