Did YouTube solve its rabbit hole problem?
A new study on the power — and limits — of changing a recommendation algorithm
Today, let’s revisit the question of whether YouTube consumption can push viewers into adopting more extremist views. A new study finds that the company’s recommendation systems no longer routinely promote hateful content to most people — but the platform may still serve as an important cog in the broader ecosystem of extremism anyway.
The backlash against social media that followed the 2016 US presidential election focused attention on how the viral machinery of tech platforms could be hijacked by groups seeking to do harm. Investigations into YouTube at the time found that an all-out race to increase the amount of time spent watching videos on the site led far-right content to become one of the most-watched verticals on the platform, along with music and gaming.
Struck by the number of times YouTube recommended videos of Holocaust denial and white supremacist rants to her, the sociologist Zeynep Tufekci branded the platform “the great radicalizer” in 2018. Over and over again, she found that its recommendations tended to get more extreme with every click on a suggestion. “Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with,” she wrote, “or to incendiary content in general.”
The evidence was not merely anecdotal. In 2019, a group of researchers led by Manoel Horta Ribeiro found evidence that users were migrating from generic conservative content to extremism. (They cleverly analyzed users’ public comments to document how more centrist and extremist communities merged over time.)
Amid widespread criticism, that year YouTube announced it would stop recommending what it calls “borderline content” — videos that come right up to the line of violating the site’s policies without going over. It also began adding links to additional context on videos that promoted misinformation and conspiracy theories — declarations that the earth is flat, for example.
Two years later, the company introduced a new measurement of the quality of content moderation on the platform: the “violative view rate,” which estimates the share of views that go to videos later removed for violating the company’s policies. (The lower it is, the better a job YouTube is doing, according to its own standards.) According to the company, this rate has fallen from 0.7 percent in 2017 to 0.1 percent earlier this year.
Still, we’ve seen little data on the subject of extremism on YouTube lately that doesn’t come from YouTube itself. And that brings us to a new study published this week in the journal Science Advances. In it, researchers Annie Y. Chen, Brendan Nyhan, Jason Reifler, Ronald E. Robertson, and Christo Wilson examine the behavior of 1,181 US adults between July and December 2020.
Using a browser extension that recorded participants’ YouTube usage, the researchers analyzed traffic to videos that advance “alternative” and extremist ideologies. (The researchers define alternative channels as those that “discuss controversial topics through a lens that attempts to legitimize discredited views by casting them as marginalized viewpoints,” and cite videos from Steven Crowder, Tim Pool, Laura Loomer, and Candace Owens as examples. Are you surprised that the notorious xenophobe Loomer — who was once banned from Uber Eats for being too racist — has an active YouTube channel? So was I.)
Separately, the researchers had participants take a survey that, among other things, gauged their attitudes around race and gender. Paired together, the data helps us understand both who is watching alternative and extremist content on YouTube, and also how they found it.
Overall, researchers found little evidence of the rabbit hole phenomenon. But there are a good number of caveats to that observation, they write:
We rarely observe recommendations to alternative or extremist channel videos being shown to, or followed by, nonsubscribers. During our study period, only 3% of participants who were not already subscribed to alternative or extremist channels viewed a video from one of these channels based on a recommendation.
On one hand, this finding suggests that unsolicited exposure to potentially harmful content on YouTube in the post-2019 era is rare, in line with findings from prior work. On the other hand, even low levels of algorithmic amplification can have damaging consequences when extrapolated over YouTube’s vast user base and across time. Further, it may be the case that the susceptible population was already radicalized during YouTube’s pre-2019 era. Last, given the limitations of our study, our results must be interpreted as a lower bound on rabbit hole events, which suggests that YouTube may still need to do more to remove “borderline” content from recommendations.
The relatively small size of the effect here reminded me of the research on Facebook I wrote about here last month. Those papers documented cases in which researchers modified some aspect of the core Facebook experience — removing re-shared content, for example, or reverting the feed to reverse-chronological order — and found that these tweaks had little effect on users’ attitudes.
One view is that these studies, taken together, make a case that the post-2016 tech backlash was primarily a moral panic driven by fears about new technology. Another is that while the individual studies are rigorous enough, they fail to account for the more profound ways that social media has reshaped communication and politics in general. In this view, there is simply no variable you can isolate that explains the current relationship between platform dynamics and public opinion.
I imagine we’ll continue to debate that question for some time to come — aided, I hope, by lots more research. In the meantime, Chen and her co-authors find plenty of activity that YouTube ought to be worried about, even if the extremist rabbit holes have largely been plugged.
Most importantly, the researchers found that YouTube remains valuable to extremist networks because — so long as the material doesn’t violate the platform’s guidelines — it hosts content like Loomer’s for free, providing a library of material that can be shared and discussed on other platforms. They write:
Similar to prior work, we observe that viewers often reach these videos via external links (e.g., from other social media platforms). In addition, we find that viewers are often subscribers to the channels in question. These findings … highlight that YouTube remains a key hosting provider for alternative and extremist channels, helping them continue to profit from their audience and reinforcing concerns about lax content moderation on the platform.
YouTube has removed some of the extremist channels studied by researchers since 2020, including those of Stefan Molyneux and David Duke. (It also removed 77,000 videos and 995,000 comments for promotion of violence and violent extremism in the first quarter of this year.) But others remain, and in a climate of ongoing white supremacist violence, the company’s decisions about which channels to continue hosting bear ongoing scrutiny.
“We’ve heavily invested in our policies and systems to successfully combat extremism on YouTube,” a spokeswoman told me over email. “This includes overhauling our recommendation system to connect people to high-quality, authoritative news and information, and removing extremist content that violates our policies, including comments. We’re committed to combating extremism online and will continue to refine and improve our work.”
The good news here is that, on YouTube at least, extremism no longer appears to be benefiting from an out-of-control algorithm. The bad news is that, so long as alternative platforms exist — and YouTube remains willing to host fringe content — the extremists may no longer need the algorithm anyway.
Want to hang out with me in person? Applications are open for this year’s Code Conference, hosted by me, The Verge’s Nilay Patel, and CNBC’s Julia Boorstin. Join us for live, on-stage journalism with X/Twitter CEO Linda Yaccarino, GM CEO Mary Barra, Microsoft CTO Kevin Scott, and many more speakers to come. It’s all happening September 26th and 27th at The Ritz-Carlton, Laguna Niguel. Follow the latest here.
On the podcast this week: Kevin and I discuss recent revelations that a team of billionaires has planned to build a city as dense as Paris northwest of San Francisco. Then, we dive deep into AI note-taking. Finally, we play a round of HatGPT.
Apple | Spotify | Stitcher | Amazon | Google
Governing
Saudi Arabia sentenced a man to death for his posts on X and YouTube, the first known instance of the kingdom wielding the death penalty solely over online speech. Just an absolutely awful story. (Jon Gambrell / Associated Press)
OpenAI responded to a pair of class-action lawsuits from book authors by arguing that the inclusion of their works in data sets used to train ChatGPT does not violate copyright. This hugely consequential fight is now officially making its way through the courts — I wonder how long it’ll take to see a resolution. (Ashley Belanger / Ars Technica)
The U.S. Copyright Office opened a public comment period this week seeking people’s thoughts on generative AI and copyright issues, in particular whether AI-generated content can even be copyrighted in the first place. (Emilia David / The Verge)
The U.K. government’s leading security agency said it plans to empower police to expand their use of facial recognition, including real-time screening of the general public. The move puts the United Kingdom in stark opposition to the European Union, which is seeking a ban on facial recognition in public. (Anna Gross and Madhumita Murgia / Financial Times)
Australia decided against forcing adult websites to require age verification from users after determining that it posed too many privacy and security concerns. Some US states should take note. (Josh Taylor / The Guardian)
Microsoft said it would unbundle its Teams videoconferencing app from its broader Office software suite in a bid to avoid further regulatory scrutiny in the EU. The best thing to happen to Slack in years. (Samuel Stolton / Bloomberg)
The rise in natural disasters linked to climate change has led to an influx of conspiracy theories and other online misinformation designed to downplay intensifying environmental conditions. (Tiffany Hsu / The New York Times)
The Modi administration has become a key buyer of Israeli and homegrown surveillance tech, which is often installed at India’s subsea internet cable stations to spy on citizens. (Alexandra Heal, Anna Gross, Benjamin Parkin, Chris Cook and Mehul Srivastava / Financial Times)
X is facing 2,200 arbitration cases from ex-employees suing the firm over Elon Musk’s takeover and subsequent layoffs, with numerous claims related to unpaid severance. Just the filing fees for these cases amount to about $3.5 million, a Delaware court filing revealed. (Lora Kolodny / CNBC)
Scammers are using X’s paid verification system to target Chinese-language users, including political dissidents, with so-called sextortion schemes, leaving victims with little to no recourse from the company. But I thought paid verification was going to automatically stop all bots and scams? (Caiwei Chen / Rest of World)
China-linked hackers planted a fake Signal app on Google Play to spy on the encrypted messaging app’s communications. Researchers believe the operation may be linked to a previous surveillance operation that used Telegram to target Uyghurs. (Thomas Brewster / Forbes)
Bumble updated its community guidelines to crack down on bots, spam and doxxing, and it now considers ghosting to qualify as “bullying and abusive conduct” the company will take action against. I hope it’s ready to ban the entire gay male population of San Francisco. (Ivan Mehta / TechCrunch)
Industry
Meta released an open source data set called FACET designed to test computer vision models for bias by calculating the “fairness” of a model’s classifications of people, occupations and demographic info. (Kyle Wiggers / TechCrunch)
Meta updated its Facebook help center website to include a “Generative AI Data Subject Rights” form that lets users access, edit and delete personal data the company uses to train its AI models. (Jonathan Vanian / CNBC)
Google is rolling out its AI-powered Search Generative Experience in India and Japan, and the company said it’s seeing higher levels of satisfaction with the product from younger users. (Abner Li / 9to5Google)
X updated its privacy policy to say it will begin collecting more sensitive user data, including biometrics for “safety, security, and identification” and employment and education histories for its new job listings platform. (Aisha Counts / Bloomberg)
Musk said X will soon support audio and video calling without requiring a phone number, referring to the platform as a “global address book.” I mean, it used to be. (Charlotte Hughes-Morgan / Bloomberg)
X followed through on its promise to let X Premium users hide their likes, and the feature is now available to paid subscribers. (Ivan Mehta / TechCrunch)
An excerpt from Walter Isaacson’s upcoming Musk biography describes the chaotic final days leading up to his purchase of Twitter, in which Musk mulled over whether he could win in court to avoid purchasing the company. Isaacson documents an impressive number of Musk’s mood swings in a relatively short piece. (Walter Isaacson / WSJ)
The controversial rebrand from Twitter to X has spawned a host of web tools, including one plugin that replaces the entire site’s branding with the old blue bird and one that blocks X Premium subscribers by default. It is so much easier to just use Threads, I promise. (Lindsey Choo and Meghan Bobrowsky / WSJ)
Apple is discontinuing customer support on X, YouTube and its own Apple Support Community forum, and plans to transition these roles over to its phone-based support platform. (Joe Rossignol / MacRumors)
Meta is testing full-text search for Threads in Australia and New Zealand; right now, search on the app only covers usernames. (Ivan Mehta / TechCrunch)
Instagram is testing an extension of the Reels limit from three minutes to 10 minutes, which would put it on par with TikTok and make it more competitive with YouTube. (Aisha Malik / TechCrunch)
Google announced interoperability between its Chat platform and other workplace messaging apps, including Microsoft Teams and Slack. (Abner Li / 9to5Google)
Google said its annual Pixel hardware event would take place on October 4, with the company expected to unveil the Pixel 8 and Pixel Watch 2. (Abner Li / 9to5Google)
YouTube Music redesigned its “Now Playing” screen for the first time in three years and integrated comment sections from the main YouTube platform. (Abner Li / 9to5Google)
Those good posts
For more good posts every day, follow Casey’s Instagram stories.
Talk to us
Send us tips, comments, questions, and extreme YouTube videos: casey@platformer.news and zoe@platformer.news.
I agree that the right-wing recommendation rabbit hole has gone away, in my own experience. But I think Shorts has a bigger, unexamined problem: it is filled with manosphere guys. If you start using Shorts in incognito mode or on a new account, you will inevitably see some manosphere content within the first few minutes.