The YouTube Nazi Panic was just another moral panic
We freaked out, but there was never any evidence to back it up.
Back in 2018, a friend visited me from out of town and wanted to show me some of the YouTube videos he and his friends had been watching. I quickly realized that he had gotten into alt-right content, and was now viewing it on my own YouTube account. Some of the videos, for example, featured Disney musical clips dubbed over with lyrics about how Trump would restore White power. Having read the media reports of YouTube’s algorithmic radicalization, I was immediately worried that my YouTube recommendations would from now on be full of Nazi propaganda.
It never happened. As far as I could tell, I was never recommended a single rightist video, or even any videos about politics of any kind. My algorithm remained full of the stuff I normally watch — music and how-to videos. At that point, I started to read stories about YouTube’s “radicalization machine” with a more skeptical eye. I began to entertain the notion that what I was seeing was not accurate reporting on a real and frightening phenomenon, but a moral panic.
I am no stranger to moral panics; I have lived through a number of them. In the 1980s, some Christian groups claimed that the tabletop role-playing game Dungeons & Dragons was satanic. Concern over violence in video games leading to real-world violence is longstanding, and has even led to Congressional hearings, despite scant evidence that any link exists. Of course, the history of moral panics over rock & roll music is long and storied.
The YouTube Nazi Panic bore some obvious similarities to the panics over D&D, video games, and rock & roll. The primary worry was that consumers of an innocent-seeming popular media product would be unwittingly funneled down a rabbit hole that would ultimately indoctrinate them with a dangerous ideology. And just as mass shootings seemed to provide a concrete, pressing reason to worry about violent video games, the increase in rightist street violence and hate-fueled terrorist attacks in the years after Trump’s election propelled the freakout over YouTube.
So with the knowledge that those earlier panics had turned out to have little evidence to back them up, I kept an eye on the research about YouTube and radicalization over the next few years. And sure enough, the narrative that had been pushed in the media — and which friends of mine were asserting to me as fact as recently as last year — turned out to have little or no empirical support. In fact, the better the evidence gets, the more strongly it suggests that the “rabbit hole” story was a case of panic-driven myth-making.
The YouTube “rabbit hole” hypothesis
First, let’s take a detailed look at the standard story of how YouTube’s algorithm is supposed to radicalize people. The idea, essentially, is that radicalization happens in three steps:
People see one or two rightist or rightist-adjacent videos, either by accident or out of curiosity.
The YouTube algorithm sees them watching these videos, and decides to recommend them more — and more extreme — rightist content.
By watching these videos, people become radicalized to have rightist beliefs.
In March 2018, the New York Times’ Zeynep Tufekci wrote:
It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalising instruments of the 21st century…What keeps people glued to YouTube? Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.
Though she admits that evidence of this is hard to come by, Tufekci cites the work of ex-Google engineer Guillaume Chaslot. Chaslot wrote a program that crawls the non-personalized recommendations YouTube’s algorithm serves up in response to searches for “Trump” or “Clinton”. Analysis of the results showed that the recommendations tended to be biased in favor of the candidate the user had searched for.
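To make that design concrete, here is a minimal Python sketch of what a Chaslot-style crawl looks like. The two fetcher functions are hypothetical stand-ins (his actual tool scraped YouTube’s logged-out pages); the point is just the shape of the analysis: start from a search, repeatedly follow the top recommendations, and count where the recommender keeps sending a fresh, logged-out user.

```python
from collections import Counter, deque

# Minimal sketch of a Chaslot-style crawl of non-personalized recommendations.
# get_search_results() and get_recommendations() are hypothetical stand-ins for
# whatever logged-out scraper or API client is used to fetch YouTube data.

def crawl_recommendations(query, get_search_results, get_recommendations,
                          depth=3, top_n=5):
    """Breadth-first walk of the recommendation graph starting from a search query.

    Returns a Counter of how many times each video shows up as a recommendation,
    i.e. which videos the recommender keeps steering a logged-out user toward.
    """
    seeds = get_search_results(query)[:top_n]
    reach_counts = Counter()
    frontier = deque((video_id, 0) for video_id in seeds)
    seen = set(seeds)

    while frontier:
        video_id, level = frontier.popleft()
        if level >= depth:
            continue
        for rec in get_recommendations(video_id)[:top_n]:
            reach_counts[rec] += 1  # one "recommendation impression"
            if rec not in seen:
                seen.add(rec)
                frontier.append((rec, level + 1))

    return reach_counts
```

Whether the most-reached videos count as “biased” or “extreme” then depends entirely on how you classify them, which is where much of the disagreement in this literature lives.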
But bias doesn’t automatically equal either extremism or radicalization. That’s where parts 2 and 3 of the “rabbit hole” story come in.
In December 2018, Kelly Weill of the Daily Beast told the stories of several young men who were supposedly radicalized by innocuous YouTube content that led them to ever more extreme videos and eventually converted them into far-right activists. Here’s a representative story:
For David Sherratt, like so many teenagers, far-right radicalization began with video game tutorials on YouTube. He was 15 years old and loosely liberal, mostly interested in “Call of Duty” clips. Then YouTube’s recommendations led him elsewhere…
During that four-year trip down the rabbit hole, the teenager made headlines for his involvement in the men’s rights movement…He made videos with a prominent YouTuber now beloved by the far right…
Matt, a former right-winger who asked to withhold his name, was personally trapped in such a filter bubble…he described watching a video of Bill Maher and Ben Affleck discussing Islam, and seeing recommended a more extreme video about Islam by Infowars employee and conspiracy theorist Paul Joseph Watson. That video led to the next video, and the next…
Andrew, who also asked to withhold his last name, is a former white supremacist who has since renounced the movement…When Andrew was 20, he said, he became sympathetic to white nationalism after ingesting the movement’s talking points on an unrelated forum…Gaming culture on YouTube turned him further down the far-right path.
Other articles told similar tales, with family members accusing YouTube of “brainwashing” loved ones.
But individual horror stories and accusations by angry family members are a hallmark of moral panics. For example, one of the activists in the 1980s panic about Dungeons & Dragons was Patricia Pulling, whose teenage son had played D&D and had also tragically taken his own life. Pulling claimed that D&D “uses demonology, witchcraft, voodoo, murder, rape, blasphemy, suicide, assassination, insanity, sex perversion, homosexuality, prostitution, satanic type rituals, gambling, barbarism, cannibalism, sadism, desecration, demon summoning, necromantics, divination and other teachings,” and launched a campaign to warn the world.
Of course, this was nonsense. But this is what you get when you use angry or bereaved family members as the chief authoritative sources of information about what happened to their loved ones. The cold, hard fact is that individual horror stories don’t isolate causality. If your son kills himself, that doesn’t mean it was because he played D&D. Similarly, if your son watches rightist YouTube videos and becomes a rightist, it might simply be that he was toying with right-wing ideas and watched YouTube videos because he was interested in learning more about those ideas.
In order to establish that the “rabbit hole” hypothesis is true, we would need evidence on three things:
First, we would need evidence that YouTube’s algorithm tends to lead people to extremist videos.
Next, we would need evidence that the people who watch these videos are often not the kind of people who were already interested in that sort of extremist content.
Finally, we would need evidence that watching extremist content radicalizes people.
Obviously, coming up with evidence for all three of these things is quite difficult. In other words, the YouTube “rabbit hole” story was always going to be primarily supported by anecdotes, prior beliefs, politics, and the opinions of insiders in the company. But in the years since the “rabbit hole” hypothesis became widespread, researchers have tried to find evidence on these three things. And in general, they haven’t found much.
What the research says
Research on YouTube radicalization began appearing about a year after the above allegations went mainstream — a pretty rapid turnaround time, as these research literatures go. One of the earliest papers was Munger & Phillips (2019). They attempt to classify far-right videos, and find that views of these videos peaked in 2017. This immediately casts doubt on the importance of the “rabbit hole” hypothesis — if YouTube is an effective engine of far-right radicalization, then the attention given to far-right videos should grow over time instead of fall, as the algorithm sucks more and more people into the far-right orbit. Munger & Phillips suggest that the real story is falling demand for far-right politics after 2017 (which seems consistent with Trump’s falling popularity after Charlottesville).
A second paper, widely viewed but also widely criticized, was Ledwich & Zaitsev (2019). The authors look at the videos YouTube recommends on the pages for various channels. They find that recommended videos tend to be less politically extreme than the videos on the channels where the recommendations appear.
This result was criticized by many, including Princeton’s Arvind Narayanan. Critics pointed out that channel recommendations are different than the recommendations a YouTube user will receive when logged in to their profiles and watching videos. So Ledwich & Zaitsev don’t really test the exact “rabbit hole” hypothesis that media folks talk about. This is a fair criticism, but it should also be noted that Guillaume Chaslot’s own research, which prompted much of the YouTube radicalization panic, also did not use personalized, logged-in profiles to analyze recommendations. This criticism was notably absent among the people trumpeting Chaslot’s work.
(Note: Narayanan also criticizes Ledwich for political bias. Ledwich definitely does seem to be biased against the journalistic outlets that report stories of YouTube radicalization. But Narayanan’s own tweets demonstrate an equally strong bias in the opposite direction, in favor of those outlets, whose anecdotes he accepts as the best available evidence given the inherent difficulty of studying radicalization systematically. So there’s a lot of bias going around here.)
An even more important paper was Hosseinmardi et al (2021). Unlike other papers, this one looks at individual internet users’ browsing histories. This is an incredibly powerful dataset. It allows the authors to look at the sequence of what videos people actually watch over time, and how they navigated to those videos (e.g., whether they clicked there from another video). If the standard story for “rabbit hole” radicalization is true, then YouTube users will probably tend to click directly from one right-wing video to another, instead of saving the videos for later. But just in case users do save recommended videos for later, the authors can also check which videos people watch nearer to the end of the browsing session, when the algorithm will have had more time to give them a bunch of recommendations.
Having access to browser histories also allows Hosseinmardi et al. to correlate the YouTube videos each person watches with the other content they consume elsewhere on the internet. This allows the authors to investigate the main alternative to the “rabbit hole” hypothesis — namely, that people who watch extremist YouTube videos get radicalized elsewhere.
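As a rough illustration of the kind of checks this data enables (not the paper’s actual pipeline), here is a toy Python sketch: split each user’s browsing history into sessions, then ask how their far-right views were arrived at. The field names and the 30-minute session cutoff are my own assumptions for illustration.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PageView:
    timestamp: float   # seconds since epoch
    category: str      # e.g. "far_right", "mainstream_news", "non_political"
    referrer: str      # e.g. "youtube_recommendation", "external_link", "search"

def split_sessions(views: List[PageView], gap_seconds: float = 30 * 60) -> List[List[PageView]]:
    """Group one user's views into sessions separated by >30 minutes of inactivity."""
    views = sorted(views, key=lambda v: v.timestamp)
    sessions: List[List[PageView]] = []
    current: List[PageView] = []
    for v in views:
        if current and v.timestamp - current[-1].timestamp > gap_seconds:
            sessions.append(current)
            current = []
        current.append(v)
    if current:
        sessions.append(current)
    return sessions

def far_right_arrival_shares(views: List[PageView]) -> Dict[str, float]:
    """What fraction of far-right views were reached via on-platform recommendations
    versus external links or search? A rabbit-hole story predicts the former dominates."""
    far_right = [v for v in views if v.category == "far_right"]
    if not far_right:
        return {}
    counts = Counter(v.referrer for v in far_right)
    return {ref: n / len(far_right) for ref, n in counts.items()}
```

The “does content get more extreme toward the end of a session?” check is the same idea, applied to where each view falls inside the sessions returned by split_sessions().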
In short, Hosseinmardi et al. find no evidence of the “rabbit hole” story:
The pathways by which users reach far-right videos are diverse, and only a fraction can plausibly be attributed to platform recommendations. Within sessions of consecutive video viewership, we find no trend toward more extreme content, either left or right, indicating that consumption of this content is determined more by user preferences than by recommendation…Consumers of anti-woke, right, and far-right content also consume a meaningful amount of far-right content elsewhere online, indicating that, rather than the platform (either the recommendation engine or consumption of anti-woke content) pushing them toward far-right content, it is a complement to their larger news diet.
These results indicate little evidence for the popular claim that YouTube drives users to consume more radical political content, either left or right. Instead, we find strong evidence that, while somewhat unique with its growing and dedicated anti-woke channels, YouTube should otherwise be viewed as part of a larger information ecosystem in which conspiracy theories, misinformation, and hyperpartisan content are widely available, easily discovered, and actively sought out.
This study is very powerful evidence, owing to its huge and very rich data set and to its careful and thorough analysis. After it appeared, I began to see cautious reports in the media that perhaps the YouTube “rabbit hole” radicalization story wasn’t so clear. Zeynep Tufekci quickly dismissed the study because it didn’t have data on recommendations from YouTube, but — as one of the authors pointed out — she ignored the authors’ finding that few people arrived at right-wing videos by clicking through from other videos of any kind.
Now we have a new and very important paper: Chen et al. (2022). Remember that the biggest criticism of the previous studies was that they didn’t have data on the exact videos that YouTube recommends to users. Chen et al. get around this by recruiting over 1100 people and having them install a browser extension that directly observes what recommendations they get! And remember, these authors can also see what people end up clicking on. They also use surveys to get an idea of the political leanings of the people watching the videos, to see whether non-rightist people end up getting directed to rightist content.
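To give a sense of what this kind of extension data lets you tally, here is a rough Python sketch. The event format and field names are invented for illustration; the actual Chen et al. pipeline is considerably more sophisticated.

```python
# Sketch of the kind of tally browser-extension data supports: how often are
# extremist-channel recommendations shown to, and followed by, non-subscribers?
# The event structure (dicts with these keys) is invented for illustration.

def recommendation_exposure(events, extremist_channels, subscriptions):
    """events: list of dicts like
         {"user": "u1", "type": "recommendation_shown" | "recommendation_clicked",
          "channel": "some_channel"}
       subscriptions: dict mapping user -> set of channels they subscribe to."""
    shown_to_nonsubs = clicked_by_nonsubs = shown_total = 0
    for e in events:
        if e["channel"] not in extremist_channels:
            continue
        is_sub = e["channel"] in subscriptions.get(e["user"], set())
        if e["type"] == "recommendation_shown":
            shown_total += 1
            if not is_sub:
                shown_to_nonsubs += 1
        elif e["type"] == "recommendation_clicked" and not is_sub:
            clicked_by_nonsubs += 1
    return {
        "extremist_recs_shown": shown_total,
        "share_shown_to_nonsubscribers": shown_to_nonsubs / shown_total if shown_total else 0.0,
        "clicks_by_nonsubscribers": clicked_by_nonsubs,
    }
```

The last two numbers are the crux: the rabbit-hole story is specifically about extremist recommendations reaching, and being followed by, people who aren’t already fans of those channels.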
Basically, they find strong evidence against the “rabbit hole” hypothesis:
Though almost all [study] participants use YouTube, videos from alternative and extremist channels are overwhelmingly watched by a small minority of people with high levels of gender and racial resentment. Even within this group, total viewership is concentrated among a few superconsumers who watch YouTube at high volumes. Viewers often reach these videos via external links and/or are subscribers to the channels in question. By contrast, we rarely observe recommendations to alternative or extremist channel videos being shown to, or followed by, non-subscribers.
We thus find little support in post-2019 data for prevailing narratives that YouTube’s algorithmic recommendations send unsuspecting members of the public down “rabbit holes” of extremism.
So this should conclusively kill the “rabbit hole” story. Except for one little detail — that “post-2019” qualifier in the last paragraph. It turns out that YouTube actually changed its algorithm substantially in 2019, in response to the media outcry! Chen et al. describe the impact of these changes:
YouTube announced changes in 2019 to “reduce the spread of content that comes close to—but doesn’t quite cross the line of—violating our Community Guidelines”...It subsequently claimed that these interventions resulted in a 50% drop in watch time from recommendations for “borderline content and harmful misinformation” and a 70% decline in watch time from non-subscribed recommendations…
YouTube’s 2019 changes do appear to have affected the propagation of some of the worst content on the platform, reducing both recommendations to conspiratorial content on the platform and sharing of YouTube conspiracy videos on Twitter and Reddit[.]
So it’s still possible that YouTube’s recommendation engine used to radicalize people before 2019, and now doesn’t do so anymore. Note that Hosseinmardi et al. (2021) also use data from after the algorithm change.
Of course, this story only works if you believe that the results of Munger & Phillips (2019) and Ledwich & Zaitsev (2019) aren’t informative. Both of those papers use data from before the algorithm change. So to believe that the YouTube “rabbit hole” story held true before 2019, you have to believe two things:
The studies using data from before 2019, which find no evidence of radicalization, are useless because of bad methods, AND
The studies using data from after 2019, which also find no evidence of radicalization, are only getting that result because YouTube changed its algorithm and cleaned up its act.
It’s possible to believe this, of course, but it doesn’t really seem like the likeliest story.
And in the meantime, the moral panic is still ongoing. Becca Lewis, who wrote a report in 2018 that attempted to map out the far right on YouTube, wrote a Guardian op-ed in December 2020 — more than a year after the algorithm change — alleging that YouTube was still radicalizing people, and blaming it for an Islamophobic terrorist attack in New Zealand. That assessment was shared by the government of New Zealand itself (though of course they didn’t back that up with research and may have simply been looking to place the blame on foreign media outlets instead of examining the roots of Islamophobia in their own society).
Meanwhile, some researchers in the field continue to insist that there is evidence for YouTube radicalization in the here and now. For example, when I asked on Twitter what evidence exists for the “rabbit hole” theory, I got a response from Kate Starbird, a computer scientist at the University of Washington.
Starbird’s “evidence” consisted of:
A paper that’s about misinformation, not radicalization (two very different things!)
Theories about human behavior from sociology and psychology that may or may not apply in this case
Anecdotes from journalists
In other words, some researchers at prestigious institutions still have very strong priors that the YouTube “rabbit hole” theory is true, and they explicitly state that those priors come from the anecdotal reports of journalists.
Imagine researchers saying the same about violent video games in the 2000s, Dungeons & Dragons in the 1980s, or rock music in the 1970s! Practically any moral panic can be maintained if smart people believe strongly that journalistic anecdote represents the true state of the world. Instead, I think rational scientific inquiry demands that researchers not do that. Moral panics are too easy to start, and if scientists don’t take a hard-nosed look at whether the panics are well-founded, no one will.
Why is this important?
Now, you might be asking me why this is an important issue. After all, the moral panics against rock music, D&D, and video games didn’t get rid of rock music, D&D, or video games. YouTube seems to be in no danger of being destroyed, or even severely impacted, by this moral panic (its real threat is competition from TikTok). And YouTube changed its algorithm in 2019 to de-emphasize extremist content, so all’s well that ends well. Right?
Well, perhaps my pushback against this moral panic is a case of overly stubborn adherence to the supremacy of empirical evidence over politically driven narrative. But I do think that if we allow “YouTube radicalized a generation of American youth into Nazis” to become a historical “fact” in our collective memory, we are doing our body politic a disservice.
The other day Barack Obama gave a big speech about how social media disinformation is killing Americans, arguing that this means that social media needs tighter regulation by the government. Now, I don’t necessarily disagree with that — the antivax movement clearly led to many preventable and tragic deaths, and I also support the idea of a new Fairness Doctrine that would reduce the impact of both rightist and leftist extremism.
But I think we have to be very very careful about how we do this. Government regulation of the media is one of the most dangerous things any society can do — that’s why press freedom is one of the basic rights enshrined in our First Amendment. Politicians who propose to have government regulate speech do so at their own peril, even if a wave of sentiment allows them to succeed for a while.
And there’s another concern here. If we allow ourselves to base government regulation of the media on moral panics like the YouTube Nazi Panic, then we will be picking the wrong targets, because our targets will be determined by the fads, fashions, and grudges of the journalistic class. That will do society a disservice, fixating on fake threats while real ones go ignored.
(As a possible example, I’ll add my own journalistic anecdote to the pile. The friend who showed me the alt-right videos did end up being radicalized — not as a Nazi, but as a far-leftist tankie. And it wasn’t YouTube where he became a radical tankie, but Twitter — after finding a group of tankie friends who were willing to listen to him and promote his tweets. I feel like maybe, given all the recent evidence, researchers studying online radicalization should start focusing less on YouTube and more on Twitter…)
As someone who was deep into YouTube in the mid-2010s, I can say there was a huge wave of anti-SJW, post-Gamergate content at the time. It was definitely a thing, and it exposed me and many other young people to a sort of proto-alt-right viewpoint. But that only lasted a few years. Between algorithm changes and hype-cycle shifts, it hasn’t been that way for a long time now. That stuff is still out there, but as the research you posted suggests, it doesn’t have the same viral potential it did in the past.
I think people pretty dramatically underrate how good the YouTube recommendation algorithm is. You'd have to watch a lot of radical content relative to other content for a day or so to even START getting recommendations, and even then you might get one video per refresh. The simple reality is that in order to get a bunch of radical videos recommended you have to 1) seek out those videos deliberately or 2) basically never use YouTube and wander into them.
The idea that you'll get recommended radical videos for simply watching innocuous videos on channels that have radical videos is similarly wild. The YouTube algorithm is really good! When I watch compilations of Seinfeld clips it knows EXACTLY what I am looking for and not only does not recommend any other content a clip-making channel may make it doesn't even recommend Seinfeld content that is not clips of the show! And the algorithm gets better the more you use YouTube. I would say at any given time that 90% of the videos in my recommended are things I would enjoy? Maybe more?