The Shouting Class 2: Last Refuge of Scoundrels
New evidence to support my theory of social media outrage
I’ve been writing a series of posts to try to articulate the ways that I think social media, and Twitter in particular, has harmed political discourse in America. But the first post in the series remains the most well-read — and, I think, the most original. It was called “The Shouting Class”, and it was basically about selection effects — how Twitter selects for and promotes the types of people who are inclined to spread political discord and outrage.
At the time, that was just a theory based on personal observation. There are lots of other ways that social media could create discord and outrage, besides selection effects. These include:
Misreading. Tweets are a highly compressed medium — just a few lines of text, with no real human emotion or space for explanation and nuance — and it’s easy to misinterpret people’s statements in a more negative or aggressive light than they intended.
Pseudonymity. Behind the invincible wall of a pseudonym, people are free to take out all the accumulated frustration, bitterness, resentment, and other bad feelings that they accumulate from society’s many slights, frictions, and disappointments — like cursing at people from inside your car.
Crowd psychology. Psychologists have long theorized that people act differently (and often more aggressively and irrationally) in mobs than in individual interactions. And social media is sort of one big free-floating never-ending mob, plus lots of little ad-hoc mobs.
In any case, the research results are now rolling in, and we’re starting to get an idea of how important these various factors are. Two new papers provide general support for my theory that it’s the setup of the platforms themselves that amplifies the worst people in our discourse. But they point to a more nuanced, complex process than what I initially described.
The Shouting Class: Now, with empirical support!
The first paper, which recently got a writeup in Gizmodo, is “The Psychology of Online Political Hostility: A Comprehensive, Cross-National Test of the Mismatch Hypothesis”, by Alexander Bor and Michael Bang Petersen. Here’s the abstract:
Why are online discussions about politics more hostile than offline discussions? A popular answer argues that human psychology is tailored for face-to-face interaction and people’s behavior therefore changes for the worse in impersonal online discussions. We provide a theoretical formalization and empirical test of this explanation: the mismatch hypothesis. We argue that mismatches between human psychology and novel features of online environments could (a) change people’s behavior, (b) create adverse selection effects and (c) bias people’s perceptions. Across eight studies, leveraging cross-national surveys and behavioral experiments (total N=8,434), we test the mismatch hypothesis but only find evidence for limited selection effects. Instead, hostile political discussions are the result of status-driven individuals who are drawn to politics and are equally hostile both online and offline. Finally, we offer initial evidence that online discussions feel more hostile, in part, because the behavior of such individuals is more visible than offline.
The authors’ set of hypotheses doesn’t exactly conform to the breakdown I listed above. They define “selection effects” as the idea that more contentious, aggressive people selectively engage in social media politics discussions. In fact, they do find a bit of this — non-hostile people tend not to get involved in political discussions — but the effect is true both online and offline.
The paper’s main finding is that the people who are hostile online are also hostile in offline discussions. They measure this first with a bunch of surveys — basically asking people questions about how hostile they are in various environments, how hostile they perceive other people to be, and so on. To start, they confirm that people generally find online discussions to be more hostile than offline ones:
They also find that people report seeing strangers get attacked online much more than offline. So the phenomenon we’re trying to explain here is really real; online political discussions really are awful.
But when the authors ask people various forms of “are you an asshole in political discussions”, they find that the same people who report hostile behaviors online tend to report them offline as well. In other words, jerks are jerks wherever they are. And the authors correlate this self-reported jerkiness with self-reported status-seeking behavior — in other words, they think that the Shouting Class is motivated by self-aggrandizement.
Of course, this could be an artifact of survey methodology — some people could just consistently self-report more hostile behavior regardless of the context. So the authors also did some experiments. Most importantly, they actually got on Facebook and talked about immigration (the most divisive issue they could think of, sigh) with some of the people who took the surveys. The people who reported themselves as being more hostile in online discussions actually were more hostile in online discussions! Score one for survey methodology.
Anyway, the authors do a bunch of tests — surveys and experiments — to see whether non-hostile messages are misread as hostile when they’re online. They aren’t. In other words, they reject what I call the “misreading” hypothesis above.
And the finding that it’s just the same people being hostile on- and offline rejects the “pseudonymity” hypothesis too. It doesn’t quite reject the “crowd psychology” hypothesis, because people who are normally nice might perceive themselves as being nice even once they join an angry mob. And as we’ll see with the second paper, crowd psychology might well be playing a part in the question of why jerks get more attention online than offline.
But in any case, the authors interpret their findings as support for the idea that I put forth in “The Shouting Class” — that the nature of online discussion spaces gives more attention to hostile individuals than the nature of offline discussion spaces. They put it thus:
But what can explain the hostility gap, if “people are people” no matter where they discuss? If anti-social personality is the main source of online (and offline) hostility, the hostility gap is likely to be an artefact of more mechanical effects of online environments’ connectivity. Online environments are unique in creating large, public forums, where hostile messages may reach thousands including many strangers, could stay accessible perennially and may be promoted by algorithms tuned to generate interactions…The hostility gap may thus emerge as a direct consequence of the larger reach of those already motivated to be hostile.
In other words, society has always had about the same number of shouty jerks. But with the rise of social media, we have moved our society’s political discussions from spaces in which the shouty jerks were at least somewhat marginalized and contained to spaces that preferentially amplify their voices.
The anger of crowds
The second paper is “How social learning amplifies moral outrage expression in online social networks”, by William J. Brady, Killian McLoughlin, Tuan Doan, and Molly J. Crockett. Here’s the abstract:
Moral outrage shapes fundamental aspects of social life and is now widespread in online social networks. Here, we show how social learning processes amplify online moral outrage expressions over time. In two preregistered observational studies on Twitter (7331 users and 12.7 million total tweets) and two preregistered behavioral experiments (N = 240), we find that positive social feedback for outrage expressions increases the likelihood of future outrage expressions, consistent with principles of reinforcement learning. In addition, users conform their outrage expressions to the expressive norms of their social networks, suggesting norm learning also guides online outrage expressions. Norm learning overshadows reinforcement learning when normative information is readily observable: in ideologically extreme networks, where outrage expression is more common, users are less sensitive to social feedback when deciding whether to express outrage. Our findings highlight how platform design interacts with human learning mechanisms to affect moral discourse in digital public spaces.
Now, importantly, this is measuring something different than the first paper — moral outrage, rather than hostility! So it’s possible — even likely — that these two papers aren’t actually explaining the same things. The main result of this paper is that social networks encourage people to act angrier online. But that doesn’t necessarily encourage them to be meaner.
But in any case, this paper is another indication that the structure of social media shapes the kind of discussions that happen there.
First, they look at people’s Twitter histories, and classify their tweets as outraged or non-outraged. They find that when people got more likes for outraged tweets, they tended to write more outraged tweets the next day. But when they got more likes for non-outraged tweets, they tended to write more non-outraged tweets the next day. People respond to incentives and modify their behavior accordingly!
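The feedback loop described above can be sketched as a toy reinforcement-learning simulation. This is purely illustrative — the update rule, parameter values, and reward numbers below are my own inventions, not the model from the Brady et al. paper; it just shows how a user whose outraged posts earn more likes will, under a simple "repeat what gets rewarded" rule, drift toward posting more outrage:

```python
import random

def simulate_user(days=500, p_outrage=0.3, lr=0.002,
                  likes_outrage=8.0, likes_neutral=2.0, seed=0):
    """Toy sketch: a user's propensity to post outraged tweets drifts
    toward whichever style earns more likes. All parameters and the
    update rule are illustrative assumptions, not taken from the paper."""
    rng = random.Random(seed)
    for _ in range(days):
        outraged = rng.random() < p_outrage
        # Outraged posts earn more likes on average (the paper's key finding).
        mean = likes_outrage if outraged else likes_neutral
        likes = max(0.0, rng.gauss(mean, 1.0))
        # Positive feedback nudges future behavior toward the rewarded style.
        if outraged:
            p_outrage += lr * likes
        else:
            p_outrage -= lr * likes
        p_outrage = min(max(p_outrage, 0.0), 1.0)
    return p_outrage

final = simulate_user()
print(final)  # ends higher than the starting propensity of 0.3
```

Because outraged posts pay out more likes, the nudges upward outweigh the nudges downward, and the user's outrage propensity ratchets up over time — a crude version of the reinforcement dynamic the authors document.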
They also find that politically extreme people tend to express more outrage (surprise, surprise!), and that people whose Twitter networks are more extreme tend to get rewarded more for outrage, so when you start hanging around online with shouters you tend to become a shouter too. Little surprise there — this is just classic social group conformity.
Also, importantly, they find — as many others have found — that outrage tends to get more likes on Twitter. They confirm this with an experiment — when they send subjects onto Twitter and tell them to try to maximize their likes, they find that outraged tweets get more rewarded than non-outraged ones.
All this suggests that over time, Twitter is teaching everyone to express more outrage, and is helping to turn human society more outraged in general. There are many explanations for the recent period of unrest in America (and the world), but “It’s just Twitter (and to a lesser extent Facebook)” has to go on the list here.
Last refuge of scoundrels
Now, it’s worth asking whether the second paper conflicts with the first here. The first paper showed that status-seeking hostile political jerks are the same people online and offline, while the second paper showed that Twitter teaches people to act more outraged over time. One possibility here is that hostility is simply a very different thing than outrage — the people who go online and start fights and troll and harass are simply engaged in a different social process from the people who denounce stuff. In other words, there might actually be a Bullying Class and a Shouting Class, and the latter is more capable than the former of converting new recruits.
But I think it’s also possible to see these two behaviors — hostility and outrage — as part of a single, unified process. And it’s what the British essayist Samuel Johnson meant in 1775 when he declared that “patriotism is the last refuge of a scoundrel.”
Johnson was actually talking about insincere expressions of patriotism; what he meant was that bad people can draw attention away from their bad character, and perhaps win acclaim and power for themselves, by professing faux-patriotism and thus rallying a whole gang of (true and false) patriots to their side. I think we can safely extend this idea from national patriotism to any sort of group identity where people strongly fight for their group — what American pundits now call “tribalism” (even though it’s not the correct anthropological definition of that word), and what George Orwell called “nationalism”.
Suppose there are some jerks — the hostile status-seekers of the Bor and Petersen paper — who want to both distract the world from their own jerkiness and win acclaim for themselves. A natural way for them to do this is to do exactly what Samuel Johnson described — find a tribe and denounce the enemies of that tribe. In normal offline life, where people are distributed more or less randomly, they find this hard to do. At their work or in other offline social situations, they find themselves surrounded by people who aren’t of just one tribe. If you denounce whole groups of people at work, you’re liable to get fired; at a bar, you might get punched in the face.
But then when these hostile, self-aggrandizing jerks get online, they can sort into a particular tribe, and present themselves as the champions of that tribe. Because online spaces are amazing at sorting — you can always “find your people”, even if what you’re actually trying to find are followers. The jerks each find a tribe and express outrage on behalf of that tribe, against the tribe’s enemies. And the people of that tribe come to see these mini-demagogues as their champions against their enemies or potential enemies (and of course, the jerks will tell them that everyone else is their enemy). By expressing outrage, they win acclaim.
And then, by the process described in the Brady et al. paper, the normal, non-hostile people start expressing outrage too. They learn from the mini-demagogues, and imitate the demagogues’ performative anger, especially against outgroups. In pursuit of personal glory, bad people turn neighbor against neighbor. Eventually all of online society — and whatever pieces of offline society are affected by the process — is divided into bitter, warring camps, and the mini-demagogues are the only people who win.
Of course, this process is hardly original or unique to the online world. I’m sure you can think of historical examples where this happened, and of the generally bloody results. But the ability of online spaces to sort people into like-minded gangs, to amplify outrage, and to focus attention on hostile individuals may have accelerated this process until it has overwhelmed the hard-won defense mechanisms our societies had developed against this sort of chaos.
So if this is what’s going on, what do we do to get out of it? That’s not a question I can easily answer. But the people who run social media companies should think about tweaking their algorithms to short-circuit this process of escalating outrage and the triumph of mini-demagogues. Armed with increasing understanding of how this process works and what its consequences are, they can hopefully design a fix.
If not, we may have to wait for society itself to evolve and find a way to sideline and isolate online mini-demagogues once more. And that road could be quite long and arduous, and perhaps even painted in blood.
Facebook researchers know about this whole phenomenon and have actively tweaked their algorithms to increase the visibility and reach of angry content. They do this, surprise surprise, to increase profits.
Neither Twitter nor Facebook will willingly undermine engagement on its platform. They are optimizing for profitability, irrespective of the impact on society. Only regulation, or consumers abandoning their platforms, will cause them to change. The broader media is also to blame here for picking up Twitter comments and presenting them as news, further increasing their reach.