I just keep coming back to thinking about what the internet was like when social media wasn't called "social media." The more traditional style of internet forums, newsgroups, IRC chats, etc. had their own share of extremists, but it seems like the fragmentary nature of stuff back then slowed or sometimes stopped extremist views from going viral. Different communities had their own varying levels of moderation and the extremists knew they had to behave in normal communities. Normal people just stayed away from the extremist communities.
Today normal people who would never have sought out extremist views can now be radicalized and taken in because they share the same mass public forum with the extremists.
I don't know a good solution, but maybe fragmenting the social media landscape isn't a bad idea.
The nam-shub of Enki...
I feel like FD2 has a problem in that the scale of social media is too huge. For the original FD, there could only be a small number of people on air at one time, say four republicans vs one dem on the topic of gun control. Even though the dem is outnumbered, they still occupy 1/5 of the conversation, which is at least noticeable. With FD2, twitter could potentially ban 1k conservatives and balance that with 1 tankie, which isn’t much different than what we have now (although I would love to see the memes venerating the one chosen sacrificial leftie brought to the altar).
The easy solution to this is to impose a ratio, but that introduces a whole new host of problems. If there were another 1/6 and twitter had to suddenly ban 10k righty accounts, would they be forced to ban 1k innocent lefty accounts? That wouldn’t go over well. Or Twitter could systematically ban hundreds of lefty accounts each month to ensure that it all evens out in the case of another righty coup, but I feel like at that point FD2 becomes more about appeasement than parity.
And what is left and right anyways? It’s pretty easy to tell at this point in history with MAGA hashtags and hammer n sickle emojis and whatnot, but who knows what political transformations the world will go through in the next 10 plus years. Interesting to think how right/left would be defined and how we could write legislation to compensate for status quo shifts.
Putting aside policy issues for a sec, one big obstacle to Fairness Doctrine is that it's probably unconstitutional. There was a lower court decision to that effect before it was withdrawn (republican regulators agreed with the decision). There are various arguments on both sides of this issue (haha), but given the intervening SCOTUS precedent and the composition of the other federal courts now, anything resembling the prior doctrine would be unlikely to survive a First Amendment challenge.
Yep, this FD wouldn't be like the previous one, though, so there might be a legal opening for it.
I'd be really interested to see how that could be crafted! I agree that something has to change, I'm just really wary, from a constitutional perspective, of mandating content, and I have a hard time envisioning a rule that wouldn't do that. But very curious to see if it can be done by smarter people than me!
Nice post, Noah. I've been thinking a lot about this as well.
My ideal solution is to have some sort of diminishing-marginal-returns rule for adding new users who post disinformation/hate/etc... As a platform or website grows (measured by petabytes of content or daily active users, etc), the percentage of "bad" (disinfo/hate/illegal/etc) content the site can have decreases. Sorta like an exponentially decaying Section 230.
This makes it so small internet communities, start-ups, etc, are still protected by Section 230, but as they grow larger (and gain more resources/revenue), their responsibility for what their platform is used for grows too.
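To make the idea concrete, here's a toy sketch of what an "exponentially decaying Section 230" threshold could look like. Every constant and function name here is invented for illustration, not drawn from any actual bill:

```python
import math  # not strictly needed; 0.5 ** x does the decay directly

# Hypothetical "decaying Section 230" threshold: the allowed share of
# flagged content shrinks exponentially as a platform grows.
# Both constants below are illustrative policy knobs, nothing more.
BASE_ALLOWED = 0.05        # 5% tolerated for the smallest sites
HALF_LIFE_DAU = 1_000_000  # tolerance halves per 1M daily active users

def allowed_bad_fraction(daily_active_users: int) -> float:
    """Max share of disinfo/hate content before liability would kick in."""
    return BASE_ALLOWED * 0.5 ** (daily_active_users / HALF_LIFE_DAU)

def loses_protection(daily_active_users: int, bad_fraction: float) -> bool:
    """True if the platform exceeds its size-adjusted tolerance."""
    return bad_fraction > allowed_bad_fraction(daily_active_users)
```

A small forum with 1% bad content stays protected, while a 10M-DAU platform with the same 1% would not, which matches the "responsibility grows with size" intuition.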
Of course, I expect whatever happens to be much less elegant - If the new bill out of North Dakota is any sign, lawmakers are not well-equipped to deal with this issue.
Yeah it's hard to implement elegant optimal solutions!
I've been trying to cast the past week's events in the framework from Gurri's Revolt of the Public and have been arriving at something much like this. Gurri identifies the problem of managing information for a large, diverse population, and then points to people's over-sized expectations of government as driving it all.
I think the root of the problem is that with the current information ecosystem, too many people regularly believe and amplify things that aren't true. There's no perception of failure or missed expectations required. Democracy requires a fairly rational and informed electorate, and with the current media environment, we're failing to meet that. Significant modifications to the structure of social media will be required to make it better. I think the suggestions here are a good start.
Yeah, Gurri was pretty influential in my coming around to this way of thinking.
Did you start by thinking, "how can we ban the tankies?" Then work backwards from there? I can tell you're terrified of the ever-growing Communist regime.
I'm no fan of tankies, and I do hope they get banned, but no, I just started by thinking of how we could get Walter Cronkite back... :-)
Three thoughts. First, thanks for the "Social media platforms as public squares" passage. This is an incredibly important point, and yours is the best explication I've read.
Second, you write "So social media platforms obviously have both an interest in reducing toxicity, both for the sake of their bottom lines and for society." This seems obviously false to me. More outrage -> more clicks/engagement -> more ad impressions. In a less toxic twitter, what rev stream would replace this? I agree that a less toxic twitter would be better for society, but I think the problem is that what's good for society is at right angles with what's good for Twitter's activist shareholders. What am I missing?
Third, I think that even if you're unconvinced of Ali's specific example (Chapo Trap House Subreddit ban) it's worth responding to the Golden Mean Fallacy critique of FD2. Lots of things we take for granted today in all sorts of domains (science, econ, morality) were fringe not so long ago. Why do you think this is not a problem? Or perhaps it is a problem, but the benefits of FD2 outweigh the costs? What are your thoughts here?
And many people we think were centrist were actually extreme! Remember, we're less than two decades removed from the Iraq War, which was the ultimate case of centrist mania and led to hundreds of thousands of deaths.
Compromise solution: For every extremist banned, one annoying self-proclaimed centrist also has to be banned.
This is a simply awful idea. This sort of doctrine would just be an excuse to amplify centrist opinions.
If this existed in the 19th century we might still have slavery today. Recall that abolitionists were considered the radicals of their day. You’d have to deplatform them whenever you deplatformed a secessionist. Give me a break.
In effect, companies like reddit already do this. When they banned the_donald, they also banned the chapo subreddit. This was a mistake—saying both extremes are wrong and the answer is in the middle is a glaring logical fallacy.
Haha man I was with you until you held up Chapo as the example. Then your case just fell apart. :D
Abolitionists getting banned would be a worry, but notice that Reddit and Twitter never even touched BLM activists.
To equate Chapo with abolitionists just made me LOL.
The Chapo subreddit was also toxic as hell and was more than capable of earning a ban all by itself, logical fallacies aside.
You clearly have no idea what you’re talking about and you’re probably speaking from a place of personal dislike for that particular community. The subreddit was quarantined for ironic posts calling for the death of 19th century slaveowners who no longer exist. After following reddit’s guidelines and responding to the criticism, reddit still did not remove them from quarantine.
They then banned them the day they banned T_D, despite the subreddit being in full compliance, in order to satisfy some kind of wrongheaded “both sides are bad” idea like this article is suggesting.
The logical fallacy in this article is that he is coming from the assumption that any sort of radical ideas are by default bad, and working from there. There’s plenty of problems these days that need radical solutions, like climate change.
He uses tankies as an example because it’s currently in vogue to hate on China, but do you really think you’ve accomplished something good if you ban, say, radical environmentalists working against climate change whenever you ban some trump supporter group?
Dunno much about reddit, but without more information the answer to that question isn't just a simple no. What is it about environmentalism that makes it so no amount of radicalization or extremism would make it justifiable to ban as opposed to trumpism?
When I was a junior product manager, I was taught to always take pride in my product, even at the cost of some short term revenue. To me, banning people who straight up lie, threaten and abuse is just taking pride in your product. There are some edge cases (e.g. the fine line between vitriol and abuse) but it’s not that hard.
Much of this makes intuitive sense, but the problem remains that the evidence that social media is the primary cause of polarization is weak. In that Iyengar et al Science paper that you link to the authors acknowledge that this question is hotly debated, and they don't really provide any evidence that social media is increasing polarization apart from a reference to a "recent intriguing field experiment." Pretty weak sauce! Meanwhile, there's a whole lot of evidence that Fox News has had a real effect on election outcomes. Should we perhaps be focusing our attention there instead?
Yes, but there's probably no legal ground for regulating them.
Well, I assume that from reading about the cases involved with the original FD. But I am not a lawyer or legal expert, so...maybe!
Banning of Trumpists doesn't bother me - they had it coming. I am far more bothered by Google, Apple, and then AWS stopping Parler. There are only two app stores of significance, and only three large providers of cloud services. I haven't ever been on Parler, but I want market alternatives to Twitter and FB. Parler has what, 30 employees? FB has 15,000 content moderators alone.
I think FB, AWS, and Twitter were well within their rights, but I still want an online space where we can communicate uncensored, even if it attracts Trumpists, tankies, or whoever else.
Yeah, this idea doesn't really deal with the AWS/Parler thing. That's separate, and like I said, was done for natsec reasons, not ideological reasons.
In theory Parler could set up their own server farm, right?
They definitely could. It would just be a very, very different beast. Not to sound too much like an AWS commercial, but you have a number of advantages with AWS.
1) Very little in terms of upfront costs
2) No need to hire people to manage the server farm
3) Simple to set up servers around the globe, with edge locations serving customers faster and with far lower environmental risk (e.g. earthquakes can take out one AWS location and you still have alternative server farms in nearby geographies)
4) No need to plan very much for loads - Load balancing and automatic scaling algorithms mean you are only paying for the servers you need, not the dozens or hundreds that would be needed to handle peak traffic.
So they could do it on their own, they just need to plan on several employees and lots of costs dedicated to that. Maybe that is okay, but I think it should at least make us uncomfortable.
I know very little about the rules surrounding ISPs and routing. In theory could giant internet service providers refuse to route to certain addresses, meaning that if you were on Comcast or AT&T they wouldn't have to direct traffic headed to that site? I don't know the rules there, but it seems quite plausible...
I don't know much either. I guess another level of banning could be if the domain registrar companies also refused to do business with Parler.
I agree with those that say if a company (like Facebook, Google) is allowed to exist as a monopoly, it should be given less leeway to ban people. Twitter isn't big enough to qualify, so they can do as they please.
The algorithms that underpin social media aren't neutral, they're designed to encourage engagement, medium-is-the-message style. They have no ideological rationale or goal motivating them, they just show you more of what you like so you'll click more. Requiring modifications to these algorithms could reduce the polarizing effect of social media without requiring the government to arbitrate what counts as fair or biased.
Imagine YouTube if the video ranking algorithms were required by law to start discouraging engagement after a user had spent a certain amount of time or clicks on the site in one session. Or where videos that demonstrated highly polarized like/dislike patterns, or were primarily liked by a highly homogeneous subgroup of users were ranked lower. Perhaps a Twitter that was penalized with fines if its user graph showed distinct enough cliques.
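To make the like/dislike idea concrete, here's a naive sketch of how a ranking penalty for polarizing videos might be computed. The scoring function and the penalty weight are my own invented stand-ins, not anything YouTube actually does:

```python
def polarization_score(likes: int, dislikes: int) -> float:
    """0.0 = unanimous reaction, 1.0 = perfectly split like/dislike pattern."""
    total = likes + dislikes
    if total == 0:
        return 0.0
    return 1.0 - abs(likes - dislikes) / total

def penalized_rank(base_rank: float, likes: int, dislikes: int,
                   penalty_weight: float = 0.5) -> float:
    """Down-rank highly polarizing videos; penalty_weight is the policy knob
    a regulator (or the platform) would have to choose."""
    return base_rank * (1.0 - penalty_weight * polarization_score(likes, dislikes))
```

A video with a 50/50 like/dislike split would have its rank halved at the default weight, while a unanimously liked video keeps its full rank.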
The laws would have to be very carefully designed, because you want to keep the good aspects of social media; you want BLM to go viral, you want science communication and fun memes and stuff. The new laws (requirements on the algorithm design itself? mathematical metrics for the user graph structure on which the network has to attain a certain score?) would probably mute some of the good stuff, but I would be surprised if they couldn't mute the bad stuff a lot more.
The social media companies will haaaate something like this, because it targets exactly what they're trying to maximize, so political will would be required. But a law that goes "social networks with a userbase greater than X must maintain graph connectivity structure blah blah blah" will probably get less pushback from the public than "the government decides what speech is fair." And you might not even have to require that everybody's Twitter works this way, but just that it works this way by default and they have to change it themselves in the settings if they don't like it.
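For what a "graph connectivity structure" requirement might actually measure, here's one toy possibility: a homophily score, i.e. the share of follow edges that stay inside the same group. The grouping and the legal threshold would be the hard part in practice; this is just an illustration:

```python
def homophily(edges, group_of):
    """Share of follow edges connecting users in the same group.
    A score near 1.0 suggests the distinct cliques a regulator might fine;
    `edges` is a list of (follower, followee) pairs and `group_of` maps
    each user to a (hypothetical) ideological-cluster label."""
    if not edges:
        return 0.0
    same = sum(1 for u, v in edges if group_of[u] == group_of[v])
    return same / len(edges)
```

A law could then say something like "platforms above X users must keep this score below Y," without the government ever ruling on the content of any individual post.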
My thoughts on what specific algorithmic changes to require are pretty half-baked, but if this approach is possible, I think it would be desirable.
"The bad guys work all day and the good guys have to fight them as a hobby." This glass is half *full*: I see a new category of service sector jobs opening up. Yes, the good guys do have money, and yes, there are plenty of wannabe journalists looking for work. I'd pay 'em at least 50 cents per tweet.
One of my taglines on Quora (before it degenerated) was "The best lack all conviction, while the worst // Are filled with passionate intensity."
If you want a dumb government solution, tax people for the time they spend on social media. They will all decide it's not worth it, problem solved.