"If I were trying to make a living by creating content for YouTube or TikTok or Instagram, I’d probably be trying to optimize for whatever I thought the algorithm would pick up. That’s not necessarily bad, but it’s not what the internet used to be."
Not necessarily bad? Sounds like slavery to me. And slavery to a machine.
Well ultimately isn't every job something that you do because someone else wants you to do it (either your employer, or your customers if you're self-employed)? By that definition, literally every job would be slavery. Even Elon Musk is a slave to the EV market.
To rephrase my response: you can find something to do that you enjoy doing and your customers enjoy consuming. This will probably not maximize your earnings. But J.R.R. Tolkien did not write The Lord of the Rings because he thought there would be a market out there for it, yet it made him a rich man. Most great artists and authors operate this way— they create what they want to create, and their amazing productions find a market afterwards.
Or you can choose a job that may not be the most lucrative job but it's something you would choose to do anyway. People, like my sister, who choose to found their own nonprofits have chosen this path.
Never saw a reason to be on social media- If I see something interesting I text or email or call my contacts- usually a few times a month, never more than a couple times a week.
I am very happy that some of the geezer bloggers I liked in the 90s and 00s are back at it on substack (I never did Twitter and I don’t do podcasts- too many wasted words, even at 2x speed), along with some new additions in the same spirit- Noah amongst them. Thank you for your efforts and for this historical overview and future predictions.
You almost cheapened it with the Russian nonsense, of course, but I understand and can overlook ritual ablutions.
Enshittification is catchy, but it's a fancy way of saying the English-speaking Internet is largely done growing. 95% of US users are on the Internet; 90% have a smartphone (Pew). Existing platforms have to monetize AND compete with each other for users, with the resultant big platform vibe being less "coffee shop" and more "Times Square." The Internet isn't the shiny new thing anymore, it's just part of life, now.
As for slop, if it's the new spam, I remember similar articles talking about how spam was killing the Internet in the late 1990's (remember the Nigerian princes?) and eventually the problem got bad enough that a mix of government regulations and better tech made it much less of a felt problem. I see the same thing happening here. It's difficult to find studies that attempt to measure the effectiveness of mis/disinformation (vs. just "look at all the attempts!") but some (RAND) have found that targeted populations wise up to it and become much more skeptical of sources after an adjustment period.
The 90s Internet vibe still exists - it went from USENET to Geocities to MySpace to Blogger/LiveJournal to Reddit. Most people don't want the 90s Internet vibe, though, they just want to order some air filters, swap pictures of their kids/pets with their friends, watch a police procedural or a sitcom, and move on with their life. Nothing wrong with that.
Seems like there's a lot wrong with regular folks using these Internet platforms if they are full of lies and ads: it keeps incumbent actors like FB, Google, TikTok, etc. in business, justifying this enshittification.
It is partly about the fact that the English-language world is already all online, but that doesn't change the fact that it appears to be basically a rule that these products inevitably end up becoming worse so they can squeeze more profit out of a stagnant userbase. I don't know how you solve something like that, but that doesn't make it any less annoying.
Exactly. For all the complaints I see, I still use Google to get my search results. The ads can be mildly annoying sometimes, but it's never nearly as bad as most people make it out to be. E-commerce results come out first because most people searching want to buy things online. It might be annoying if you don't, but I'm sure a lot of people appreciate it.
Not sure what's wrong with Quora. The ads are there, obviously, but they're actually less intrusive than most. The search feature in Quora is bad, but it always has been; that's more of a Quora thing. I use Quora a lot and get a lot of insightful stuff from it. I read stuff by American, Chinese, and Russian ultranationalists. Their perspectives can be scary, but they're real people, and it's nice to see things from the other side. I also see regular people, and seeing different perspectives again is nice.
My YouTube is algorithm-generated, but I prefer it like that. There's a place to check videos from only the people I'm subscribed to, but I prefer the algo. Instagram works well for me too. The community is disgusting, but that's because of the people more than the platform.
Maybe it's because I didn't experience the old internet so I don't have all the nostalgia for it.
Well said. One thing I'd add is that I don't think it's *just* the design changes to the internet. I think people have actually changed in how *we* use the internet, and not necessarily for the better.
Like you said, the internet used to be an escape. You could come here to have some fun, blow off steam, whatever. Admittedly that led to toxicity and trolls, but it also provided a liberating freedom of speech.
Now, everyone takes it so damn seriously. LinkedIn and Facebook are a shiny wall of "perfection," everyone bragging about their bland perfect life like a professional cover letter. Reddit answers are fine-tuned for conformist positivity, because that's what gets upvoted. YouTube "content" is all finely tuned for "the algorithm" of whatever is trendy these days (a short reaction video to trendy dances, apparently). Even online shopping is kind of homogenized- a stock model in bright light holding a shiny plastic thing, with hundreds of 5-star rave reviews about how it changed their lives.
The scary thing isn't AI. It's how humans have been trained to *behave* like AI.
I remember the early days of online shopping. I was totally offline, so I saw it as a bemused observer. The smugness of the early pioneers: "I got this for half the price it's on sale for in the shops." The cons and scams as the online shopping "rules" got codified and security measures were put in place. The people who boasted about their great deals and bargains (as above), then the same people vociferously complaining about all their local high street shops closing and even the out-of-town retail centre looking a bit down at heel. I think after COVID everyone got fed up with shopping online and they all wanted to GO OUT, see and feel the stuff they wanted to buy, feel other people around them, and catch that life vibe. I notice that stuff online is now about the same price as offline, returns are no longer free, and of course the weak point is delivery. That text saying "sorry you weren't home, so you can collect your parcel from our depot".
That really isn’t my experience at all. In my experience, since the pandemic online shopping has gotten even better and easier. Though it depends a lot on where you are physically located. When I lived in Texas most things arrived in two days, sometimes next day. But in California, I can get nearly anything next day, and a good amount of stuff same day.
Rdrama and Kiwi Farms are the last outposts of the old flippancy. I love seeing how they take the piss out of the ultrawoke Redditors. Keeps my blood from boiling.
Another way to experience social media is to bypass feeds altogether. One way is to bookmark the accounts of the people you like, and visit their accounts sequentially, posting or replying if you have something to contribute. You can become aware of other interesting accounts through replies and reposts. Since you aren’t following anyone, content is not pushed at you.
While you don’t interact with others in real time, this usually isn’t necessary. And by controlling the accounts you visit, you can assess their strengths and weaknesses over time – good protection from bots and trolls.
I was reading about Bluesky's user moderation options. I was offered codes during its invite-only phase but didn't take them. I also want to minimize social media interactions, so I don't know if Bluesky followed through with this.
On Techdirt, it did say that Bluesky offered robust tools for every user to be their own moderator. This in theory is a good idea. If you're not a bro, you don't really have an affection for "absolute free speech". You might be a woman, or a person of color, or not straight, and the internet can be a vicious place. Yet you still want to carry on conversations. So you can tune the account to block accounts and terms you don't wish to engage with, but you don't derail those conversations.
It also frees a social media platform's trust and safety team from having to be the arbiters of what can be said, letting them focus on material that actually does harm users and the business (e.g., child sexual abuse material, facilitating illegal behavior, etc.).
Hi Noah, I moved from Google to Perplexity.ai a couple of months ago. I ask it a question and get an AI-generated summary based on five relevant websites, with the links. I can ask follow-up questions and keep the series of results. Not sure what the long-term business model is for the firm but, for now, as a user it's great. It does make mistakes - said I was born in the UK (not true) - but it's MUCH superior to Google for searching. Google really did kill its own business model.
I'm adding this to my Alternatives to Google article. I'm going to caveat it, though, because it seems to offer potentially dangerous answers to the question "how to respond to a vinyl chloride fire." It does at least footnote where it got the answer from, so you can read the actual source doc.
Perplexity does make mistakes and it is free so caveat emptor. But, I find it’s good at definitions, providing lists of benefits and costs, and whether two concepts are the same or different. Try “Are facts the same as circumstances?”, for example.
I've started using Perplexity for the "find information on the Internet" task that ChatGPT can't do yet and Google used to be good for but isn't any more. I'm not paying for it (yet?) and haven't been at it long enough to say it's great, but the early returns have been positive.
I'm having a lot of success using Microsoft Copilot which is available on their Edge browser. The button is in the upper right-hand corner. It isn't perfect, but it generally can handle more complex and specific questions than Google. It has three settings, from most creative to most precise, and I always use the most precise setting.
I've been using the paid version of Perplexity for about 3 months. You get a choice of four different LLMs (including one developed by the Perplexity team), which all have their strengths and weaknesses. I find it much more useful than Google or DDG, but you still have to pay attention to the results -- even though they are thoroughly cited, which is good. I have found a couple of blatant errors, but follow-up questions work well to sort those out. One slightly frustrating thing is that it will return general references but say it can't provide the specifics. The solution is to specifically tell it to use the references it just provided to present the specific details desired, and then it will. All a learning curve in using a new tool, but so far so good. And no ads.
Yeah this was where I was confused about Noah saying “[Google doesn’t work] so when I can I now use ChatGPT (which is not yet enshittified).” I don’t trust chatGPT at all right now given the kind of slop it serves me as a high school teacher every time I ask for a writing assignment!
It's interesting that Perplexity is deliberately less creative, and cites its sources alongside its synthesis/analysis. It definitely won't write you a bad version of your work, but I feel like it could cut down the time it takes to research what you're working on.
Truth becoming a luxury good seems to be part of the trends you outline. If you can afford subscription to ad-free, lie-free content, you get quality information. If not you live in the world of bots, slop, lies and conspiracies.
One could speculate on the consequences this would have in a democracy.
The internet is turning out worse for humanity. The 90s and 2000s were periods worth celebrating: the internet was a way to express ourselves, talk to each other, write blogs with in-depth knowledge, and so on. It was easy to log onto the internet and get inspired by random content - real human content. It was designed to benefit users. But the dramatic change came when corporations - the big four (Apple, FB, Microsoft, …) - yearned for profit optimization; unhealthy competition by whatever means became the major strategy. Then algorithms emerged and altered the whole internet for the worse. Content after content was curated, which means there is no escape from the internet. For creators to make money, they had to be brands, not their authentic selves. They had to do anything, however awkward, just to earn attention.
As someone who grew up with a young internet filled with imagination and genuine human connections, this future of algorithms and AI slop leaves me yearning for either a return to an old-school internet, or simply a complete disconnect from the online world minus more direct social platforms like Discord or other instant messaging services. If the 90s and 2000s showed us the benefits of an online world for your average citizen looking for social and creative outlets, then the 2010s and 2020s are showing us just how that same digital world can be abused by greedy corporations and corrupt states with global agendas.
I’ve been thinking/worried about the escalating cycle of slop and AI for quite a while. “Garbage in = garbage out,” as the idiom goes; continuing to train AI on ever more misinformation and AI-generated bullshit seems like a recipe for a very expensive and energy-intensive cesspool with negative end value to both business and society.
It's worse than you think. LLMs are guaranteed to hallucinate answers. If those hallucinated answers get fed back in, then the hallucination potentially becomes what the internet considers to be true.
These LLM models aren't that good, and each generation seems to require exponentially more resources for training, while still remaining basic generators of boring text. And that's when they're not making up crap out of whole cloth.
There are probably insufficient resources on earth for more than another doubling or two, at which point it'll be obvious that there's no reasonable business based on selling access to partly hallucinated generic slop that consumes the resources of a medium sized nation.
I think LLMs have useful, *niche* applications where they are trained on limited, specific data sets and only used for that niche application… like analyzing mammograms and MRIs. Recognizing patterns in pixels to improve healthcare outcomes is a great application and probably worthy of the resources.
Having an army of LLMs churning out the dumbest answer to any simple query and fighting to make your dumb AI answer the top of the search result, in order to generate potential ad revenue does not seem nearly as noble a use.
I am reminded of one of Brian Klaas’ recent posts about “solving puzzles vs. solving mysteries.” AI is great at solving puzzles with limited possible inputs and variables and fixed determinable outcomes.
Applying AI to broad environments with limitless inputs (“mysteries”) and insufficient guidance is just inviting it to learn all the wrong things and cause potential harms in the process.
I absolutely agree. I would imagine that there are incredible medical applications for doing things like providing a second opinion on reading a radiology scan, for example.
But I wouldn't call those things LLMs (large language models), I think of them simply as neural network or machine learning applications: they have limited domains, and use *much* smaller amounts of data.
I was dissing the more general LLMs like ChatGPT 4 and Gemini, which are queried in natural language to write arbitrary random text, like "Write a Seinfeld Finale in the style of the last scene in Pulp Fiction, with Kramer as Samuel L Jackson's character."
My takeaway from Noah's essay is that some Internet companies can only increase revenue by worsening the user experience. It's hard to believe that this will end well for them.
The users aren’t the customers
Indeed. The users are the commodities.
Perhaps they will eventually unvolunteer for that role.
That is the normal course. I don't use Google search anymore. I use DDG, at least for now, but it's not much better. I do like Google Maps, a lot, when traveling. We are in Australia this month, and Google Maps has been a great resource. But an ad blocker is essential.
Right now, my best search engine is formulating a question in Copilot on Edge. The button is in the upper right corner; use the most precise setting.
Thanks. I'll give it a go
I blame the VCs for making every company want to get big. If companies optimized for profit *per employee* in some kind of co-op model, then you could run small sustainable internet businesses with a good user experience. But investors want to maximize total profit instead, so they try to squeeze every last dollar out.
You could build a decent version of Quora with 10 engineers, throw some basic ads on it, watch the traffic flow in, and call it a day. But instead, Quora has 300 engineers working on AI and ads and whatever else they've cooked up. For some positive examples of a good product with a small team, see
https://www.theverge.com/2023/3/20/23648650/marco-arment-overcast-solo-acts
If big corporations are not today's masters, I don't know who is. Big corporations promote slavery in the 21st century, but this time it is happening in virtual space - slavery in the sense that tools like algorithms lock you into the internet 24/7. They take advantage of our human flaws to spike our dopamine and profit from it. This is death capitalism; we trade our attention and our humanity to profit the few. Depression, mental health problems, anxiety, etc. have been serious consequences. Given this, the subjects are unlikely to escape from the internet, because they're constantly awash in algorithms.
I encountered this paragraph today on Addison Del Mastro's Substack.
"The downside of social media is more subtle and corrosive. It’s the mental distress that comes from being made aware of the sheer lunacy with which you share your society. It does something to you—your ability to trust people, your ability to judge words and arguments at face value instead of straining to hear the dog whistle, your general outlook—knowing that people with these views exist. This knowledge never goes away, or breaks down. It accumulates, a mental persistent organic pollutant, and the symptoms of its poisoning are jadedness, guardedness, and distrust."
https://thedeletedscenes.substack.com/p/trolling-alone
So very true. But social media is just a kind of hypertrophy of what mass media had already done.
... to delude you into thinking you know more than you really do ... the longtime illusion that you could really know what is going on all over the world just by switching on the news. And when consumers of mainstream (and social) media are done being ‘kept informed’ by its news, they may well escape to its ‘progressive’ tales. More insidious than The News, these apparently apolitical Narratives leach into your consciousness without you even noticing. https://grahamcunningham.substack.com/p/non-binary-sibling-is-entertaining
One of my hotter takes regarding the internet is that CDA 230 should be amended: the blanket exemption that companies get for hosting user content should count algorithmic "push" as editorializing, and the companies should be responsible for that content.
In the old days, most user content hosting was passive, like personal websites or bulletin boards that mostly sorted by newest post. If a company is choosing content to promote, they should be fully responsible for it. This won't solve everything, but I think it would cut down on a lot of the attention seeking aspects.
Russia tries to influence the US and European elections. We try to influence their (and everyone else's) elections. Anyone applauding the latter has no right to get worked up over the former.
And the stupidity of public social media is why we're all here, in a private space that we collectively pay for.
There are many ways to try to influence elections. I don’t particularly mind influence campaigns that are based on trying to get real information out to voters who aren’t necessarily aware of it, that might be directly relevant to which outcome of the election they’d prefer. I’m more worried about influence campaigns dedicated to producing strife, or to influencing votes through incorrect information. I don’t know the details of specific election influence campaigns, but this is what determines how I feel about them, whether they are done by campaigns, by others inside the country, or by others outside the country.
Samantha Power was in Hungary a year ago throwing out grants to "opposition media" -- literally funding antigovernment propaganda outlets in a democratic, EU country and NATO ally. (Bear in mind that in Hungary, about 70% of the media is opposed to the Fidesz party that is in power, so it's not like there's some deficit of "objective" media coverage.) Why are we doing this? Not to "get real information out". We're doing it to get more "Orban hates gay people" and "Orban is a fascist" media stories for the benefit of the US and EU LGBT lobby. That's it. How is that not "a campaign to produce strife"?
I only mention Hungary because it's the one I'm most familiar with, since I have friends there. But we funded very much "strife-producing" protests/revolutions in Georgia, Ukraine, Belarus, Kyrgyzstan, Tunisia, Egypt, and Libya (and those are only the ones we know of). I'm pretty sure most Libyans today would have preferred we had stayed home and left Gaddafi in charge. Syrians too. The Belarusian govt failed to roll over, and the Russians stomped on Georgia and are currently stomping on Ukraine. Lots of strife to go around, and the origins trace back to our CIA and NGO-ocracy and people like Samantha Power.
They’re not also funding extreme pro-Orban messaging are they? When I say “campaign to promote strife”, that’s the kind of thing I mean, where both sides are being whipped up in opposite extremes, not because you believe both sides are right, but because you want the sides to fight each other more. In all of the examples you mention, there appears to be a side the interveners believe in, and they are promoting the evidence that supports that side, not because strife is the goal, but because they believe that side is right.
Hmm.. OK. I can see your point. So us funding revolutionary groups in Nicaragua (in the '80s) or Belarus (in the 2010s) with the goal of destabilizing those countries is different because we were ideologically and politically aligned with those groups. I imagine the USSR felt the same way about funding communist insurgencies in the Middle East and Asia -- "our motives are ideologically pure, so our meddling is OK. We're not sowing strife; we're freeing people."
I think it's a distinction without a difference. When it was in our interest, we kept Sadat and Mubarak in power in Egypt. When we thought it was in our interest, we pushed Mubarak out of power by demanding free elections. When we realized we were wrong, we arranged a mass protest and coup to overthrow Morsi (even though he was the first legitimately elected president in modern Egypt). We do what's in our interests. Russia has viewed itself as effectively at war with the Western-aligned international order for at least a decade. So it is in their interests to foment protests in Western countries to undermine that order. In this case, their interests happen to be aligned with the interests of a large group of blue-collar Americans who have seen their standard of living fail to keep pace under that same liberal-globalist order, which gives Russia's propaganda a ready audience. Kind of like how people in Georgia were chafing under a corrupt and unjust ruling order that they resented, thus giving our CIA's propaganda a ready audience there.
I do see the difference. I just don't think that's what's going on in either case.
I think there's both a difference between funding revolutionary groups (which is a thing the United States has done that I don't support) and supporting an ideologically-aligned side in a democratic election (which is a thing I don't particularly mind, though it's hard to do without causing backlash among the democratic electorate one is trying to support), as well as a difference between supporting an ideologically-aligned side in a democratic election, and supporting *both* sides of ideological disputes in an attempt to cause strife. The latter is what I take Russia to have done significantly in recent years, amplifying both Black Lives Matter and Blue Lives Matter messaging, and both pro-Brexit and anti-Brexit messaging, and so on. It's not just supporting causes that they are ideologically aligned with, but nihilistically supporting diametrically-opposed causes to increase tensions.
"If I were trying to make a living by creating content for YouTube or TikTok or Instagram, I’d probably be trying to optimize for whatever I thought the algorithm would pick up. That’s not necessarily bad, but it’s not what the internet used to be."
Not necessarily bad? Sounds like slavery to me. And slavery to a machine.
Described at this level, it’s no more slavery than being an author trying to figure out what readers want.
Well, to me that's slavery, too. I could not write that way.
Well ultimately isn't every job something that you do because someone else wants you to do it (either your employer, or your customers if you're self-employed)? By that definition, literally every job would be slavery. Even Elon Musk is a slave to the EV market.
To rephrase my response: you can find something to do that you enjoy doing and your customers enjoy consuming. This will probably not maximize your earnings. However, JRR Tolkien did not write The Lord of the Rings because he thought there was a market out there for it, yet it made him a rich man. Most great artists and authors operate this way: they create what they want to create, and their amazing productions find a market afterwards.
Yeah that is fair. On reflection I overreacted because you used the emotive word "slavery", even though obviously you didn't mean it literally.
And so you should have. I'm seeing the word slavery doing an awful lot of very heavy lifting here and in other comments.
Or you can choose a job that may not be the most lucrative job but it's something you would choose to do anyway. People, like my sister, who choose to found their own nonprofits have chosen this path.
Very good read. Thanks for sharing.
Never saw a reason to be on social media- If I see something interesting I text or email or call my contacts- usually a few times a month, never more than a couple times a week.
I am very happy that some of the geezer bloggers I liked in the 90s and 00s are back at it on substack (I never did Twitter and I don’t do podcasts- too many wasted words, even at 2x speed), along with some new additions in the same spirit- Noah amongst them. Thank you for your efforts and for this historical overview and future predictions.
You almost cheapened it with the Russian nonsense, of course, but I understand and can overlook ritual ablutions.
I wonder how much email spam contributed to the rise of social media platforms, by making people reluctant to make their email addresses public?
Enshittification is catchy, but it's a fancy way of saying the English-speaking Internet is largely done growing. 95% of US users are on the Internet; 90% have a smartphone (Pew). Existing platforms have to monetize AND compete with each other for users, with the resultant big platform vibe being less "coffee shop" and more "Times Square." The Internet isn't the shiny new thing anymore, it's just part of life, now.
As for slop, if it's the new spam, I remember similar articles talking about how spam was killing the Internet in the late 1990's (remember the Nigerian princes?) and eventually the problem got bad enough that a mix of government regulations and better tech made it much less of a felt problem. I see the same thing happening here. It's difficult to find studies that attempt to measure the effectiveness of mis/disinformation (vs. just "look at all the attempts!") but some (RAND) have found that targeted populations wise up to it and become much more skeptical of sources after an adjustment period.
The 90s Internet vibe still exists - it went from USENET to Geocities to MySpace to Blogger/LiveJournal to Reddit. Most people don't want the 90s Internet vibe, though, they just want to order some air filters, swap pictures of their kids/pets with their friends, watch a police procedural or a sitcom, and move on with their life. Nothing wrong with that.
Seems like there's a lot wrong with regular folks using these internet platforms if they're full of lies and ads; it keeps incumbent actors like FB, Google, and TikTok in business, justifying this enshittification.
Reddit has fallen due to malicious moderation, and nothing has filled the void.
It is partly about the English-language world already being entirely online, but that doesn't change the fact that it appears to basically be a rule that these products inevitably end up becoming worse so they can squeeze more profits out of a stagnant userbase. I don't know how you solve something like that, but it doesn't make it any less annoying.
Exactly. For all the complaints I see, I still use Google to get my search results. The ads can be mildly annoying sometimes, but it's never nearly as bad as most people make it out to be. E-commerce results come out first because most people searching want to buy it online. It might be annoying if you don't, but I'm sure a lot of people appreciate it.
Not sure what's wrong with Quora. The ads are there, obviously, but they're actually less intrusive than most. The search feature in Quora is bad, but it always has been; that's more of a just-Quora thing. I use Quora a lot and get a lot of insightful stuff from it. I read stuff by American, Chinese, and Russian ultranationalists. Their perspectives can be scary, but they're real people, and it's nice to see it from the other side. I also see regular people, and seeing different perspectives again is nice.
My YouTube is algorithm-generated, but I prefer it like that. There's a place to check videos from only people I'm subscribed to, but I prefer the algo. Instagram works well for me too. The community is disgusting, but that's because of the people more than the platform.
Maybe it's because I didn't experience the old internet so I don't have all the nostalgia for it.
Well said. One thing I'd add is that I don't think it's *just* the design changes to the internet. I think people have actually changed in how *we* use the internet, and not necessarily for the better.
Like you said, the internet used to be an escape. You could come here to have some fun, blow off steam, whatever. Admittedly that led to toxicity and trolls, but it also provided a liberating freedom of speech.
Now, everyone takes it so damn seriously. LinkedIn and Facebook are a shiny wall of "perfection," everyone bragging about their bland perfect life like a professional cover letter. Reddit answers are fine-tuned for conformist positivity, because that's what gets upvoted. YouTube "content" is all finely tuned for "the algorithm" of whatever is trendy these days (a short reaction video to trendy dances, apparently). Even online shopping is kind of homogenized: a stock model in bright light holding a shiny plastic thing, with hundreds of 5-star rave reviews about how it changed their lives.
The scary thing isn't AI. It's how humans have been trained to *behave* like AI.
I remember the early days of online shopping. I was totally offline, so I saw it as a bemused observer. The smugness of the early pioneers at "I got this for half the price it's on sale for in the shops". The cons and scams as the online shopping "rules" got codified and security measures put in place. The people who boasted about their great deals and bargains (as above) were the same people vociferously complaining about all their local high street shops closing and even the out-of-town retail centre looking a bit down at heel. I think after COVID everyone got fed up with shopping online and they all wanted to GO OUT, see and feel the stuff they wanted to buy, feel other people around them, and catch that life vibe. I notice that stuff online is now about the same price as offline, returns are no longer free, and of course the weak point is delivery. That text saying "sorry you weren't home, so you can collect your parcel from our depot".
That really isn’t my experience at all. In my experience, since the pandemic online shopping has gotten even better and easier. Though it depends a lot on where you are physically located. When I lived in Texas most things arrived in two days, sometimes next day. But in California, I can get nearly anything next day, and a good amount of stuff same day.
Rdrama and Kiwi Farms are the last outposts of the old flippancy. I love seeing how they take the piss out of the ultrawoke Redditors. Keeps my blood from boiling.
Regarding: Algorithmic feeds: back to push media
Another way to experience social media is to bypass feeds altogether. One way is to bookmark the accounts of the people you like, and visit their accounts sequentially, posting or replying if you have something to contribute. You can become aware of other interesting accounts through replies and reposts. Since you aren’t following anyone, content is not pushed at you.
While you don’t interact with others in real time, this usually isn’t necessary. And by controlling the accounts you visit, you can assess their strengths and weaknesses over time – good protection from bots and trolls.
I was reading about Bluesky's user moderation options. I've been offered codes during its invite-only phase but didn't take them. I just also want to minimize social media interactions, so I don't know if Bluesky followed through with this.
On Techdirt, it did say that Bluesky offered robust tools for every user to be their own moderator. This in theory is a good idea. If you're not a bro, you don't really have an affection for "absolute free speech". You might be a woman, or a person of color, or not straight, and the internet can be a vicious place. Yet you still want to carry on conversations. So you can tune the account to block accounts and terms you don't wish to engage with, but you don't derail those conversations.
It also frees a platform's trust and safety team from having to be the arbiters of what can be said, letting them focus on material that does harm users and the business (e.g., child sexual abuse material, facilitating illegal behavior, etc.).
Hi Noah, I moved from Google to Perplexity.ai a couple of months ago. I ask it a question and get an AI-generated summary based on five relevant websites, with the links. I can ask follow-up questions and keep the series of results. Not sure what the long-term business model is for the firm but, for now, as a user it's great. It does make mistakes - said I was born in the UK (not true) - but it's MUCH superior to Google for searching. Google really did kill its own business model.
I'm adding this to my Alternatives to Google article. I'm going to caveat it, though, because it seems to offer a potentially dangerous answer to the question "how to respond to a vinyl chloride fire". It does at least footnote where it got the answer from, so you can read the actual source doc.
Query response: https://www.perplexity.ai/search/how-to-respond-f5vtD8alRNi_4tMlyxY0xA
For context see https://www.funraniumlabs.com/2024/04/phil-vs-llms/
Perplexity does make mistakes and it is free so caveat emptor. But, I find it’s good at definitions, providing lists of benefits and costs, and whether two concepts are the same or different. Try “Are facts the same as circumstances?”, for example.
I like what I've seen of it so far. I'm just noting that it isn't perfect
Totally agree. One way I have found to check for errors is to ask a “double check” question to clarify the answer.
I've started using Perplexity for the "find information on the Internet" task that ChatGPT can't do yet and Google used to be good for but isn't any more. I'm not paying for it (yet?) and haven't been at it long enough to say it's great, but the early returns have been positive.
I'm having a lot of success using Microsoft Copilot which is available on their Edge browser. The button is in the upper right-hand corner. It isn't perfect, but it generally can handle more complex and specific questions than Google. It has three settings, from most creative to most precise, and I always use the most precise setting.
I've been using the paid version of Perplexity for about 3 months. You get a choice of four different LLMs (including one developed by the Perplexity team) which all have their strengths and weaknesses. I find it much more useful than Google or DDG but still have to pay attention to the results -- even though they are thoroughly cited, which is good. I have found a couple of blatant errors, but follow-up questions work well to sort those out. One slightly frustrating thing is that it will return a generality of references but say it can't provide the specifics. The solution is to specifically tell it to use the references it just provided to present the specific details desired, and then it will. All a learning curve in using a new tool, but so far so good. And no ads.
Yeah this was where I was confused about Noah saying “[Google doesn’t work] so when I can I now use ChatGPT (which is not yet enshittified).” I don’t trust chatGPT at all right now given the kind of slop it serves me as a high school teacher every time I ask for a writing assignment!
It's interesting that Perplexity is deliberately less creative, and cites its sources alongside its synthesis/analysis. It definitely won't write you a bad version of your work, but I feel like it could cut down the time it takes to research what you're working on.
Truth becoming a luxury good seems to be part of the trends you outline. If you can afford a subscription to ad-free, lie-free content, you get quality information. If not, you live in the world of bots, slop, lies and conspiracies.
One could speculate on the consequences this would have in a democracy.
Thanks for this thoughtful post.
The internet is turning out worse for humanity. The 90s and 2000s were periods worth celebrating: the internet was a way to express ourselves, talk to each other, write blogs with in-depth knowledge, etc. It was easy to log onto the internet and get inspired by random content - real human content. It was designed to better users. But the dramatic change came when big corporations (Apple, fb, Microsoft, ...) yearned for profit optimisation; unhealthy competition by whatever means became the major strategy. Then algorithms emerged and altered the whole internet for the worse. Content after content was curated; that means there is no escape from the internet. For creators to make money, they had to be brands, not their authentic selves. They had to do anything, however awkward, just to earn attention.
This is easily one of your best articles yet, Noah!
As someone who grew up with a young internet filled with imagination and genuine human connections, this future of algorithms and AI slop leaves me yearning for either a return to an old-school internet, or simply a complete disconnect from the online world, minus more direct social platforms like Discord or other instant messaging services. If the 90s and 2000s showed us the benefits of an online world for your average citizen looking for social and creative outlets, then the 2010s and 2020s are showing us just how that same digital world can be abused by greedy corporations and corrupt states with global agendas.
I’ve been thinking/worried about the escalating cycle of slop and AI for quite a while. “Garbage in = garbage out”, as the idiom goes; continuing to train AI on ever more misinformation and AI-generated bullshit seems like a recipe for a very expensive and energy-intensive cesspool with negative end value to both business and society.
It's worse than you think. LLMs are guaranteed to hallucinate answers. If those hallucinated answers get fed back in, then potentially the hallucination becomes what the internet considers to be true.
https://ombreolivier.substack.com/p/llm-considered-harmful
These LLM models aren't that good, and each generation seems to require exponentially more resources for training, while still remaining basic generators of boring text. And that's when they're not making up crap out of whole cloth.
There are probably insufficient resources on earth for more than another doubling or two, at which point it'll be obvious that there's no reasonable business based on selling access to partly hallucinated generic slop that consumes the resources of a medium sized nation.
I think LLMs have useful, *niche* applications where they are trained on limited, specific data sets and only used for that niche application… like analyzing mammograms and MRIs. Recognizing patterns in pixels to improve healthcare outcomes is a great application and probably worthy of the resources.
Having an army of LLMs churning out the dumbest answer to any simple query and fighting to make your dumb AI answer the top of the search result, in order to generate potential ad revenue does not seem nearly as noble a use.
I am reminded of one of Brian Klaas’ recent posts about “solving puzzles vs. solving mysteries.” AI is great at solving puzzles with limited possible inputs and variables and fixed determinable outcomes.
Applying AI to broad environments with limitless inputs (“mysteries”) and insufficient guidance is just inviting it to learn all the wrong things and cause potential harms in the process.
I absolutely agree. I would imagine that there are incredible medical applications for doing things like providing a second opinion on reading a radiology scan, for example.
But I wouldn't call those things LLMs (large language models), I think of them simply as neural network or machine learning applications: they have limited domains, and use *much* smaller amounts of data.
I was dissing the more general LLMs like ChatGPT 4 and Gemini, which are queried in natural language to write arbitrary random text, like "Write a Seinfeld Finale in the style of the last scene in Pulp Fiction, with Kramer as Samuel L Jackson's character."