301 Comments
Chris M.:

As you ought to know, the people warning about AI posing a risk of human extinction are not "a few online rationalists" but rather a lot of computer scientists and engineers, because the arguments are solid and the counterarguments unconvincing. https://www.safe.ai/statement-on-ai-risk

I agree with you on most kinds of technological progress, but AI is different.

Kei:

Agreed. The people worried about x-risk from AI include two of the three "godfathers" of deep learning, the heads of the three most cutting-edge AI labs, the two authors of the premier artificial intelligence textbook, and many prominent ML researchers and practitioners. Perhaps a year or two ago you could dismiss those worried as a few weirdos, but that's not even close to an accurate description now.

The reality is that blindly following any slogan, such as "Don't be a decel", is worse than just doing a cost-benefit analysis. For most technology, cost-benefit analyses should favor acceleration. For a technology like AGI, to which 48% of surveyed ML practitioners attach a 10% or greater chance of causing something on the order of human extinction, a cost-benefit analysis favors more caution.

David Burse:

Any SciFi book fan will be frightened of AI. Personally, I'd like to read SciFi books written by AI to see if they give away when the humans will be disposed of.

Daniel:

Did the survey ask the ML practitioners the likelihood they assign to AGI emerging in the near future?

Kei:

The survey is here: https://wiki.aiimpacts.org/doku.php?id=ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:2022_expert_survey_on_progress_in_ai

If I'm reading it correctly, the median estimate is 2059.

The survey was taken in mid-2022, though, and I'd expect the number to be materially lower if it were done again today.

Daniel:

Thanks! It does confirm my priors lol, but yeah, I agree it's probably lower now.

Michał Zdunek:

I don't know about ML practitioners in general, but you can see Yoshua Bengio's estimate here:

> My current estimate places a 95% confidence interval for the time horizon of superhuman intelligence at 5 to 20 years.

https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-risks/

Daniel:

Thanks! It was an interesting read and led me to a lot of other interesting reads.

His estimate is much more optimistic (or pessimistic I suppose) than my own, but he is a leading figure in the field... and sounds quite reasonable.

I find his more specific suggestions less convincing than the concerns he raises. Nonetheless, I am to some degree persuaded.

Ethics Gradient:

Strong agree. When it comes to whether or not having an independently agentic entity that is *much smarter and more capable than you are* around is a good idea, look around and ask yourself how well that worked out for the Neanderthals.

Except instead of better spears and slightly bigger brains and thinking more or less like you, imagine something whose inner workings we don't understand, orders of magnitude smarter (and potentially self-improving), that can build viruses from scratch, and to whom the most marginal of inconveniences posed by human interference with its goals would be grounds for extermination (either deliberate or incidental).

AI is different.

Honestly, between the grossly inappropriate use of the term "decel" here and the adoption of e/acc's propagandistically willful refusal to consider that perhaps increasing the capabilities of a thing with independent reasoning and giving it agentic goals might result in killing literally everyone -- an event that *already happened* to previous hominid species, and a risk that (as Kei observes) is taken extremely seriously by serious people, including those most heavily involved in AI -- this article has me seriously reconsidering my subscription. It is, honestly, offensively bad.

Kenny Easwaran:

Note that this description precisely applies to corporations, governments, and other social movements as well. We don’t understand their inner workings, they are orders of magnitude smarter (and potentially self-improving), and while they couldn’t build viruses from scratch until recently, they could and did build all sorts of things that crushed and destroyed humans that got in their way. AI is another instance of this inhuman agent, but it’s not totally novel (and I think the history bears out the fact that we should worry!)

Ethics Gradient:

I would say that I agree on corporations and governments being far more *capable* than individual humans, but I think the intelligence disparity doesn't really have an analogue (also governmental and corporate affairs, including the actions of individual agents within them, are much more legible than AIs are.)

That said, I have in fact invoked Hitler as a genuinely useful analogy here: the Nazis were fundamentally far more restrained in the scope of their genocide by capabilities limitations than by anything like values alignment. We've seen what capabilities without alignment look like on a human scale, and it's catastrophic -- on an inhuman/AI scale, there's nothing that plays the role of the Allies.

Kenny Easwaran:

I’m not convinced that there’s as much disanalogy on the intelligence front as you think. Corporations and bureaucracies don’t exhibit anything like human intelligence, but I’m not convinced that AIs are going to be much closer - I’m much more inclined to identify “intelligence” with “capability to respond effectively to the environment based on some set of goals” than I am to identify it with anything that looks like human reasoning.

Andrew Keenan Richardson:

As an ML engineer, I see many AI companies that are currently working on automating human reasoning. At a certain point, the ability to take actions in pursuit of a goal is going to look a lot like human intelligence. Or even to supersede human intelligence, in the same way that Google search supersedes human knowledge.

Auros:

I'm assuming that you're somewhat referring here to Charlie Stross' idea of the corporation as "slow AI"?

http://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html

The thing about that is that they really are _very_ slow, and their processes are _reasonably_ legible, especially to the humans serving as components. A defense contractor company might maximize its return on investment by finding ways to actually stir up wars, but it would have trouble staying in business if it did, because its own employees would leak to the authorities about what it was up to.

I think there is legitimate reason to be concerned that if a corporation sought to automate its processes to a large degree, as a way to gain a competitive edge, a profit-maximizing agent untethered from accountability could do serious, and potentially existential, damage.

Daniel Kokotajlo:

Whoa there, there are huge differences in degree. Compared to giant transformers, we understand the inner workings of corporations and governments much better. Compared to the sort of AGIs we'll see by the end of the decade, corporations and governments are super dumb -- indeed arguably corporations and governments are worse than individual humans in various important respects. They might be able to do lots of parallel computation/labor, but their serial speed is about the same. Whereas AGIs will have 10x-1000x serial speedup.

Oh, and also, corporations and governments *are built out of people* which sure helps a lot for alignment and interpretability!

History does bear out the fact that we should worry, but also, the situation with AGI is quantitatively worse in several important respects than the situation with corporations and governments.

Daniel Kokotajlo:

(That said I'm hopeful that significant progress will be made in interpretability, such that in a few years perhaps we'll understand the inner workings of giant transformers better than we understand the inner workings of corporations and governments. Hopefully.)

Michał Zdunek:

Yes, his denial of AI risk is offensive. But he can readjust it, and I'm sure he's capable of that if he just thinks about the matter more deeply for a while.

Ethics Gradient:

The "offensive" part isn't just being factually wrong and unwarrantedly dismissive but doing so while opting to use a word from e/acc (who, to be clear, are the bad guys when it comes to AI risk) that's clearly intended to contemptuously evoke the word "incel." I consider that pretty beyond the pale.

Michał Zdunek:

I agree, but to be clear, it's because he hasn't thought about it at length. He thinks LLMs are just ordinary chatbots, and isn't aware of how people are trying to convert them into agentic AGIs.

Ethics Gradient:

Fair. You are correct that it would be a better outcome for Noah to digest the arguments and adjust his views, if this is a feasible outcome, rather than for him not to and accomplish nothing other than fanning the flames of righteous indignation.

JamesLeng:

Neanderthals were competing for the same ecological niche. Humans vs. silicon-based superintelligence might be more usefully compared to mitochondria vs. multicellular life - which worked out okay for the mitochondria, even if they're not exactly in charge.

Ethics Gradient:

It's not clear to me why AI's ecological niche is or should be less than "any and all available matter and energy in service of whatever goals the AI(s) have."

JamesLeng:

"Available" is hiding a lot of complexity there. If we're not making magical assumptions about their ability to transcend nuclear physics or thermodynamics, there are better and worse sources of matter and energy. Humans are capable of climbing the tallest mountains or exploring the deepest parts of the ocean, but we don't build houses and raise families in those places because it's still fairly dangerous, and more importantly inconvenient.

Silicon-based AIs will be more intelligent than us, but not *infinitely* intelligent, meaning they'll need to allocate cognitive resources among the available problems. If some field of study - such as mature, stable infrastructure - has few prospects for gain (because it's already been fine-tuned by a poorly-documented evolutionary process), potential for devastating loss if meddled with (because it's a load-bearing pillar of their reproduction), and they've got better things to do which aren't gated behind it (such as exploring the whole rest of the universe), they might make a rational decision to mostly leave well enough alone, treating it as a black box for purposes of their other projects and taking precautions against accidental disruption.

Picture a star-god building a slightly idealized copy of Taiwan, thriving humans included, on an intercept vector to some distant gas giant, then being pestered to enclose it with an appropriately moon-sized condom before orbital insertion.

Akiyama:

I stopped subscribing several months ago because of Noah's opposition to AI safety.

If a blogger is unable to understand, reason correctly, or present truthful information about the most important issue of our time, why should I trust what they have to say about any issue? I am no more likely to take seriously someone who ignorantly dismisses the risks of developing AGI than someone who ignorantly dismisses climate change as a hoax.

Bamboo Annals:

Spot on. Solar panels and mRNA vaccines are just stuff. AI is about the definition of humanity.

Aaron Erickson:

I know lots of these folks - and many are really just spooked because they don't understand their own creation. Because generative AI can act in nondeterministic ways at times, they want to put the genie back in the bottle.

But you can't. We have open-source models in the wild that can be improved with basic consumer hardware. Narrow AIs coordinated by decent GPT-4-class models - either here now or in the pipeline - are not just powerful, but can be used to further develop themselves.

The "pause" was never going to happen unless you created a massive totalitarian state to control it. And that cure is worse than the disease.

David in Tokyo:

"We have open source models in the wild, that can be improved with basic consumer hardware. "

I'm not so sure about that. The amount of computation required to train and run these things is phenomenal and astronomically expensive. Basically, they're burning a ton of VC money every week. Loss leader city.

(For Go, though, you are quite right: Katago (download the KaTrain front end and it will include the latest Katago version for you) is devastating, evil, mean, cruel, and kicks my butt something fierce. And all it needs is an RTX 3080 in a fast peecee. Great fun if you are a masochistic Go player...)

Aaron Erickson:

Typically, people are using LoRA as the means to incrementally fine-tune existing models without having to pony up for industrial-class GPU rigs: https://huggingface.co/blog/lora

The technique was the basis for the Google "there is no moat" leak, which challenged the assumption that you need to train from scratch.
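
A minimal sketch of what that workflow looks like with Hugging Face's peft library (the model name and hyperparameters below are illustrative assumptions, not details from the comment or the linked post):

    # Sketch of LoRA fine-tuning with the peft library. Model choice and
    # hyperparameters are illustrative assumptions.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

    config = LoraConfig(
        r=8,                                  # rank of the low-rank update matrices
        lora_alpha=16,                        # scaling factor for the update
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base, config)
    model.print_trainable_parameters()
    # Reports on the order of 0.1% of parameters as trainable; the frozen
    # base weights are why this can fit on a single consumer GPU.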

Kei:

The "there is no moat" leak has been massively overhyped. It was just the opinion of some random Google researcher. And much of it was based on the idea that GPT-4 grades of model outputs are accurate, which doesn't seem to be the case. I also personally know a Google researcher who thinks there is a moat, but if they leaked their opinion it would get far less press.

Yes, people can fine tune with LoRA, but there are a few important caveats here:

- LoRA is fairly limited in what it can teach a model. Most of what I've seen is that LoRA is good at teaching style and unveiling concepts the model already knows, but less good at teaching the model a significant amount of new material

- People are less concerned with current open-source models like Llama 2 70B and more concerned with larger, more powerful frontier models. Fine-tuning especially large models on the order of GPT-4 (rumored to be 1.8T params) would be out of reach for anyone without a large number of latest-gen GPUs, even with quantized training like QLoRA (see the rough arithmetic after this list). Due to the rate of compute growth over time, this will eventually not be the case, but buying ~2-5 years of time seems valuable for developing policy

- While fine-tuning can often be done by individuals or small groups of people, the creation of frontier base models can currently only be done by large organizations. So if you are able to limit the proliferation of more powerful base models through these large-organization chokepoints, you can limit what can be done with consumer hardware
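
Rough arithmetic behind the second caveat, using the rumored (unconfirmed) parameter count from the comment and a typical consumer GPU; all figures here are assumptions:

    # Back-of-envelope check: can a consumer GPU hold a GPT-4-scale model's
    # weights for QLoRA-style fine-tuning? All figures are assumptions.
    params = 1.8e12           # rumored GPT-4 parameter count (unconfirmed)
    bytes_per_param = 0.5     # 4-bit quantized weights

    weights_gb = params * bytes_per_param / 1e9
    print(f"4-bit weights alone: {weights_gb:,.0f} GB")          # 900 GB

    consumer_gpu_gb = 24      # e.g. one high-end consumer card
    print(f"cards just to hold weights: {weights_gb / consumer_gpu_gb:.0f}")
    # ~38 cards before activations, gradients, or optimizer state --
    # far beyond consumer hardware, which is the point above.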

Doug S.:

Does the cyclic exploit (the adversarial attack on KataGo) still work?

Mo Diddly:

I'm not sure you can declare with any certainty that the cure is worse than the disease will be. My best guess is that unchecked AI development will eventually put the equivalent of WMDs in everyone's pocket. This will undoubtedly be followed by an authoritarian push to spy on everyone's every move, enforced by AI agents orders of magnitude more intelligent than anything we have today.

Is there some reason to expect that:

(A) this outcome is unlikely, or

(B) this outcome would somehow be less authoritarian than enforcing a limit on computational power, or training data, or model size today?

Andrew Keenan Richardson:

My experience with AI pause advocates is that they're aware that it's not a great strategy, but they want to communicate their concern while they work on finding more tractable strategies. As you say, the genie can't go back in the bottle, so it's genuinely unclear how to fix a concerning situation.

Michał Zdunek:

Yudkowsky is right - track all GPUs to monitor the emergence of clusters and, if necessary, airstrike a rogue cluster in a foreign country. Require Nvidia to limit sales of powerful chips. Institute heavy fines for AI misuse.

JamesLeng:

The Church-Turing thesis says computation is fungible - any universal Turing machine can emulate any other. The research budget that developed that insight was intended for winning World War 2, and succeeded in doing so. Accordingly, your plan would amount to burning down as much of the Internet as can be reached, then guaranteeing the eventual emergence of AI in the hands of someone you're at war with, who would presumably prioritize using it to win rather than the good of humanity as a whole.

Matthew Green:

I'm a computer scientist and don't share these alarmist beliefs. Many of my influential researcher colleagues also do not share these beliefs - see, e.g., Scott Aaronson. There are no high-profile "AI isn't going to kill us all" petitions going around, but please don't attribute a minority viewpoint to us all.

Kei:

Scott Aaronson is on the linked petition.

Matthew Green:

That's a shame, because I've spoken with him about it and the views he expressed to me aren't reflected in this alarmism.

Kevin Barry:

AI doomers are absolutely a few online rationalists who are overconfident. Check out these extinction tournament results: https://www.astralcodexten.com/p/the-extinction-tournament

Even superforecasters had a way, way lower prediction of AI doom than you say: a 1-2% chance rather than a 50%+ chance.

McG:

1-2% is Still Too High

Armaan Ajoomal:

Appreciate the share - definitely will look further into this.

vtsteve:

Noah forgot to put the ", FUCK YEAH" on his opening graphic. Sad.

Even if LLMs aren't the ultimate solution for ASI, how can anyone think that OpenAI/Microsoft, Google, Anthropic, et al. will just shrug their shoulders and settle for $20/month rather than looking for the Next Big Thing (which will eventually kill everyone)?

Mitchell Porter:

I just asked Bing and it says decelerationism is just a form of escapism. So I guess Noah's right

Alexander Ebert:

I would add that all of the "reasonable" critiques of the vax rollout you just laid out were, at the time, considered straight-up anti-vax statements. Meaning a lot of the anti-vax "invective" was actually fueled by a total mainstream unwillingness to have a nuanced conversation - a refusal that rightly looked suspect, and that then further fueled suspicions.

David Burse:

Agree. My way or the highway is not a great way to influence people. Now let's do Identity politics!

Jonathan:

And the complete lack of discussion of risks versus benefits. I know 6 people who died of heart attacks or had significant changes in lifestyle after the Covid vax, while only knowing 2 people who died of Covid. Both of those could've died of the flu or a similar common sickness due to complications from cancer or being 400 lbs. Sure, maybe it did save all the purported lives, but the way it was rolled out and mandated against previous medical ethics was insane. It has actually caused this decel Noah is talking about, when you think about it.

Mike M:

Not looking up the numbers now, but if you presume the covid vaccine is very effective at preventing death from covid, but still has significant side-effects, it would make sense to see more deaths from the vaccine than from the disease.

To determine risk/benefit, you need to also consider the risks of covid for the unvaccinated, as well as how much the vaccine reduces covid infections in the overall population.

As an example, if the vaccine increases your risk of death by 2%, but it brings the covid risk of death from 5% to 0.5%, its worth taking the vaccine, but the practical effect once everyone starts taking it is you'll see more deaths from the vaccine than from the disease.
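
Spelled out with the comment's hypothetical rates (illustrative numbers only, not real covid or vaccine statistics), the arithmetic looks like this:

    # Worked version of the example above, using the comment's hypothetical
    # rates -- illustrative numbers, not real-world data.
    population = 1_000_000
    p_covid_death_unvaxxed = 0.05   # 5% covid death risk if unvaccinated
    p_covid_death_vaxxed = 0.005    # 0.5% covid death risk if vaccinated
    p_vaccine_death = 0.02          # 2% death risk from the vaccine itself

    deaths_unvaxxed = population * p_covid_death_unvaxxed        # 50,000
    vaccine_deaths = population * p_vaccine_death                # 20,000
    covid_deaths_vaxxed = population * p_covid_death_vaxxed      #  5,000

    print(int(deaths_unvaxxed))                        # 50000 without vaccination
    print(int(vaccine_deaths + covid_deaths_vaxxed))   # 25000 with vaccination
    # Total deaths are halved, yet vaccine-attributed deaths (20,000)
    # outnumber covid deaths among the vaccinated (5,000) -- the effect
    # the comment describes.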

Treeamigo:

The ONS in the UK kept great stats on hospitalizations, deaths, and infections by vaccination status and age group. It was very obvious by the second half of 2021 that vaccination status had negligible influence for anyone under 40-50 years old and that vaccinating kids was mostly senseless. Vaccinating older folks made sense. I think they stopped releasing these stats in early 2022. A problem was that there was no control group, because by then nearly everyone had already had Covid - vaxxed or unvaxxed. Corrupted data. But the key thing is not to make broad generalizations/recommendations about a disease that is dangerous primarily for a subset of the population.

Jo Waller:

Yes, there are many problems with looking at observational data, especially in the elderly.

The unvaccinated were self-selected - it wasn't a randomised trial - and were more likely to be health-conscious; the four-times-vaccinated in particular were more likely to have many co-morbidities and to be encouraged to be vaxxed. There are also the complications of survivor bias.

https://georgiedonny.substack.com/p/update-on-excess-deaths-and-depopulation

Whether one died of 'covid' is also a major problem https://georgiedonny.substack.com/p/there-is-no-covid

Jo Waller:

Also the 'unvaccinated' included those vaccinated within the last 14 days, so any deaths labelled as 'covid' in this group would be misleading.

Jonathan:

So, yes, you're right, but you can only factor in the individual risk-reward, not the societal. For the 65+, the risk-reward is way different than for the 20-something. In addition, the years lost should be considered in any medical treatment, not just the pure death rate. If you have someone who needs to exercise to maintain mental health, the risk of the vaccine is different even within the same age group.

Moreover, this is what Noah misses with this post. I love technology and the improvements made, but with regard to the vaccine, there were so many issues with the data and the mandates associated with it that for him to call people Luddites is insane.

ReadingRainbow:

Except if Covid deaths are only among those with comorbidities and vax deaths are among the young and healthy.

Jo Waller:

The risk of dying of 'covid' for children was effectively zero. According to Ioannidis, in under-70s the infection fatality ratio was 0.05%.

In the Pfizer trial (which was shown, in the appendix, to have hidden deaths occurring in the vaccine arm - making the death rate higher in that arm than in the placebo arm), which only included healthy under-70s, the vaccine was shown to decrease the chances of getting non-specific symptoms, or a PCR positive for some genetic sequences never shown to come from an entity, by 0.85% (about 98.25% of the placebo arm didn't get 'infected' and 99% of the vaccine arm didn't, according to convicted criminals).

So the risk of catching 'it' is lowered by 0.85%, from 1.85% to 1%. Meaning you need to vaccinate about 100 people to allegedly prevent 1 infection, and so you need to vaccinate 50,000 people to allegedly prevent 1 death.

The vaccine brought the risk of death from covid from zero to zero in kids and allegedly from 0.0925% to 0.05% in under 70s.

Would you say it was worth it?

Jo Waller:

Sorry, that should read: the Pfizer jab, with the best efforts of those with enormous vested interests, lowered the risk of dying from 'covid' from 0.000925% to 0.0005%.

Jared:

I remember when the vaccines came out, everyone who wanted one got one. Also, COVID was a unique virus that could re-infect you immediately after recovery, there was no way to get natural immunity to it, only a vaccine would save you. And then the lockdown didn't end, because the vaccines weren't effective enough, then not effective against the new variants, the infection numbers kept going up, and the CNN death ticker kept going up, the vaccines only prevented death, not transmission, and transmission was way worse than death, you needed a mask AND a vaccine to not literally be killing people literally, then you needed a vaccine AND a mask AND a booster AND a second mask, and if you're a trucker who doesn't want a vaccine and you're Canadian, then you're a racist misogynistic Nazi. And what finally ended it all was Russia invading Ukraine.

Kenny Easwaran:

What ended it all was the omicron wave dying down and not being succeeded by a new variant comparable to delta or omicron.

DxS:

Noah, should we really scorn warnings from AI's top research scientists and CEOs -- Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman and Dario Amodei? Each is on record that AI worst-case risks deserve the same care as preventing pandemics or nuclear war.

Shouldn't we trust experts when they caution their field has dangers? Lead poisoning is real. Antibiotic resistance is real. Global warming from fossil fuels is real.

Failing to be careful around real technology risks isn't "owning the decels." It's being as childish and destructive as they are.

I'd like to know why you so contemptuously disagree with the people who actually make cutting-edge AI. Or I'd like to be reassured that you don't.

anzabannanna:

The reason he behaves this way is that it is the nature of his consciousness. To behave otherwise, he must change its nature... or, more accurately: its nature must be changed by something.

Michał Zdunek:

He must question his prior that tech is always good, but that's difficult. Or he has to recognize that AI is not like other tech, but has the potential to create an alien life form.

Kevin Barry:

Yes, because they are extremists. See the results here for a more measured prediction of AI extinction risk: https://www.astralcodexten.com/p/the-extinction-tournament

Daniel Kokotajlo:

On the contrary, the XPT forecasters had no clue what was going on and made embarrassing obvious mistakes: "The most aggressive of them thinks that in 2025 the most expensive training run will be $70M, and that it'll take 6+ years to double thereafter, so that in 2032 we'll have reached $140M training run spending... do these people have any idea how much GPT-4 cost in 2022?!?!? Did they not hear about the investments Microsoft has been making in OpenAI? And remember that's what the most aggressive among them thought! The conservatives seem to be living in an alternate reality where GPT-3 proved that scaling doesn't work and an AI winter set in in 2020." See this thread for discussion. https://forum.effectivealtruism.org/posts/YGsojZYtEsj2A3PjZ/who-s-right-about-inputs-to-the-biological-anchors-model?commentId=y6p8ckiShzW3bLKAw
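
(To make the quoted forecast concrete, here is the arithmetic, plus the widely reported, though unofficial, GPT-4 figure for comparison - a sketch using only those numbers:)

    # The XPT "most aggressive" forecast quoted above: $70M for the most
    # expensive training run in 2025, 6+ years to double, hence ~$140M by 2032.
    cost_2025 = 70e6
    cost_2032 = cost_2025 * 2            # one doubling over those ~7 years
    print(f"${cost_2032 / 1e6:.0f}M")    # $140M

    # For context: GPT-4's 2022 training run was widely (if unofficially)
    # reported to have cost over $100M -- already above the forecast's 2025
    # level, which is the commenter's complaint.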

"Extremists" my ass. You are talking about a whole bunch of leaders in the field.

Kevin Barry:

To borrow a phrase from election Twitter: if you're deskewing the data from the start with the purpose of finding out how it's wrong, you might have some trapped priors.

Michał Zdunek:

If you are calling some of the most legendary and frontier people in a field extremists in that field, you might have some trapped priors.

Daniel Kokotajlo:

I'm not doing that. I'm just disputing your claim that the people who think AI x-risk is a serious possibility are extremists and that the XPT forecasters are the real authorities on the matter.

Robert Merkel:

I'm broadly in sympathy but... (and here goes nothing...)

There's a reason why "Don't Create the Torment Nexus" became a meme.

In the case of AI, we're getting endless search engine spam, faked revenge porn as a service, chat bots that strip-mine the past efforts of millions of knowledge workers of various types to produce answers that are maybe 95% correct but 100% convincing to the credulous - and sufficiently good at generating passable essays that schools and universities will have to go back to examinations in many cases to test that students actually know enough to be able to use the outputs of LLMs wisely.

Next up, I'm sure IVR vendors are already working out how to use LLMs to avoid connecting customers to humans who can actually solve their problem.

I'm not suggesting for a moment that we should abandon recent developments in AI. But I do believe that libertarian techbros should pull their heads in and accept that sometimes society is going to tell you "no, don't do that" or "do that within these guardrails".

David Burse:

There are funded start-ups working on a way for us to replicate ourselves for eternity. This has benefits while I am still alive. Feel like golfing instead of working? No problem: AI-you will be work-you that day. But wait - AI-you is better at your job than real-you. Clients and customers prefer AI-you. You are gone; AI-you is in. You die? No problem. AI-you continues on.

Doug S.:

"AI Stole My Life" by The Cog is Dead

https://youtu.be/0ATuqedB3uo?si=Y-D8KSOmNlkWUhQd

David Burse:

Clever. Thanks!

Joe:

You are not the AI version of you, but AI-you will have lots of reasons to convince people otherwise - most obviously for access to your bank account.

David Burse:

Time to put it all in gold and bury it out on the property. Let AI try to dig it up.

JamesLeng:

That sort of academic dishonesty was a pre-existing problem which LLM-written essays merely made more urgent. No argument from me on the broader need for guardrails, though. Nobody has yet done for memetics what the first world war did for industrial chemistry and international diplomacy, and I suspect the testing and debugging process won't be much more pleasant. https://shadowjackery.tumblr.com/post/721059908254121985/the-gladdest-thing-under-the-sun

Rina:

Accelerationism is a fundamentally nihilistic and misanthropic ideology—Nick Land is not, and never will be, a positive and optimistic person. Why do you provide cover for people who want to replace all life with that which most efficiently maximises entropy? It feels like you’re operating purely based on vibes, and I find that pretty disappointing.

Alistair Penbroke:

There are some really deep flaws in this analysis, beyond the specifics, to do with how you're defining progress and acceleration. From the perspective of the people you're attacking, all this is mirror-world logic. To them, you are the anti-science, ideologically motivated decelerationist, and they are the progressives.

For example, insisting on 100% renewables when there are no workable solutions for intermittency results in a DEceleration and a reduction of the rapid progress that has come from using nuclear, coal and gas. Germany has slammed into this at full speed and is now sharply decelerating due to sky-high energy prices. Renewables haven't avoided this outcome. Instead, in a desperate attempt to avert grid instability, they have reopened coal plants. All this was predicted years ago by the so-called "decels", who warned that this strategy would slam the brakes on progress. They were ignored by self-proclaimed "progressives", and the resulting actual economic deceleration is now evident to see.

Likewise, the mRNA vaccines will forever be associated in many people's minds with anti-science pseudo-progress. Dozens of claims about these things were presented as 100% proven, indisputable science and then walked back within months. Science done properly makes gradual but solid forward progress. It doesn't randomly jump around the map making an endless stream of bold claims with 100% confidence that turn out to be completely wrong. That isn't science and it's definitely not progress: it's pseudo-science and anti-progress.

The problem here is that you classify these things as progress because progressives tell you they are progress, therefore anyone who disagrees is just inexplicably confused. But to normal people (non-progressives), a thing is not automatically progress just because it's new. It must be shown to actually improve people's lives first. Only then does it get to be classed as progress. All the examples you picked are contentious because they fail this basic test. For example, 100% renewables all the time would yield no progress in normal people's lives, because electricity is fungible and how it was generated makes no difference. The argument that it will yield progress is actually an argument that it will avoid a regression due to catastrophic weather, not that it would itself yield better living conditions than today. So the argument that renewables = progress is already on thin ice, because merely preserving what we've got isn't, strictly speaking, what progress means. And if you don't think climatologists are honest or competent, their predictions aren't convincing, and the claim that renewables = progress falls apart completely. At that point renewables are anti-progress, because the huge effort made to replace generation capacity is just wasted time and money that could have been spent on things that actually improve people's lives.

As for AI, that's obviously only progress if you're the one replacing other people with it. If you're the one getting replaced it represents a significant deceleration in your life prospects. No surprise that it's not universally viewed as progress.

Kenny Easwaran:

Science *does* jump around with an endless stream of bold claims that turn out to be completely wrong. It’s only because you don’t follow scientific journals that you didn’t realize this until a bit of science was finally relevant to your life.

Alistair Penbroke:

The word "science" in the sentence you're responding to refers to the abstract ideal, what lay people think science is or should be. Obviously the mismatch between what science claims to be (calm, thorough, rational) and what an overfunded academy has turned it into is huge, hence the tizzy over all the anti-progress pagans that have suddenly cropped up everywhere.

Kenny Easwaran:

What do you mean "overfunded"? Science *is* thorough, but it has never been the calm, rational thing its practitioners like to imagine.

Alistair Penbroke:

I mean governments and foundations allocate too much of their budget to research grants. If you overfund research then you get a lot of people chasing too few good ideas, leading to fake "discoveries", fraudulent numbers, excessive levels of compliance and so on.

A lot of the problems with science would go away if funding were slashed to 0.1% of the current level for example. It would force much more rigorous prioritization, and many of the people who can't hack much more than endless modelling studies would exit research, much to the betterment of mankind.

Edit: and I should mention that I knew about this "jumping around the map" problem before COVID which is why I was skeptical much earlier than most people were. Oh boy, was that a frustrating experience....

Kenny Easwaran:

I'm not sure why it's a good thing to have fewer people chasing good ideas, fewer fake "discoveries", and fewer fraudulent numbers, if you're also cutting actual good ideas, real discoveries, and real numbers by the same amount.

It's true that a lot of the problems with science would go away if funding were slashed to 0.1% of the current level. But that's just because a lot of science would go away. A lot of the benefits of science would also go away.

Similarly, a lot of the problems with elections would go away if we just eliminated 99.9% of elections.

I don't see why it would better mankind to eliminate most of the ordinary not-particularly-talented scientists.

Alistair Penbroke:

You're assuming linear scaling, I'm assuming that when funding is scarce what's left will be focused on the good stuff, so quality would go up. Quality isn't spread around evenly.

Also a lot of the fraud that happens is because grant funding is a firehose. It's an easy way to get money and status, nobody is gonna check the paper backing your TED talk, or if they do it'll be in a decade and you can just act all offended and bluff your way through it. Dry up the funding and those sorts will drift away to other easier scams.

But I'd actually be OK with just total defunding, at this point. Is science something governments+charity should really be involved with at all? That's not obvious to me. Companies generally do their own research because so much academic output is of no use to them, and they're the primary consumers of science. By way of analogy, do we expect charities and universities to develop their own smartphones? No, it's obvious that the commercial sector is far more proficient at that. There are lots of such examples where it's just obvious to everyone that the public sector cannot contribute. Perhaps science is the same way, and in a generation or two people will look back on governments running science the same way we look back on there being exactly one telephone company or one steel company.

Maxwell E:

Spot on. This is the worst piece I've ever read from Noah, and it shows some glaring holes in his ability to temper his priors. I just wish he weren't so arrogant about pretending a topic outside his wheelhouse has no nuance whatsoever.

Buzen:

I really don't understand why all those shouting that solar energy is ridiculously cheap refuse to discuss why the many expensive subsidies can't be eliminated. Also, the areas with the most solar installations, like California and Germany, have the highest electricity prices - why is that? I'm not a decel, but moving all the solar and wind subsidies into nuclear fission/fusion and advanced geothermal would be more accelerationist than propping up an already-cheap technology.

Melvin:

> Also, the areas with the most solar energy installations like California and Germany have the highest electricity prices, why is that?

Backwards causality maybe? The places with the highest electricity prices are the places where solar makes the most sense so they'll be the first places to adopt it.

The reason that solar can simultaneously be cheaper but hard to justify is that it costs you a lot more money in the short term and a lot less in the long term. The other problem is that if prices continue to come down, you might be better off waiting another five years to get better panels cheaper.

Jo Waller:

Solar is cheap, and many companies are running at a profit without subsidies, but the consumer pays a rate set by the highest-cost fossil fuel - usually gas - in the mix of sources your electricity comes from.

I don't understand why an industry that pays CEOs $10 million in bonuses and makes trillions of dollars still gets billions in subsidies, grants and tax exemptions. Clue: it's not renewables https://georgiedonny.substack.com/p/will-we-be-bankrupted-by-net-zero

Treeamigo:

Great point about embracing tech advancement, but the examples chosen are positively tribal.

Solar's cost is a product of Moore's law and lightly regulated Chinese manufacturing. I live in CA and use passive and active solar, and I know its advantages and limitations. One could easily use solar as an example of why governments shouldn't force choices and disrupt markets to please donors and activists (though fundamentally that is government's prime objective). California is a perfect spot for solar, given that air-con demand spikes on sunny summer days and that excess generation in summer afternoons and early evenings could be directed east, to time zones where the sun is already setting but the air con is still on - yet it has completely mismanaged its grid and power-generation infrastructure and has the highest electricity costs in the continental US.

As for Covid, you cannot disassociate people's vax stance from the government's actions (lockdowns, school closings, masking, massive subsidies, vax passports and mandates). It's all tied up in civil liberties. And if one were to do a cost/benefit or harm analysis, I am not sure the first example I would pick is one comparing a 99.97 pct survival rate to a 99.995 pct survival rate.

For that difference was it worth sacking tens of thousands of soldiers and first responders from their jobs? Was it worth possibly permanently damaging the academic confidence and psyches of tens of millions of kids? Was it worth the spike in deaths from alcoholism, drug abuse, suicide, undiagnosed cancers? Was it worth trillions in economic damage followed by massive debt and inflation?

Personally, I think I'd steer very clear of Covid as an example of the brilliance of "science" and of our government's ability to use it to make sensible decisions. There are many better examples, and if I were to choose to mock anyone for extremely costly and unscientific beliefs, it would be the fearful white progressive pro-lockdown, vax-passport tribe who made vaccination a wedge issue (for instance, the Biden admin forcing a vax mandate for employment when the evidence was in that the vax didn't prevent transmission and the majority of people had already had Covid). Is that science, or divisive politics to please one's base? By the way, I have had four mRNA vaxxes, am not anti-vax, and during Covid worked on a project for a state government forecasting infections, deaths, and hospitalizations.

The best argument for tech is simple. Our lives are more comfortable, longer/healthier, easier and more data enriched due to tech. I grew up seeing how my grandparents lived in the 1960s with early 20th century tech (coal furnace, wringer washer, amputation for diabetes) and it wasn’t pretty. Not that all of todays tech would have made them happier but it would have made life a lot easier. And of course they grew up in an era where people rode horses.

That our society and culture have poorly adapted to tech and have hollowed out social and economic life for a large percentage of the population is not tech’s problem- it is our problem.

As with Covid vaxxes, though, one should not look at the declinist or Luddite issue in isolation, but enmeshed with everything else going on (culture plus tech). I've done a lot of reading from the late 1800s and early 1900s on the issue of tech and society - there were a lot of smart people thinking about those issues back then. Some of the ideas were crackpot (some gave us nice craftsman furniture and architecture), but many took the social and cultural issues seriously while not being anti-progress.

Tech advancement is going to happen. We can't control that (though we will make some temporarily bad decisions on subsidy and regulation).

Tech will take care of itself. Personally I think we need to devote more time and attention to social and cultural issues. Not that I have any answers.

anzabannanna:

> As for Covid, you cannot disassociate people’s vax stance from the governments actions (lockdowns, school closings, masking, massive subsidies, vax passports and mandates).

You have it backwards: he *can't not* disassociate it, because that's how his consciousness renders it: disassociated. Noah is speaking his truth; he is not joking or lying, though he is speaking untruthfully in an absolute/shared sense of the word (a sense humans have little notion of here in 2023).

Treeamigo:

His stats are accurate, and I know there are some people who are truly/primarily anti-vax (RFK Jr), disassociated from the general civil-liberties/authoritarian concerns, but I fully agree that most of the left-leaning Covid wedge-issue people focus more on the vax and try to sweep everything else (much more consequential) under the carpet.

It is also true that too many of the Covid anti-vaxxers have been sucked into false info black holes.

The lies they've been told and the manipulation and gratuitous harm imposed by the government and the elites certainly explain mistrust of narratives pushed by those people, but they don't excuse gullibility when it comes to consuming narratives pushed by other people with questionable motivations. Distrusting everything on every politicized issue would be a better approach. Nor do I excuse willful blindness of progressives when it comes to the incredible and lasting damage caused by their unscientific Covid policies.

Andrew Bronson:

Great comments, johnny and Tree.

I think Noah has drunk too much Kool-Aid.

Solar panels come from China and are made with coal-fired electricity and non-renewable inputs. In 25 or 30 years, they will all be in the landfill (hopefully not). I have a $35K off-grid setup and I only hope it outlives me.

I'm a cattle vet and got in line for the first mRNA vaccines offered in Canada. My motivation was to help with herd immunity. Never happened. And the Covid vaccines join the rest of the coronavirus vaccines in the veterinary world in that they produce poor herd immunity. Most have negligible utility in veterinary medicine.

I don't know enough about AI to comment. I suspect humans will be able to "throw the switch" and thus end the sci fi aspects of AI.

So I keep reading Noah and the comments so I can get a wide range of opinions of what intelligent people are thinking.

Please keep commenting.

Alistair Penbroke:

The sci-fi aspects, in fairness, are not what's being targeted in this article, but rather fear of obsolescence.

On the other hand, it's kinda unclear how that point links to the rest of his argument. Fearing that tech will replace you is pretty normal, right? That isn't a generalized "fear of progress"; it's a specific fear of not having a job tomorrow.

David Burse:

"or do I excuse willful blindness of progressives when it comes to the incredible and lasting damage caused by their unscientific Covid policies"

Old sayings: "Fool me once, shame on you, fool me twice, shame on me" "quit pissing on my leg and telling me it's raining". These are old sayings for a reason.

Andrew Joe:

We live better than kings did 100 years ago thanks to technological progress, and decels want to slow it down?

afdjkhgakdfjhbl:

Kings don't live paycheck to paycheck.

Kenny Easwaran:

Don’t they? What savings does a king have that protect him in case he loses his job?

Bobson:

Can we base it on an anomaly, like the Dutch king? He's a KLM pilot.

Jason:

Seems reasonable to pop one’s head up now and again to assess what the priorities should be for oneself and one’s society.

Joe:

Random example: what accelerationism promised was smart homes that you could control more easily and quickly than traditional control mechanisms. What we got was a mix of decent hardware, bad software, gross privacy invasions, and security risks. This is because accelerationists fundamentally base their visions on science fiction stories, and not on any extrapolation of what real people do.

Buzen:

Depends on what you use. Apple HomeKit doesn't have bad software, privacy, or security issues. If you use Amazon or Google and cheap Chinese "good" hardware that requires using various sketchy Chinese apps, then you get the other problems. You get what you pay for, and nobody is forcing anyone to use home automation.

anzabannanna:

See: climate change. Unless it interferes with your image - then feel free to ignore it.

[Comment deleted, Oct 4, 2023]

Buzen:

Computers are responsible for most of the other life improvements of the last 50 years, so not really a mixed bag. Some big improvements you missed in the last century: instantaneous communications wherever you go; safe, fast travel by trains, planes and automobiles; GPS; nuclear fission; Ozempic; huge increases in crop yields and food supply; on-demand entertainment; clothing cheap enough to throw away; and lots of new music and art due to cheap, effective computerized tools. I do agree on the new vaccine approvals, though. The FDA is overzealous on most medication approvals, but waved through the new Pfizer booster with data on only a few mice.

Doug S.:

Instantaneous communications? You mean, like, the telegraph and radio?

Doug S.:

Indeed. Most of the modern world came into being in the 1920s.

Don Bemont:

I think you are a cheerleader for the right side here. It's true, we must move forward technologically. It's just that there is an actual reason why, and then there is the popular reason why. You focus on the latter.

The truth is that any civilization's place in the world depends heavily on its technological advancement. Fall behind our rivals in tech, and we will fall behind militarily and economically. Since this isn't a video game, we cannot fine-tune the aims of our technological explorations; we either promote or we dampen, and either way we live with the results.

Bottom line is that support for science and technology is one of the most important ways that we act patriotically. But this is rarely stated.

Instead, we get a lot of happy talk about how science and technology are improving the lives of everyday people. Certainly entertainment, communication, medicine, energy, transportation, etc. have come a long way since I was a child, and most people would be outraged if told to give up their favorite specific items from that list. Life is easier, safer, and more fun. That's a big deal.

However, technological advances always change life, typically in ways that are almost never foreseen, often not thought about much in retrospect.

They worried that the automobile would scare the horses, but in the end it hollowed out cities.

If anything is clear from history, it is that technological change brings unpredictable societal change, and that there will be winners and losers. Whether you are looking at global warming, inner city blight, small town decline, the death of family owned farms, political polarization, rising economic inequality, the rise in emotional problems including suicide -- the list goes on and on -- technological progress has its fingerprints all over them all.

Thus, we are in a bind. As technological progress accelerates, the number of people disadvantaged by that progress rises; the disaffected portion of society grows, and many of them sense that progress has not been good for them. As this number rises into the neighborhood of half the population, both democracy and technology are put at risk.

Yet, the obvious solution -- slow down the technology -- is a certain loser for our civilization because others on the planet will not slow down. And would use their advantage to gleefully stomp on us, particularly after all these years of our lording it over them.

So yes, Noah, you are on the right side on tech development, but I am uneasy with the "tech is obviously good for everyone" cheerleading. Simply the fact that we would need to force people to use the technology ought to clue us in to the way this particular message is faltering.

JamesLeng:

I'd say the even more obvious solution is a Georgist land value tax and UBI. Those personally disadvantaged by progress might be a lot more willing to tolerate it if they were at least getting a legitimate no-questions-asked cut of the action, in cash, rather than grueling commutes between dead-end jobs and tiny shared apartments, technological table scraps, and conditional-benefit programs competing with targeted advertising to see who can be more obnoxiously intrusive.

Don Bemont:

I think that people underestimate the extent to which "disadvantaged by progress" extends into non-economic areas.

For example, every new technological ability replaces a bunch of people's particular abilities, which can be central to their identity and sense of status within their group. Invent a bird-song-identifying app, and in each county across the country, the person or two renowned among bird watchers for their amazing skill at picking out bird songs is suddenly no longer special.

Instead of grandparents knowing that they offer invaluable wisdom to their grandchildren, they hardly know what those kids are talking about. And so they feel less useful.

Maybe most central (and politically problematic) is the way that the technology makes it increasingly difficult for parents to be the ones raising their children. Every round of communications technology (starting way back, but accelerating rapidly) intrudes more forcefully so that outsiders (generally driven by profit motive, not by concern for the kids) guide children's sense of the world.

That just scratches the surface. You could fill a book with the examples.

Don't misunderstand, I am not advocating tamping down technological progress -- that would be suicidal. But at the same time, it's important to recognize the number of individuals who are harmed in all sorts of ways. The threat might not quite be comparable to climate change, but it does have the potential to blow up democratic societies in particular.

JamesLeng:

Oh, absolutely there are non-economic aspects to it as well - there's this whole tabletop RPG thing I've been working on for years wrestling with how to properly model "creative energy" and "political capital" and "legitimacy" and such on societal scales - but if basic material prosperity and security could be managed by most people with no more than an hour of effort per week, they'd have plenty of time and energy to figure out the rest.

David Hugh-Jones:

Another really simple reason for decelism might be that America is aging. Maybe old people are more fearful of change and less enthusiastic about the future.

George Carty:

Indeed: young people yearn to change the world while old people just want a quiet life.

anzabannanna:

Some young people would simply like to be able to afford food and a home, maybe reproduce....

David Burse:

I'm an "older" person, and I'd like both a quite life and for younger persons, such as my kids to afford food, home and reproduce already.

[Comment deleted, Oct 4, 2023]

David Hugh-Jones:

No, I don’t think change is always a good thing. But the article was about cases where change plausibly is good.

[Comment deleted, Oct 4, 2023]

David Hugh-Jones:

I don't know why you think that about me!

[Comment deleted, Oct 4, 2023]

David Hugh-Jones:

I promise you, I don’t think all fear is irrational.

Buzen:

In Afghanistan there are no out-of-wedlock births or cellphone social media, and elders are respected more than youth. Is their society one you would rather live in?

pourteaux:

Excellent - and I would only add some other popular decel phobias: GMOs, EVs, and SSRIs.

GMOs: safe, and they provide cheap food abundance.

EVs: clean, and more performant than gasoline cars.

SSRIs: the best we've got for depression and anxiety... for now!

Expand full comment
ReadingRainbow's avatar

SSRIs? What? They are wildly overprescribed, they generally don’t work, and they have common, serious side effects that are often ignored.

SSRIs are a perfect example of a “technology” that was embraced without proper understanding and has had terrible consequences down the road.

But hey, we made a lot of money.

Expand full comment
pourteaux's avatar

sad to say you are wrong about this one!

Expand full comment
Jason Christa's avatar

I thought SSRIs work great for people with severe depression, but there is no evidence at all that they work for people with mild depression.

Expand full comment
ReadingRainbow's avatar

You’re absolutely correct, not sure what they are on about.

https://www.madinamerica.com/2022/08/antidepressants-no-better-placebo-85-people/

Expand full comment
pourteaux's avatar

Looks like you have it reversed - those with severe depression often need an SSRI with something additional, or need to switch to a more aggressive option like an SNRI or MAOI. Those with mild depression may not need any medicine at all - just therapy, or a low-dose SSRI.

Expand full comment
pourteaux's avatar

Furthermore, SSRIs are generic medicines... a month of Prozac is $10 without insurance! It's not a pharma cash grab.

Expand full comment
ReadingRainbow's avatar

They are now, 40 years later. Don’t worry - they keep inventing new, non-generic ones to add on, with very little supporting evidence.

Expand full comment
David Burse's avatar

We live part-time on Kauai, and there's no shortage of anti-GMO crazies. (The same people will, of course, want to round up anyone hesitant about the COVID vax into camps, but I digress.)

Also EMF. Some years ago, the local utility co-op decided to install smart meters that radio in your power usage once a month for a few seconds, to reduce meter-reader expense. Oh, the howls. "I had to move my daughter's bed to the kitchen so she wouldn't get brain cancer," says a woman calling into the radio station on her cell phone...

Expand full comment
Sebastian Rako's avatar

This was a great read. Thank you for putting this together, I enjoyed it thoroughly. Are you on Threads by any chance?

Expand full comment
LudwigF's avatar

Good article. Thanks very much.

Expand full comment
afdjkhgakdfjhbl's avatar

Can you explain how we definitely know that AI isn't going to take away some people's jobs? And, a bit more far-fetched but as far as I know entirely possible, how do we definitely know AI won't trigger some sort of disaster (Skynet, etc)? I don't like being a "decel" but I've never heard any coherent arguments that AI is safe.

Expand full comment
Treeamigo's avatar

I have no idea whether "AI" is safe - or even what AI will be, beyond what it is right now: another way to sell adverts. I have used machine learning for years, so I am comfortable with "AI's" antecedents and their potential limitations (which come down to us).

Much of our productivity and economic growth comes precisely from putting people out of work. Or rather, the activities the displaced people move into increase society's product.

Expand full comment
afdjkhgakdfjhbl's avatar

The only real-world application of AI I've seen is students trying to cheat on essays. I don't really know what other practical purposes it has. But I'm generally opposed to anything that puts people out of work.

Expand full comment
Alistair Penbroke's avatar

It's a pretty big upgrade to the productivity of us programmers.

I'm using an LLM right now actually. It both helps me write this program and is used by that program. It's for a fairly typical business use case that would otherwise need a lot of tedious manual work.
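
(For the curious, the shape of it is roughly the sketch below, in Python, assuming an OpenAI API key in the OPENAI_API_KEY environment variable. The ticket-classification task and the helper name are illustrative stand-ins, not my actual code.)

# Minimal sketch: one way a program can delegate tedious text work to an LLM.
# The "support ticket routing" task here is a hypothetical example.
import os
import requests

def classify_ticket(ticket_text: str) -> str:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system",
                 "content": "Label this support ticket as one of: billing, bug, feature."},
                {"role": "user", "content": ticket_text},
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()

print(classify_ticket("I was charged twice for my subscription this month."))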

Expand full comment
Mikhail Amien Johaadien's avatar

The historical evidence suggests that jobs will shift - some people will have to move jobs, but you end up with more jobs overall. Note that AI is special in that it benefits the middle ability bracket the most. All the studies suggest that this should help reduce inequality in productivity - so it's actually quite a progressive technology. This is because it raises the quality of output for the average worker - the best don't get that much benefit...

Expand full comment
afdjkhgakdfjhbl's avatar

Maybe any of this could be explained in terms the layman would understand? The average American will continue to doubt and dislike AI until somebody dumbs it down for us.

What does it actually do? How is it not going to trigger some sort of disaster/apocalypse? How is it going to reduce inequality? I've looked online for answers to those questions, and I asked in a comment here yesterday, but most answers have way too much jargon. It's not just me who tunes out jargon; it's pretty much everybody.

Expand full comment
Mikhail Amien Johaadien's avatar

In terms of what AI actually does? It's a thinking machine. I personally don't think it's that different from what goes on in our brains - just not as effective. It's only a danger insofar as anything that can perform logic and learn is a danger. It's not super-intelligent; it's just got some thinking ability. There are 7.9 billion thinking machines already in the world - why be afraid of a few more?

Expand full comment
Mikhail Amien Johaadien's avatar

The easiest way to explain is via an example. Image generation is one - if I'm a less able illustrator/painter, I can use AI to fine-tune my images and compete with the best illustrators out there. The best guys can already generate whatever they want, so they don't get as much benefit. Or programming - a less able coder can use Copilot to help write complicated code, while a good programmer can already write the code without any help. I'm not making a moral judgement, btw - it's just that the net effect is larger for those less skilled.

Expand full comment
Buzen's avatar

There are many studies showing it improves the performance of workers, but the person losing the job will be the one whose performance is worse than that of a worker using an AI. Someone opposed to anything that puts people out of work is a decel who would have been happy in their buggy-whip factory job, saved by their desired ban on the automobile.

Expand full comment
David in Tokyo's avatar

FWIW, as someone with an MPhil (all-but-thesis) in AI (1984, under Roger Schank), allow me to point out that the current round of AI is pure BS. There's no there there. For example: how do LLMs do multiplication? They look up the particular problem at hand in the database. If the particular problem at hand (e.g. 1024 x 365) isn't in the data set, it says something stupid. The LLM technology is not capable of identifying abstract operations, recognizing that such an operation is being discussed, and performing such an operation when a specific answer is required. (Long story short: LLMs are template-instantiation programs, not reasoning programs.)

You've been told "neural nets are a model of neurons," right? That's a lie. A "neural net" "neuron" has under 10 inputs, under 10 outputs, and all of those are to adjacent "neurons." An average mammalian neuron has hundreds of inputs, thousands of outputs, and accepts inputs from, and provides outputs to, enormously distant other neurons. Oh, yes, and real neurons perform logical operations, not just sum-and-threshold, on subsets of their inputs. "Neural nets" are trivial; real neurons are enormously complex. You've been lied to.

Of course, like my advisor, I do believe that computers can "become intelligent." But that will require us to understand what "intelligence" is, and we're not working on that.

This isn't "decelarationism", it's calling BS BS. Which is a rather different thing.

Expand full comment
Alistair Penbroke's avatar

Your numbers are way off. LLM neurons often have tens of thousands of connections and are connected between layers.
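
A back-of-envelope count, using GPT-3's published width (d_model = 12288; the 4x MLP multiplier is also from the paper):

# Connection count for one hidden unit in a GPT-3-scale MLP layer.
# d_model = 12288 is GPT-3's published model width; the MLP hidden layer
# is 4 * d_model wide, and every hidden unit is densely connected to the
# layers before and after it.
d_model = 12288
incoming = d_model   # one weight per input dimension
outgoing = d_model   # one weight per output dimension
print(incoming + outgoing)  # 24576 connections for a single "neuron"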

> LLM technology is not capable of identifying abstract operations, recognizing that such an operation is being discussed, and performing such an operation when a specific answer is required

Yes it is. Go play with the browser plugin or OpenAI functions API to see it do exactly that.

Also, are you aware that using analogies isn't the same thing as lying? I am viewing your text through a "window" that isn't made of glass, on a "desktop" that isn't a desk, using a "mouse" that doesn't squeak. Computer science is full of analogies to the natural world because we need words for new inventions, this isn't some weird conspiracy.
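
To make that concrete, here's a minimal sketch against the functions API. The "multiply" tool name and schema are my own invention - that's the point, you declare them - and real code would handle the case where the model answers directly instead of calling the function:

import json, os, requests

# A "multiply" tool is declared up front; the model recognizes that the
# prompt is asking for that abstract operation and emits a structured
# call to the tool instead of pattern-matching the digits.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "What is 1024 x 365?"}],
        "functions": [{
            "name": "multiply",
            "description": "Multiply two integers exactly.",
            "parameters": {
                "type": "object",
                "properties": {"a": {"type": "integer"},
                               "b": {"type": "integer"}},
                "required": ["a", "b"],
            },
        }],
    },
    timeout=30,
).json()

call = resp["choices"][0]["message"]["function_call"]  # absent if the model answered directly
args = json.loads(call["arguments"])                   # e.g. {"a": 1024, "b": 365}
print(args["a"] * args["b"])                           # 373760, computed by our code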

Expand full comment
Joe's avatar

Intelligent beings don’t need to ask Wolfram Alpha to successfully add numbers, though. They just need paper if the number is too long for short-term memory.

Expand full comment
Alistair Penbroke's avatar

I'm pretty sure a lot of otherwise quite intelligent people would fail to add large numbers reliably using longhand arithmetic on paper. And smart people know they might make a mistake whereas a calculator (or Wolfram Alpha) would not, so they'll reach for a tool. Just like GPT will do when given plugins or API functions.

Expand full comment
Kenny Easwaran's avatar

You’re wrong about how LLMs do arithmetic. They learn patterns in how prompts connect to continuations. Some of those patterns are commonly repeated questions, but other patterns are not. They learn that when you multiply two three-digit numbers you get a five- or six-digit answer, and they also learn something about the inputs that makes the difference between five and six digits. They learn that when your input ends with a zero, the output will as well. If they learn enough of these patterns, then they have just as good an understanding of multiplication as any human.

(Note: I don’t say they get as good an understanding of multiplication as a symbolic, rule-based AI. There’s no good evidence that humans are actually symbolic and rule-based, though it *feels* like we are a lot of the time, because we’ve picked up enough patterns to pretend that we are. But even the most educated people break down and go with their intuitions and feelings a good fraction of the time, just like these neural AIs.)
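
(If you want to check that digit-count regularity yourself rather than take it on faith, a brute-force Python check does it:)

# Brute-force check: the product of two three-digit numbers always has
# exactly five or six digits - a regularity a model can absorb without
# memorizing any particular multiplication problem.
lengths = {len(str(a * b)) for a in range(100, 1000) for b in range(100, 1000)}
print(sorted(lengths))  # [5, 6]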

Expand full comment
anzabannanna's avatar

This comment is such an awesome demonstration of Maya in Hinduism.

Expand full comment
John O’Toole's avatar

Congrats on finding a new chart for the reduction in solar costs!

Expand full comment
Buzen's avatar

Here’s a chart of electric rates in California, which has the most installed solar power. It seems to be the inverse of the cost chart - why is that?

https://www.cpuc.ca.gov/industries-and-topics/electrical-energy/electric-costs/historical-electric-cost-data/bundled-system-average

Expand full comment
Doug S.'s avatar

Because the places where people pay high prices for electricity are the places where you can make (or save) the most money by generating it?

Expand full comment