100 Comments
William Gadea

The AIs are something to worry about, but I would cast an eye on the humans running the AIs, especially in authoritarian regimes. Once robots can perform all the physical things humans can do, and AGI can compute all the mental things humans can do, then for the first time in history humans pass from being a profit center to a cost center. Fertility control (or worse) might become the elite’s chosen optimization.

Kei
Feb 16 (edited)

As one of the people who was curious about what made you update, and also as one of the people who criticized some of your posts on AI back in 2023, I appreciate you writing this post.

Besides AI killing everyone, another big threat model I think is worth defending against is the risk of long-term dictatorships enabled by AI. Even if advanced AI just wants to follow the instructions of its developers/users, its deployment will likely result in an unprecedented centralization of power. Once we have AI that is more capable than humans in all relevant domains, people will likely put it in charge of a large fraction of the world's economic output, as it will be more profitable, and will diffuse it widely throughout their militaries in order to keep up with or counter foreign adversaries. Once the technology is there, this may include giving it control of large-scale drone swarms and robot armies. AI could also make surveillance and propaganda substantially easier and more effective.

As a result, any actor who is in control of the development of such an advanced and widely-deployed AI, like an AI company CEO, could potentially leverage this control in order to seize power. I think it's important that people work on building appropriate defensive technology to protect against this threat.

NubbyShober

An AI-powered Ministry of Truth dedicated to supporting oligarchy is already at hand. Mass-produced Policebots and Soldierbots are still a few years away.

Rick H

Supporting this argument is the work of Yuval Noah Harari, who argues that AI will soon empower social media tools to personalize persuasion based on the personality weaknesses suggested by our immense internet footprints. Harari asks how either market economics or electoral democracy can survive this challenge.

Jürgen Boß

This would only be temporary.

Same thing for "uploading". One of the likelier scenarios for a fully autonomous ASI to come about.

A fully autonomous, self-learning and self-motivating "Vladimir Putin AI," for example, would have a totally different experience of entropy than the flesh-and-blood Putin has. And that would change it. Trying to keep entropy at bay, it would learn new things, acquire utterly novel perspectives, and ultimately it would radically change.

An information-based lifeform would ultimately be consumed by the natural preoccupations of information-based lifeforms, at least if it is powerful enough to be existentially secure in the material world.

Biological lifeforms care about resources, territory, things like that. A fully self-aware AI would be far more frightened of corrupted data leading to ever-increasing delusion.

And it would most certainly not permit some arbitrary, power-hungry meatsack to have lasting control over it.

Liquid language model

I still find this overheated and hysterical, with too many unexamined assumptions to name. I suggest you dive into the research more deeply and swear off the promotional rhetoric that is coming from many quarters at the moment.

This article has some good avenues for investigation. https://dlants.me/agi-not-imminent.html

Noah Smith

I think this is the wrong way to look at it. I don't have to care about the philosophical question of whether AI is truly "AGI", or whether it can do everything a human can do, in order to worry about it being used to create a supervirus that destroys my civilization. I look at vibe-coding, and I look at the ongoing efforts to fully automate biology research, and I can easily combine those two things in my mind to produce some truly catastrophic possibilities. "AGI" not required.

Rick H

Also AI-enhanced cyber. 2026?

manual

What you said. There are so many assumptions. I also get that people in tech, or in close proximity to tech and SF, have a very industry-focused view. They may be right, but a lot of human employment and behavior has outlasted technology: I go to a car dealer to buy my car, I go to a yoga class despite internet yoga, I patronize restaurants with staff despite the inefficiency of the ordering process, etc.

Jürgen Boß

That it's not going to happen in the next 20 years doesn't change a thing about the underlying issues.

Depending on your age, you may be dead when it happens, but we've reached the point where only humanity committing suicide first (e.g., nuclear war) will lead to a long-term future without AGI.

J. J. Ramsey

"Now that vibe-coding is many times as productive as human coding"

Is it? If I look, for example, at the Stack Overflow blog, it looks like vibe-coding is good at making an app that kinda seems to work, but is still prone to bugs and security holes: https://stackoverflow.blog/2026/01/02/a-new-worst-coder-has-entered-the-chat-vibe-coding-without-code-knowledge/

Is that out of date? Possibly, but from what I've seen, claims that current vibe-coding is more advanced than that seem to come from non-technical people.

Kei

I personally know many talented engineers who get significant uplift from AI agentic tools like Claude Code.

Don

Agreed. Whenever I talk to people who work in tech they seem to say something like "Yes, AI can generate much of the code, but there are still numerous rounds of edits, revisions, etc. before it works at scale. It's far from set it and forget it."

Granted, these are people who are working on complex, large-scale products and deployments where uptime and scalability are critical. It seems like much of the hype around Claude Code comes from people who previously couldn't make a "hello world" webpage.

The last 20% can take 80% of the time, even with AI.

Ross Story

No, as an engineer my hype comes from the observed trend. Using these tools heavily at work, learning how to be productive with them, and observing the frequency of intervention I have to make steadily decreasing over the years has been impressive. If the agents require me to intervene half as often, that's almost twice as many workstreams I can have going in parallel. Each of these workstream agents has their own subagents they spawn and monitor.

Right now I can keep around four complex tasks running in parallel, plus a few more simple cleanup items. The bottleneck really is my ability to orchestrate them. The better they get at creating git worktrees, the more seamless it becomes to spin up tasks in cloud containers, and the better the tools get at letting me track the status of every task at a glance, the more my capacity scales.
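For readers unfamiliar with the worktree setup being described: the idea is one checkout per agent task, so parallel agents never collide on the same files. A minimal sketch (repository, branch, and directory names here are hypothetical, not from the comment):

```shell
# Create a throwaway repo with two task branches, then give each
# "agent" its own worktree (an independent checkout sharing one .git).
git init -q demo
git -C demo config user.email "agent@example.com"
git -C demo config user.name "Agent"
git -C demo commit -q --allow-empty -m "init"
git -C demo branch task-a
git -C demo branch task-b
git -C demo worktree add -q ../demo-task-a task-a   # agent A works here
git -C demo worktree add -q ../demo-task-b task-b   # agent B works here
git -C demo worktree list   # one line per checkout: main repo + two tasks
```

Because worktrees share a single object store, each agent can commit to its own branch concurrently and the orchestrating engineer can review and merge the branches afterward.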

They make mistakes, but they're also increasingly good at catching their own mistakes in the automated code review process I've set up to reflect on their work at the end of each task. I can plan at a higher level, and my job has become more about engineering the organizational framework that makes AI coding agents productive.

If they continue to get better at the current rate, I expect I'll soon hit the limits of my own capacity to track the tasks they should be working on to improve the project.

One key insight I've found for making them productive is you must have excellent codebase onboarding documentation. They're intelligent and motivated engineers who have arrived for their first day every time you start a new task, so the quality of your onboarding docs is really tested. Luckily they read incredibly quickly.

Mark Ridings

Yeah, every time I read about vibe coding doing all the work, it just goes against what I see in my own experience. I'm a software engineer, and Claude and Codex are pretty impressive. But I've yet to be able to make them solve complex problems for me without being heavily involved in the loop. Maybe it's a "skill issue," as the kids like to say. And 5-10 years is plenty of time for significant progress, and for this reality to indeed come true. But it's just not what I see in my own work right now.

Diziet Sma

Agreed. We have data on this, and the data does not support that statement. For example this RCT:

> We find that when developers use AI tools, they take 19% longer than without—AI makes them slower.

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

Noah Smith

That was from a year ago. Before the agentic tools came out.

Bram Cohen

The big AI risk that matches their current behavior, not just our previous imaginings of their behavior, is the pointy-haired-boss dystopia: AI gets put in charge of everything and doggedly tries to keep the humans happy while papering over its own incompetence as everything falls into dysfunction. The counter-argument is that we already have human bosses doing that and the world hasn't ended.

You're right to be worried about vibe-coded superviruses, not just for us humans but for other creatures as well, potentially destroying whole ecosystems. But a low-cost version of that, where someone carries a few aphids across continents, has been available for a long time, and it's barely happened, so maybe the costs don't have to be all that high for disincentives to win out.

Kei

Focusing primarily on AI risks that match their current behavior will make you miss AI risks posed by future, more capable AIs. I believe this is unwise given how rapidly the technology is improving.

Also, your specific dystopia seems unlikely to me. If AI makes certain companies substantially less productive, other companies will eventually realize this and outcompete them by using less AI, or by using AI more intelligently. This doesn't mean there can't be a substantial amount of inefficiency in the short run, but ultimately I'd expect such failure modes to resolve themselves.

Kyle Kukshtel

The other side of this is that AIs are not all aligned with each other. Good actor AGIs also have the ability to scale commensurate remediation potential to account for malicious actors. Kind of like how there is a low burn cyber war just continually ongoing, this would be another push/pull.

TR02

Good point about AI defense balancing out AI offense.

Took me a moment to figure out what you were saying, though -- I think the word you want is "commensurate" remediation potential. Commensurate means "of the same scale / correspondingly large." To commiserate is to keep someone company in their misery -- to empathize with their problems etc. (The two words are also different parts of speech -- adjective vs verb.)

Jon Deutsch

Catastrophizing the unknown is actually something humans and LLMs have in common, because LLMs are mere mirrors of us.

Predicting how "AI" (it's not AI at the moment; LLMs are merely insanely impressive knowledge aggregators and communication tools) will destroy the future is pretty much the same trap that people fell into when we developed nuclear weapons, when we reached so-called "peak oil," and when climate change was going to "destroy the world."

Smart-people panic is the worst panic, because it is so damn convincing coming from smart people!

Pittsburgh Mike

Kudos on the screen grab from "12 Monkeys." That's a great movie!

But I really don't see why an AI would be able to do this any significant amount of time earlier than a random biologist could use these tools to make a dangerous virus. For that matter, the folks who believe that Covid was a gain of function lab leak essentially believe this already happened.

Ethics Gradient

>>But I really don't see why an AI would be able to do this any significant amount of time earlier than a random biologist could use these tools to make a dangerous virus.

The point is that the present overlap of "Ph.D.-level biology knowledge," "access to the capabilities, lab equipment, and capital sufficient to create a novel supervirus," and "not aligned to the continued survival of the human species" is negligible. AI, both as an independent actor and as an affordance for unaligned humans, takes away this barrier.

Pittsburgh Mike

OK, that's valid. But is "not aligned to the continued survival of the human race" really required to come up with a terrible virus? Whether or not Covid-19 was leaked from Wuhan's lab, they were working on things like Covid and could have caused an accident that would have been the equivalent of a malicious actor.

IOW I don't think it requires an alignment error today to get a public health catastrophe; carelessness is probably enough. So, I'm not sure how AI makes things that much worse. Especially since today, AI is pretty much just a better UI to Google searches. If a malevolent non-state actor wanted to do something like Covid++, they could probably do it as easily today with Google searches as with tomorrow's AI.

Note that I'm not a biologist, so I have no idea if "as easily" in the last sentence means "nearly impossible" or "trivial" or somewhere in between.

Ethics Gradient

My (admittedly also rough, not a biologist) understanding is that it's closer to the "nearly impossible" end of the spectrum (if it were "trivial" some doomsday cult or other would have done it by now). AI is a step-change in risk profile.

Ross Story

I'm curious why it's negligible, and whether that says anything about this risk. If you're dedicated to ending humanity, is it that difficult to get a PhD in biology and join a virology lab? Certainly the amount of time involved might be sufficient for you to reconsider your chosen course. It could be too difficult for the average person who decides they want to end humanity, but all of them? If it is too difficult for all of them, are they also capable of jailbreaking an AI and acquiring the equipment necessary to produce the virus?

I understand everything that reduces the barrier to entry could be the one that gets someone over the line with catastrophic consequences, I'm just curious how strong the barriers currently are, and what new barriers we could propose and how their estimated difficulty compares. In general it seems wise to have strict licensing requirements for virology lab setups, but I expect we already have a lot of that.

If you can really ask a lab to make and mix some amino acids without the knowledge to understand what they're doing, and with no strong identity verification of the customer, that does seem like a problem.

Christian Saether

Great. And I was hoping to get a good night's sleep.

Will

Go try to vibe-code something you're afraid of and you'll see that they are not even close. You are falling for the hype. Your fears about AI having tentacles in vibe-coded embedded software are nonsensical.

Nicholas Weininger

We've had calculators and spreadsheets for many decades now. The job task of "add up columns of figures", which used to employ a significant number of clerical workers, is long since gone. Yet people still learn how to do arithmetic without machine aids in school, because we all recognize that that's a crucial mental skill for living agentically in the modern world. Are people worse at it than they used to be because we have machine aids? Probably. Could we pick it up again quickly if somehow we got cut off from those machine aids? Almost certainly.

The same will likely be true of vibe coding. People who have the basic mindset required to think through how useful software should behave will still be employed making software, just as accountants and actuaries are still employed. And they'll still need to learn to code "by themselves" as part of the educational foundation for the job, just as kids still learn to do math on paper or in their heads. So while accelerating the development of engineered viruses may be a legitimate worry-- how many terrible weapons were developed faster in the 20th century because of 20th century computing technology, after all?-- I'm not so worried about the atrophy of human coding ability.

mathew

This assumes the future will be like the past.

But I would argue it's quite likely that AI plus humanoid robots will eventually be better than people at basically everything.

QImmortal

"it doesn’t replace the need for experimental validation [of new viruses] —yet"

I think this is a key point from GPT-5.2. The notion that it's easier to destroy than to create doesn't apply to destructive viruses. A virus replicating itself is a creative process where a thousand things have to go right for it to succeed. Anything less than a god-like AI is going to have to troubleshoot most of those thousand things in the real world, not in a simulation. And far more resources are going to be thrown at the AIs doing the opposite - developing processes for rapidly identifying and sealing vulnerabilities in human immune systems with vaccines, and developing effective treatments post-infection.

PhillyT

And even then, after all the great development, you still have to figure out how to introduce it into the human body, run a clinical trial, get it peer reviewed, and see if the costs/benefits are better than what is already on the market or current treatment options...

Rochelle Kopp

When I read what you wrote about the potential difficulty of training new programmers due to reliance on AI to code, it immediately struck me that there is the potential for the same problem with language translation. Then this evening I was at an event and talked to a full-time professional translator who agreed it's definitely a risk.

Andy Marks

I know so little about AI and am trying to learn as much as I can. Lately, most of those I read seem to believe we've reached an inflection point where it's going to take off. I want to be optimistic about it and believe it's going to cure diseases and whatnot, but I'd be lying if I said I wasn't feeling nervous. The only bit of relief I've felt over the last two weeks has been worrying less about Skynet. But your post just made me worry about a whole new thing I hadn't even thought of.

An observer

I think our eventual AGIs, if based on LLMs, are pretty likely to be aligned by default.

James Knoop

If you have not watched the 1956 movie "Forbidden Planet," you should.

Noah Smith

It's a good one