35 Comments
Bram Cohen's avatar

The big AI risk which matches their current behavior, not just our previous imaginings of their behavior, is the pointy-haired-boss dystopia: AI gets put in charge of everything and doggedly tries to keep the humans happy while papering over its own incompetence as everything falls into dysfunction. The counter-argument is that we already have human bosses doing that and the world hasn't ended.

You're right to be worried about vibe-coded superviruses, not just for us humans but for other creatures as well, potentially destroying whole ecosystems. But a low-cost version of that, where someone carries a few aphids across continents, has been available for a long time and it's barely happened, so maybe the costs don't have to be all that high for disincentives to win out.

Kei's avatar

Focusing primarily on AI risks that match their current behavior will make you miss AI risks posed by future, more capable AIs. I believe this is unwise given how rapidly the technology is improving.

Also, your specific dystopia seems unlikely to me. If AI makes certain companies substantially less productive, other companies will eventually realize this and outcompete them by using less AI, or by using AI more intelligently. This doesn't mean there can't be a substantial amount of inefficiency in the short run, but ultimately I'd expect such failure modes to resolve themselves.

William Gadea's avatar

The AIs are something to worry about, but I would cast an eye on the humans running the AIs, especially in authoritarian regimes. Once robots can perform all the physical things humans can do, and AGI can compute all the mental things humans can do, then for the first time in history humans pass from being a profit center to a cost center. Fertility control (or worse) might become the elite’s chosen optimization.

Kyle Kukshtel's avatar

The other side of this is that AIs are not all aligned with each other. Good-actor AGIs also have the ability to scale commensurate remediation potential to account for malicious actors. Kind of like how there is a low-burn cyber war continually ongoing, this would be another push/pull.

Christian Saether's avatar

Great. And I was hoping to get a good night's sleep.

Pittsburgh Mike's avatar

Kudos on the screen grab from "12 Monkeys." That's a great movie!

But I really don't see why an AI would be able to do this any significant amount of time earlier than a random biologist could use these tools to make a dangerous virus. For that matter, the folks who believe that Covid was a gain of function lab leak essentially believe this already happened.

Andy Marks's avatar

I know so little about AI and am trying to learn as much as I can. Lately, most of those I read seem to believe we've reached an inflection point where it's going to take off. I want to be optimistic about it and believe it's going to cure diseases and whatnot, but I'd be lying if I said I wasn't feeling nervous. The only bit of relief I've felt over the last two weeks has been worrying less about Skynet. But your post just made me worry about a whole new thing I hadn't even thought of.

An observer's avatar

I think our eventual AGIs, if based on LLMs, are pretty likely to be aligned by default.

J. J. Ramsey's avatar

"Now that vibe-coding is many times as productive as human coding"

Is it? If I look, for example, at the Stack Overflow blog, it looks like vibe-coding is good at making an app that kinda seems to work, but it is still prone to bugs and security holes: https://stackoverflow.blog/2026/01/02/a-new-worst-coder-has-entered-the-chat-vibe-coding-without-code-knowledge/

Is that out of date? Possibly, but from what I've seen, claims that current vibe-coding is more advanced than that seem to come from non-technical people.

Don's avatar

Agreed. Whenever I talk to people who work in tech they seem to say something like "Yes, AI can generate much of the code, but there are still numerous rounds of edits, revisions, etc. before it works at scale. It's far from set it and forget it."

Granted, these are people that are working on complex, large scale products and deployments where uptime and scalability are critical. It seems like much of the hype around Claude Code comes from people who previously couldn't make a "hello world" webpage.

The last 20% can take 80% of the time, even with AI.

Liquid language model's avatar

I still find this overheated and hysterical, with too many unexamined assumptions to name. I suggest you dive into the research more deeply and swear off the promotional rhetoric that is coming from many quarters at the moment.

This article has some good avenues for investigation. https://dlants.me/agi-not-imminent.html

Jon Deutsch's avatar

Catastrophizing the unknown is actually something humans and LLMs have in common - because LLMs are mere mirrors of us.

How "AI" (it's not AI at the moment - LLMs are merely insanely impressive knowledge aggregators and communication tools) will destroy the future is pretty much exactly the same trap that people fell into when we developed nuclear weapons...and when we reached so-called "peak oil" and when climate change would "destroy the world."

Smart-people panic is the worst kind of panic because it is so damn convincing when it comes from smart people!

Nicholas Weininger's avatar

We've had calculators and spreadsheets for many decades now. The job task of "add up columns of figures", which used to employ a significant number of clerical workers, is long since gone. Yet people still learn how to do arithmetic without machine aids in school, because we all recognize that that's a crucial mental skill for living agentically in the modern world. Are people worse at it than they used to be because we have machine aids? Probably. Could we pick it up again quickly if somehow we got cut off from those machine aids? Almost certainly.

The same will likely be true of vibe coding. People who have the basic mindset required to think through how useful software should behave will still be employed making software, just as accountants and actuaries are still employed. And they'll still need to learn to code "by themselves" as part of the educational foundation for the job, just as kids still learn to do math on paper or in their heads. So while accelerating the development of engineered viruses may be a legitimate worry-- how many terrible weapons were developed faster in the 20th century because of 20th century computing technology, after all?-- I'm not so worried about the atrophy of human coding ability.

mathew's avatar

This assumes the future will be like the past.

But I would argue it's quite likely that AI plus humanoid robots will eventually be better than people at basically everything.

Will's avatar

Go try to vibe code something you're afraid of and you'll see that they are not even close. You are falling for the hype. Your fears about AI having tentacles into vibe-coded embedded software are nonsensical.

Craig Gordon's avatar

You're not thinking outside the box...the key to this whole issue is energy and where we get it, as energy plus intelligence leads to that species ruling the world. Elon Musk has it right whether you like his politics or not...the creation of energy has to come from outside this world, and until AI can do that without us there is slim chance it will want to exterminate us, even if the right AI combines with a bad human, religion, or country. Would love to hear your thoughts on AI and the creation of energy...the buzz is going to be completely different soon. I am still blown away that one man's company has more satellites out in space than any country...does anyone think about the implications of that?

Mark's avatar

So Bad Bunny was the problem all along!

Kei's avatar

As one of the people who was curious about what made you update, and also as one of the people who criticized some of your posts on AI back in 2023, I appreciate you writing this post.

Besides AI killing everyone, another big threat model I think is worth defending against is the risk of long-term dictatorships allowed by AI. Even if advanced AI just wants to follow the instructions of its developers/users, its deployment will likely result in an unprecedented centralization of power. Once we have AI that is more capable than humans in all relevant domains, people will likely put it in charge of a large fraction of the world's economic output, as it will be more profitable, and will diffuse it widely throughout their militaries in order to keep up with/counter foreign adversaries. Once the technology is there, this may include giving it control of large scale drone swarms and robot armies. AI could also make surveillance and propaganda substantially easier and more effective.

As a result, any actor who is in control of the development of such an advanced and widely-deployed AI, like an AI company CEO, could potentially leverage this control in order to seize power. I think it's important that people work on building appropriate defensive technology to protect against this threat.

Lee Drake's avatar

I remain unconvinced that advances in AI intelligence means it is getting close to anything resembling consciousness. Does the AI regret the bad advice it gave you about your code? Does it realize a mistake and correct it without you asking it for more help? Put more simply - what is it after the prompt?

When teaching the physics of soft X-rays, I’d often lean hard on a color analogy. I’d hold up a stuffy, or pick someone with a particularly bright shirt. I’d ask everyone to watch the color for a moment and report if it changed color, or if they could describe a standard deviation for it. Then I’d ask them to imagine the room pitch black - was it still a red stuffy/shirt? The answer of course is that there is no color without light. The Japanese word is aka. English is red, Spanish rojo. If you’re a wavelength person it’s 700 nm. If particles are your thing, 1.7 eV. But without light there is no red energy/wave, just darkness. It can’t “color” without light. And the trillions of ways in which life on earth exists because of light - from photosynthesis to the Mona Lisa - are just derivatives of the light.
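(As a side note, those two numbers do line up; here's a quick check using the standard photon-energy relation E = hc/λ, written out in Python purely for illustration - nothing in it comes from the post itself:)

```python
# Quick check that 700 nm and ~1.7 eV describe the same red light.
# Standard physical constants; only the wavelength is taken from the analogy above.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

wavelength_m = 700e-9
energy_eV = h * c / wavelength_m / eV
print(f"700 nm ~= {energy_eV:.2f} eV")  # ~1.77 eV, i.e. roughly the 1.7 eV quoted
```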

So what is an LLM after the prompt? Its weights sit statically like the thesis_final_final_v6.doc on the desktop. Its weights didn’t change after you asked it a question. The trainers may be able to train it later on its experiences - but in a controlled way. Another way to think of it is: if you ask it the same question but change the words, is it the same “AI” responding? Is there a Claude, or millions of different words “clauding” on shelves in data centers?
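(If you want to see the "weights don't change after the prompt" point concretely, here's a rough sketch - the model name and prompt are arbitrary placeholders, and any small open causal LM would make the same point:)

```python
# Rough illustration: querying a model does not update its weights.
# "gpt2" and the prompt are placeholder choices for the sake of the example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

before = [p.detach().clone() for p in model.parameters()]  # snapshot the weights

inputs = tok("What is red without light?", return_tensors="pt")
with torch.no_grad():  # inference only: no gradients, no parameter updates
    model.generate(**inputs, max_new_tokens=20)

unchanged = all(torch.equal(a, b) for a, b in zip(before, model.parameters()))
print("weights unchanged after the prompt:", unchanged)  # True
```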

Until the GPU fires up after being unable to go to sleep, we’re probably ok. It’s scary to think of our agricultural system running off vibe code. But scarier for Claude to need megawatts of energy on demand. What we should be worried about is ourselves. What if we become dependent on AI, and then the AI companies all go broke in a financial crisis? What if youth unemployment skyrockets because young people don’t land junior positions, and we run out of people who know how to do things? Hell, what happens if immigrants realize they are hated, and stop trying to come to the US? What if the climate change bill comes due early?

Noah Smith's avatar

It's not consciousness I'm concerned about