Discussion about this post

Jeff Rigsby

So it sounds as if we should start to worry less about technological stagnation and more about whether progress in AI is happening too quickly to keep it safe.

I was surprised to see that when Matt Yglesias writes about AI risk, he gets very strong pushback from a bloc of his paid subscribers who think the risk doesn't exist and that it's all Luddite nonsense.

It was hard to tell how many of those people were conservatives predisposed to this view by all the propaganda about global warming being a hoax, and how many were just going a little too far with the technophile "abundance agenda" framework.

Either way it seems clearly wrong. People don't seem to appreciate that even if there's some theoretical argument for why there can't be existential AI risk, you can accept that argument with 99 percent confidence and still think AI risk reduction is a very, very high priority, because the remaining 1 percent still carries the full weight of the downside. Why is there so much resistance to this idea?
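To make the arithmetic behind that point concrete, here is a minimal sketch in Python. Every number in it is a made-up placeholder chosen for illustration (none of them come from the comment or the post); the only point is that a small residual probability on a catastrophic outcome can dominate the expected-cost comparison.

```python
# Illustrative expected-cost arithmetic. All values below are assumed
# placeholders, not data from the discussion: the point is the shape of
# the comparison, not the specific numbers.
p_argument_wrong = 0.01        # assumed: 1% chance the "no existential risk" argument fails
cost_of_catastrophe = 1e9      # assumed: cost of the bad outcome, in arbitrary units
cost_of_mitigation = 1e3       # assumed: cost of treating risk reduction as a priority

expected_cost_if_ignored = p_argument_wrong * cost_of_catastrophe

print(expected_cost_if_ignored)                       # 10000000.0
print(expected_cost_if_ignored > cost_of_mitigation)  # True: the 1% tail dominates
```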

Liam

If you're looking for AI slowdown risks, the imminent end of Moore's Law is a big one. (Don't believe it's happening? Nvidia's CEO thinks it is: https://www.marketwatch.com/story/moores-laws-dead-nvidia-ceo-jensen-says-in-justifying-gaming-card-price-hike-11663798618).

We've gotten a lot of the recent AI progress from an exponential increase in the amount of computation used to train the models, one that far outpaces the rate at which hardware improves. That can't be sustained, and it stops being sustainable even sooner if hardware isn't getting better at an exponential rate. There are ways around it -- specialized (neuromorphic, sparse, etc.) hardware, algorithmic breakthroughs, quantum computing -- but if those don't pan out or take too long, it's reasonable to expect the current wave of progress to slow down.
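A back-of-the-envelope sketch of that squeeze, in Python. The doubling times and dollar figures below are rough illustrative assumptions, not numbers from the comment: if demand for training compute doubles much faster than hardware price-performance, the cost of a frontier training run grows exponentially.

```python
# Rough sketch of the compute-scaling squeeze. All rates and costs are
# illustrative assumptions, not measured figures.
compute_doubling_years = 0.5    # assumed: training-compute demand doubles every ~6 months
hardware_doubling_years = 2.0   # assumed: hardware price-performance doubles every ~2 years
initial_cost = 10e6             # assumed: $10M for a frontier training run today

for years in (2, 4, 6, 8):
    compute_growth = 2 ** (years / compute_doubling_years)
    hardware_growth = 2 ** (years / hardware_doubling_years)
    cost = initial_cost * compute_growth / hardware_growth
    print(f"after {years} years: ~${cost / 1e6:,.0f}M per training run")

# Under these assumptions the cost multiplies by ~8x every two years, and it grows
# even faster if hardware improvement slows below its assumed exponential rate.
```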

Of course, there's enough innovation here already to disrupt quite a few industries even if progress stopped tomorrow.

