I thought I’d take a break from writing about weighty matters like trade wars and presidential elections, and talk about something more whimsical and lighthearted — how to keep quasi-sentient computers from taking all our jobs and then hunting us to extinction with swarms of autonomous drones!
Well, OK, I don’t think we’re going to be facing off with Skynet anytime soon. But generative AI is undeniably a very powerful new technology, and the list of powerful new technologies that the human race hasn’t used for destructive purposes is very short indeed. So it probably makes sense to start thinking about how to use regulation to decrease the likelihood that AI will be used to cause catastrophes.
That’s a lot easier said than done, however. It’s very hard to predict what kind of regulation would make a technology safer before the harms materialize, and it’s very easy to create regulation that slows down technological progress. So a priori, a fairly likely outcome of AI regulation is that AI progress slows down, but AI still ends up causing harm in ways that the regulators never anticipated.
Realizing this basic difficulty, the Biden administration has wisely taken a light touch — at least in the U.S. There was some speculation that Biden’s executive order on AI last October would focus on limiting AI capabilities. But instead, the order’s main protection against existential risk is simply a mandate for safety testing on foundational models (like the ones behind ChatGPT and Gemini). It also has provisions to protect against the non-existential risks of AI — job displacement, deepfakes, erosion of privacy, and so on.
Other attempts currently in the pipeline would go farther, however. California legislators are proposing a bunch of state-level AI safety bills. One of these is State Senator Scott Wiener’s SB 1047, which would impose a whole bunch of safety requirements on companies building foundational models. If the models are found to cause major havoc, the bill would hold companies liable for some portion of the costs.
I am no expert on AI technology, but I wonder if regulations like this can actually be implemented as written. SB 1047 demands that AI companies know all sorts of things that their model can and can’t do before the model is even trained. As far as I know, that’s not possible; you don’t really know what a model can do before it’s trained. In fact, even after a model like ChatGPT is trained, it seems impossible to really know what dangerous stuff it could be used to do. Does anyone think we know, right now, whether GPT-4 is actually capable of causing catastrophic harm in the hands of the right villain? I don’t think so. And GPT-4 has been out for over a year now; when we’re talking about brand-new models, we’ll have even less knowledge to go on.
So I think bills like SB 1047 that require AI companies to make all sorts of safety claims about their models will mostly result in the companies making B.S. claims of total and complete safety, which no one will be able to verify one way or the other.
I don’t think regulations like these are going to slow down AI innovation much, but I also don’t think they’re going to do a lot to limit real risks from AI — mostly because AI companies are already trying hard to limit those risks on their own. In fact, I’m not sure if anything can eliminate those risks, since generative AI is kind of a black box, and since it’s very hard to know what it’s capable of. But I do have a few ideas about how to go about regulating AI effectively, in ways that wouldn’t spam companies with red tape.
Reserve resources for human use
Later, I’ll talk about some ideas for mitigating existential risk from AI — the “Skynet” scenario and things like that. But first I want to talk about what kind of regulation might limit economic risk. A whole lot of people are scared of AI displacing them from their jobs, and many worry that our species could even become economically obsolete, the way horses did. But I think there is one simple regulation that could prevent this outcome.