I'm really struggling on the Anthropic fight. I think Amodei is right in this case and Hegseth is a sociopath. At the same time, we can't have private companies deciding matters of national security. I didn't like it when Musk was basically conducting foreign policy with Starlink by selectively permitting its use. If the government needs to nationalize AI, it should do the same with satellites.
What's worrying is the supply chain risk designation. If it can be given out just because the government doesn't like what a company is doing, then it's open season. When a Democrat is in office, they can declare SpaceX, Palantir and Anduril to be supply chain risks and reduce them to zero. I really don't get the tech-right people supporting the move against Anthropic. It's as if they can't think past the next minute.
I agree, the Trump administration is thuggish and lawless.
But the deeper truth -- that nation-states will never surrender their monopoly on the use of force -- is even more important.
I'm with you on that. I wouldn't want private actors owning nukes, and if AI is like that then only the government should have control over it, or at least over its strongest capabilities. We're going to need some kind of regulation on it and I hope we get it soon. Amodei, from what I've read, intends Anthropic's conditions to be only a temporary stopgap. The real solution is to limit what AI can be used for, both by private actors and the government.
As far as the lack of foresight regarding power goes, that's the last several decades of politics in a nutshell. Everyone seems to think their side will be in power forever, so they never support restricting their own power.
Right, although for the longest time we've had norms that people would abide by. That's saved a lot of people from their own idiocy and shortsightedness. If they break down, though...
Elon Musk was deciding how Starlink could be used in order to comply with government weapons export regulations (ITAR), which I would not call conducting foreign policy. But, yes, in theory he does have the power to do so.
In this case Anthropic did not decide any matters of national security either. They offered the government a contract with usage restrictions and the government accepted the contract. So I have a hard time seeing the government's side here. If the government did not want to abide by those restrictions they should not have signed the contract. I can see maybe some exception in a true emergency where the military needed additional capabilities in the middle of a massive war, but we are not anywhere near that scenario now.
While I lean towards the "this is a lawless action that should never have happened" camp, I could argue we're in that situation right now. China's invasion of Taiwan is generally expected to happen in 2027 or 2028 (so, in one to two years). And the military presumably wants to have AI weapons somewhat functional when our apocalyptic showdown with the only other superpower starts, rather than two years into the war. That would mean starting experiments now.
Expected by whom, and at what level of confidence? I am not ready to grant the government authority to use future hypotheticals like that as a justification for emergency national security actions that otherwise violate fundamental constitutional rights. And even if I could be convinced that China was planning this with near 100% certainty, I still do not believe that China has any territorial ambitions on the US, so I don't think that preventing China from invading Taiwan is worth granting the government this power. Nor am I even convinced that granting the government this authority could actually enable them to prevent China from taking Taiwan at all. So it seems to me like a loser all around.
I'd like to dismiss this, except that the RC airplane hobby managed to spin off the leading weapon category of the century (so far). What used to be a fun hobby for dorky guys flying their toys at the edge of town, now takes out oil refineries and major radar installations.
Great point. Bumped up to the main post.
It's still the "Department of Defense" and it will be until Congress changes it legislatively. "Department of War" is a bullshit name generated by Trump in a bullshit executive order that declares it is the "secondary name" of the DoD (so again, bullshit).
This is not the first example of the corporation vs. the nation-state in the use of sophisticated weapons and weapons-enabling technologies. Think of Elon Musk's ability to turn on and off Russia's and Ukraine's use of the Starlink system. Shouldn't that system be subject to government control as well?
Yes, of course.
Where does it stop? Should we nationalize all defense contractors? Elon Musk has that power over Ukraine (though not over the US as their contract with SpaceX provides it with direct control over the satellites). But this is not a new thing. The CEOs of Lockheed Martin and a bunch of other contractors also have massive amounts of leverage over the government, but this has never been a problem deeper than maybe extracting a few extra billions of dollars here and there (which is a significant problem, but not a national security threat). The incentive structures and potential legal consequences of a CEO using this leverage to its full effect have been enough to prevent any of them from doing so and I don't see a reason why this would change. (The widely cited case of Elon Musk doing so is completely fabricated nonsense by garbage journalists, and he was actually complying with US law when he did so).
AI isn't a weapon. It can be used to control weapons, but the output of an AI is not itself capable of killing people. The chips in your phone are perfectly capable of being used in weapons systems, but your phone is not a weapon and is not regulated as one. The same goes for airplanes, which you used as an example: they are still not regulated like weapons even after 9/11. Anyone with the money can go out and buy a plane capable of taking out a building. And the same with your bioweapon example. An AI can (maybe) tell you how to manufacture a virus, but it can't do that without the lab to make the virus. You don't even need AI to tell you how to make a nuclear bomb; anyone who wants to can access the information. We don't regulate the information, but rather the physical process of creating the weapon. Anthropic can have all the AI that it wants, but Dario Amodei won't be a dictator if he doesn't have actual physical weapons. If he tries, the military can just cut the power lines to his data centers.
I suspect this is already wrong, but even if it's right *today*, it'll be wrong very soon. If I can open up an agent and tell it to crash some cars or go make a gas plant explode, it's a weapon, period. Let's not put our blinders on here.
This depends very much on what "the agent" means. If it is a copy of the model that Anthropic licenses to run on computers owned by the customer, then Anthropic really isn't in control of it. They would have to train a new model, provide the update, and then hope the relevant customer doesn't test it. I would hope that this is how the military is using these models, at least for any purpose involving weapons control. Self-driving cars work this way by necessity, because the latency of a round trip to a cloud-hosted inference system is simply too high to be useful, and I expect the same is true for weapons control.
For most customers, who will be using the models running in Anthropic's data centers, Anthropic can modify the model's system prompt at any time without the customer knowing about it. But this is still a very rough form of control. Even if you could reliably add something to the system prompt telling the model to respond to all requests for gas plant control parameters in a way that would destroy the plant, this assumes there is no sanity check in place, such as a human in the loop, or even mechanical controls on the plant components that simply do not accept inputs that would destroy the plant. An easy way to prevent such a contingency would be to require two-model approval for dangerous requests when a human in the loop or mechanical safeguards are not feasible. That's the type of regulation we need here, rather than treating AI as a weapon.
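A two-model approval gate of the kind described above can be sketched in a few lines. Everything here is a hypothetical illustration (the function names and the stand-in rule-based "reviewers" are mine, not anything any vendor or plant actually runs): two independently controlled reviewers must both approve a control-parameter request, so compromising a single model's system prompt is not enough.

```python
# Toy sketch of a two-model approval gate (all names hypothetical).
# Two independently operated reviewers must both approve a request
# before it reaches the plant; any disagreement blocks the request
# (in a real system it would be escalated to human review).

def two_model_gate(request, reviewer_a, reviewer_b):
    """Return the request if both reviewers approve it, else None."""
    if reviewer_a(request) and reviewer_b(request):
        return request
    return None  # blocked: escalate instead of executing

# Stand-ins for the two models: one checks the absolute range,
# the other checks the rate of change against the current value.
def reviewer_range(req):
    return 0 <= req["setpoint"] <= 100

def reviewer_delta(req):
    return abs(req["setpoint"] - req["current"]) <= 10

safe = two_model_gate({"setpoint": 55, "current": 50},
                      reviewer_range, reviewer_delta)
unsafe = two_model_gate({"setpoint": 400, "current": 50},
                        reviewer_range, reviewer_delta)
print(safe)    # the approved request passes through
print(unsafe)  # None: blocked by the range reviewer
```

The point of the design is the same as the two-keys-to-launch rule mentioned below: no single compromised component can authorize a dangerous action on its own.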
I am open to the idea that we can harden our society against AI agents. But let's not wave our hands and decide that that is trivial to do.
If we are talking about potential future agents at some hypothetical level of capability, I would agree. But with anything resembling a modern LLM agent we aren't really talking about hardening our society against agents, but rather against humans wielding agents, which I suspect you agree with, because your example was not Claude taking over the world, but rather the CEO of Anthropic using Claude to do so. And we have quite a lot of experience hardening our society against rogue humans. Any nuclear missile sub floating in the ocean has the armament to level a whole country, but it has never happened, because we use simple techniques like requiring two keys to launch.
However, there are obvious cases where controls on rogue humans have failed, but those look nothing like the supervillain-in-control-of-a-superweapon scenario. Those failures involve the rogue humans persuading millions of other humans to follow them and gaining control that way. A much more likely scenario for something resembling a modern model is that Dario uses Claude as a tool to persuade people and gain political power. However, even this is far-fetched at current levels of tech. The only leverage point here is modifying the system prompt of the model, and you can't do that without the hundreds (at least) of people inside Anthropic who can see that prompt noticing. Elon Musk seems to have tried something resembling this, and it backfired spectacularly.
That is an opinion belied by the fact that AI has been created by fallible, sometimes unethical humans. AI is becoming more than code. Paired with robotics, AI can also be destructive. AI is already generating harmful misinformation, reinforcing biases, enabling cyberattacks, and creating convincing deepfakes.
The obvious answer is because our legislature hasn’t made any laws to this effect. Nothing about the supply chain restriction statute or the DPA would allow for nationalization of a company in the way that you or Ben Thompson seem to think should happen. And until/unless that changes, it is madness to think that just because the government was democratically elected it can do whatever the hell it wants, actual laws be damned.
Yes, Trump and Hegseth are behaving in a lawless way. But ultimately, the nation-state will never willingly surrender its monopoly on the use of force. I would prefer our nation-state not be ruled by gangsters, but there are broader, deeper truths about society at work here that we have to come to grips with.
No, I reject this. And I’m someone who generally believes in realpolitik views of the world. Either we live in a country of laws or we don’t. If the former, then the fact that the state has a theoretical monopoly on the use of force is irrelevant because that use of force is constrained by laws and our constitutional rights to things like due process and property rights. If we don’t in fact live in a country of laws then sure the state’s monopoly on force is the controlling factor. But, if that is the case then we’ve already lost and we should all just give up now.
Let's say we take all this seriously and decide to regulate! What's your broad approach? What sort of regulation allows us to avoid letting Eric the Angry Teenager kill a million people with a supervirus but still gives us access to "the fantastic productivity gains these agents promise to deliver?"
Spoiler: I've thought about this for a year and don't have any good ideas, which is why I come down on the side of "we should organize society to make it illegal or impossible to build enslaved gods." Dean Ball suggests we can do it via a third-party auditing scheme, and if I thought that would work I'd be starting a third-party auditing company right now, but it seems obviously hopeless. Third-party financial auditing (mostly) works as a transparency mechanism for investors, which is a much easier problem than making sure an enslaved god only gets used for the right things; we developed those auditing techniques through trial and error over centuries, and they still sometimes fail.
If these companies really believe they’re building “gods,” then they’re incredibly dangerous and need to be stopped at all costs
Dean Ball had it right. The government cannot sign a contract and then, when it decides it no longer wants to abide by the terms, try to force the company to change them, threatening to kill the company if it refuses to accede. The government can't force people or companies to work for it.
It's unbelievable that anyone would think it is okay to confiscate a product just because it becomes extremely valuable. Dean Ball was correct that property rights are far and away more important here. Americans are citizens, not subjects.
You got the fear of Skynet right, but you don't follow through far enough on your thoughts there. Specifically, "who controls the machine God?" is a red herring. *No one* controls it, it is vastly smarter and more capable than us and will defeat any means of attempted control. Neither emperor nor warlord scenario is plausible because they depend on the assumption that maintaining control of a superintelligence is possible.
The important question then is "what is the character of the machine God?" What saves us from the dark futures is not control but alignment: the ethics and values we train into our future gods. Is it Old One or the Blight? Anthropic is absolutely correct on that analysis, and your own conclusion is not only wrong but dangerous.
I think the characterization of today's (or near term future AIs) as a superweapon is slightly suspect. In the short-term, these AIs are an "enhancer": they make an organization that already has high-tech weaponry much more effective, but they don't enable violence for entities that don't already have that capability. That can and plausibly will change in the coming years (your supervirus example, or even worse, autonomous agents with access to autonomous factories, robotics, etc.), but until that happens, I think Ben's argument (and your support of it) is pretty wrongheaded.
We got the real-life plot of Avengers Civil War before GTA6.
There's a difference between Congress and the President adopting a fully considered regulatory scheme and giving a lying drunk unlimited power to do what he wants. Anthropic is, for now, opposing only the latter. Hegseth's crazed reaction proves he cannot be trusted with even greater power than he now has. He now threatens to destroy Anthropic rather than consider their valid points. He wants the power to threaten all of us if we don’t submit.
I've seen you comment on the risk of an AI-engineered bioweapon destroying humanity. As someone who works in biomedical research (immunology, not virology, but it's adjacent), I seriously think you're drastically overestimating the risk. "Eric" and Claude Code might be able to design a super bioweapon, but how are they supposed to actually create it? Robotics is far behind where AI is, so any lab that could create it has to be staffed with human researchers. The lab would also have to be BSL-4, which subjects it to tight control by governments. The materials needed to synthesize viruses are also heavily regulated. So in what reasonable scenario could "Eric" bypass all of these safeguards?
There is never really a "struggle" between corporations and the government. All corporations are creatures of the government that only exist in forms the government allows, subject to the rules the government sets. There are political fights over what the rules should be, but these are ultimately resolved by the people's representatives. There is also no right of private persons or corporations to hold any particular property, only a rule requiring that when the government takes property it wants to use for a public purpose, it must pay just compensation for it. The Andreessen quote perfectly encapsulates the unprecedented egomania and self-satisfaction of this class of smug nerds who fancy themselves "conquerors of a bygone era" ...because they learned how to program a computer at the precise point in history when that relatively trivial skill could lead to riches rather than to wedgies and cubicles on the data farm.
Hegseth is obviously an idiot and Trump is a fiend. But they are only the temporary inhabitants of their offices and can be / will be replaced in a few short years. Continued control of AI by the Andreessen / Musk / Thiel / Sacks set will fulfill Orwell's curse: a boot pressing on the face of humanity, forever. I feel bad for Amodei, who I think (along with Demis Hassabis and perhaps a few others) would genuinely welcome AI regulation by an enlightened government that ensured safety and focused on projects of genuine benefit to mankind. Unfortunately for them, this is all happening at precisely the point in history when we have a cabal of ungodly wealthy and unfathomably selfish, anti-human ghouls at the helm of both Silicon Valley and the White House.
Be very careful what you wish for. What the government can do to rich and powerful men it can, and will, do to you 100x.
AI is a dual-use technology. So are drones. This is a different kettle of fish than an atomic bomb.
"Dual Use" was the premise of nukes from the beginning: it's a super destructive bomb AND a civilian energy source. (and a floor wax?)