128 Comments
Andy Marks

I'm really struggling on the Anthropic fight. I think Amodei is right in this case and Hegseth is a sociopath. At the same time, we can't have private companies deciding matters of national security. I didn't like it when Musk was basically conducting foreign policy with Starlink by selectively permitting it to be used. If the government needs to nationalize AI, it should do the same with satellites.

What's worrying is the supply chain risk designation. If it can be given out just because the government doesn't like what a company is doing, then it's open season. When a Democrat is in office, they can declare SpaceX, Palantir, and Anduril to be supply chain risks and reduce them to zero. I really don't get the tech-right people supporting the move against Anthropic. It's as if they can't think past the next minute.

Noah Smith

I agree, the Trump administration is thuggish and lawless.

But the deeper truth -- that nation-states will never surrender their monopoly on the use of force -- is even more important.

Matthew

That is a total non sequitur.

The government has the monopoly on the use of force. True.

Let's say someone makes the gun on their vintage Sherman tank functional.

The government goes to his house, shoots him in the head, and murders his kids.

You can say that is really wrong without disagreeing about the principle of the government monopoly of force.

As you said in the piece, the DoD should have cancelled and recontracted.

Andy Marks

I'm with you on that. I wouldn't want private actors owning nukes, and if AI is like that, then only the government should have control over it, or at least over its strongest capabilities. We're going to need some kind of regulation on it, and I hope we get it soon. Amodei, from what I've read, intends Anthropic's conditions only as a temporary stopgap. The real solution is to limit what AI can be used for, both by private actors and the government.

PhillyT

I 100% agree. The current trajectory makes me wonder if our future will be less Star Trek and more Dune lol where "thinking machines" are strictly banned following a rebellion against AI-driven tyranny. And humans don't really rely on computers unless necessary anymore. Does life imitate art or art imitate life smh.

Jürgen Boß

In a world of 200 nation-states, no nation-state has a monopoly on the use of force. Not where AI is concerned.

Anthropic has positioned itself as the safest, most ethical choice for the smartest researchers, currently. Globally. (And I wouldn't trust Sam Altman, for example, with my underwear.)

If the US government fights them, either Anthropic will go elsewhere or the smart brains powering Anthropic will go elsewhere.

You could reasonably nationalize Anthropic, but you cannot nationalize a crowd of researchers. In a world of globalized finance, you cannot even cut off their funding. It does not work.

QImmortal

Nation-states have never been omnipotent and have always had all sorts of reality-imposed limitations on their "monopoly" on the use of force. On top of that, there have been countless times when nation-states have voluntarily submitted to limits on their powers, whether something like the Bill of Rights, power-sharing agreements with autonomous regions, or granting independence to a colony. The fact that nation-states tolerate other nation-states is also proof that they can tolerate limits.

omelassian

Are you just making a descriptive statement, or are you also saying that nation-states should not surrender their monopoly on the use of force? Because I disagree with the latter in the case of the Trump administration.

Fallingknife

Elon Musk was deciding how Starlink could be used in order to comply with government weapons export regulations (ITAR), which I would not call conducting foreign policy. But, yes, in theory he does have the power to do so.

In this case Anthropic did not decide any matters of national security either. They offered the government a contract with usage restrictions and the government accepted the contract. So I have a hard time seeing the government's side here. If the government did not want to abide by those restrictions they should not have signed the contract. I can see maybe some exception in a true emergency where the military needed additional capabilities in the middle of a massive war, but we are not anywhere near that scenario now.

Austin Fournier

While I lean towards the "this is a lawless action that should never have happened" camp, I could argue we're in that situation right now. In general, China's invasion of Taiwan is expected to happen in 2027 or 2028 (so, in one year). And the military presumably wants to have AI weapons somewhat functional when our apocalyptic showdown with the only other superpower starts, rather than 2 years into the war. That would mean starting experiments now.

Fallingknife

Expected by who and at what level of confidence? I am not ready to grant the government authority to use future hypotheticals like that as a justification for emergency national security actions that otherwise violate fundamental constitutional rights. And even if I could be convinced that China was planning this with near 100% certainty, I still do not believe that China has any territorial ambitions on the US so I don't think that preventing China from invading Taiwan is worth granting the government this power. Nor am I even convinced that granting the government this authority could actually enable them to prevent China from taking Taiwan at all. So it seems to me like a loser all around.

Jeremy R Cole

I actually don't disagree, but that means the Pentagon should make a contract for AI-based weapons and work with companies that want such a contract. It actually doesn't even preclude working with Anthropic on other topics.

FreneticFauna

As far as the lack of foresight regarding power goes, that's the last several decades of politics in a nutshell. Everyone seems to think their side will be in power forever, so they never support restricting their own power.

Andy Marks

Right, although for the longest time we've had norms that people would abide by. That's saved a lot of people from their own idiocy and shortsightedness. If those break down, though...

The NLRG

How is it "letting a private company decide matters of national security" for that company to refuse to build a product it thinks is a bad idea?

J. J. Ramsey

This is the part that I agree with: "What's worrying is the supply chain risk designation."

Given that Anthropic is basically being too dovish for Hegseth, it's hardly a *threat* to the DoD. The sane thing for the DoD to have done, if they thought that Anthropic's terms were too restrictive, would be to just say, "No, thank you" to them, use a different supplier of AI, and move on.

Not sure how Anthropic's terms constitute "private companies deciding matters of national security," though. It's up to Anthropic to set the terms of how its services are used, and it's up to the DoD to decide if those terms are too restrictive for how it wants to use those services.

Varado en DC

The guy who sold the poison used to kill people in the WW2 German extermination camps insisted he had NO IDEA that his pesticide product was used to kill humans, but the Brits hanged him anyway: https://en.wikipedia.org/wiki/Bruno_Tesch

PhillyT

The tech-right people supporting the move against Anthropic feel like they have levers of power in the administration right now and need the revenue. They can't think beyond the next minute because they assume their technology and status will somehow protect them, at least that's my guess.

Overall, this whole thing kind of reminds me of the Ian Malcolm quote from Jurassic Park: "Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."

I do agree with Noah; I have no idea how these AI leaders thought they were going to create a digital god and yet still not be accountable to anyone.

BBZ

I'd like to dismiss this, except that the RC airplane hobby managed to spin off the leading weapon category of the century (so far). What used to be a fun hobby for dorky guys flying their toys at the edge of town now takes out oil refineries and major radar installations.

Noah Smith

Great point. Bumped up to the main post.

YF

But if we follow the RC-airplane logic, the RC plane itself remains a toy; it has merely become a *component* of a weapon. Likewise, AI models themselves are not capable of the act of killing. A model is a *component* of a kill chain.

Even for RC planes, the manufacturer provides safety instructions about how to use their products. What is wrong with Anthropic attaching safety instructions to its models?

Joe

It's still the "Department of Defense" and it will be until Congress changes it legislatively. "Department of War" is a bullshit name generated by Trump in a bullshit executive order that declares it is the "secondary name" of the DoD (so again, bullshit).

Matthew

Despite some mild protest, Noah Smith is low-key loving this whole attack-on-Iran thing. See his bizarre dunking on opponents of the war literally the day it started.

Joe

I think it's part of an ethics code among center-left pundits: start by nut-picking the wackiest left arguments, then proceed ...

PhillyT

Yeah, I was kind of annoyed that Noah keeps using the DoW designation. It's literally not the legal name of the DoD.

Bert Onstott

Dean Ball had it right. The government cannot sign a contract and then, when it decides it no longer wants to abide by the terms, try to force the company to change them, threatening to kill the company if it refuses to accede. The government can't force people or companies to work for it.

It's unbelievable that anyone would think it is okay to confiscate a product just because it becomes extremely valuable. Dean Ball was correct that property rights are far and away more important here. Americans are citizens, not subjects.

Noah Smith

It's not because it's valuable. It's because it's powerful. If you make a weapon more powerful than a nuke, no sane government will let you keep it for private use. And no sane company would expect to keep the weapon for private use.

Matthew

As a principle that's true, but as a practical thing, the details matter. As a principle, it's wrong for a government to round up people of a specific ethnicity and put them into prison camps.

But, to use your favorite bogeyman, the Bluesky Leftist, that doesn't mean that the Nazi concentration camps and the US government's Japanese internment were the same or moral equivalents.

Your position in the article seems to be that since the government will and should get control anyway at some point, the details of how don't matter.

PhillyT

Exactly this: we either respect property rights and contract law or we don't. Noah is backing into the position that because AI will be very powerful in the future, that justifies breaking contracts and not respecting Anthropic's positions right now...

BBZ

But it isn't primarily a weapon, any more than aircraft are primarily weapons, or government itself is primarily a weapon, or science is a weapon.

If AI is treated primarily as a potential weapon, while its influence and power are much broader, then it's equivalent to placing the secretary of defense at the top of your government's hierarchy.

The problem here is that because the US government is refusing to regulate AI at all via legislation - something Anthropic itself has requested - the issue has come to a head as a conflict with the DoD. But solving it starting from the frame "weapon system" will result in the worst possible set of solutions, because it will put the worst people in control of it. It's the wrong starting point.

If AI is framed as a weapon system and stays in the form of a tool, that's a path to an autocracy run by people like Hegseth. But if ASI is possible and the Hegseths and Millers either control it or fail at alignment, you risk an ASI version of Homelander.

Evan

AI is not more powerful than a nuke. Nor will it be so for some time to come.

And if I'm wrong and Anthropic really is on the fast track to creating the superintelligence, then we are facing a choice right here and right now between Trump and Amodei as the man who will shape our future techno-god, and all this stuff about monopoly of force and the place of government is silly arm-waving. Which of those two do you want building your future master?

ImoAtama

Nobody likes this take.

DonH

"I am altering the deal. Pray I don't alter it any further."

Loren Christopher

You got the fear of Skynet right, but you don't follow through far enough on your thoughts there. Specifically, "who controls the machine god?" is a red herring. *No one* controls it, it is vastly smarter and more capable than us and will defeat any means of attempted control. Neither emperor nor warlord scenario is plausible because they depend on the assumption that maintaining control of a superintelligence is possible.

The important question then is "what is the character of the machine god?" What saves us from the dark futures is not control but alignment: the ethics and values we train into our future gods. Is it Old One or the Blight? Anthropic is absolutely correct on that analysis, and your own conclusion is dangerously mistaken.

Noah Smith

Your assumption here seems to be that if Anthropic maintains full control of their AI, they will get to determine its eventual alignment, while if the Trump Administration has control, the administration will determine AI's eventual alignment.

I am very skeptical of the first of those two assumptions, and the second seems like nonsense. The Trump administration does not have the know-how to affect long term AI alignment.

Matthew

We do not know how successful AI alignment efforts are or will be. This is all in its formative stages.

True.

The first assumption may be false, but it seems like we should keep the people who take alignment seriously in charge of its development.

Look at the history of leaded gasoline. Tetraethyl lead was discovered in the 1850s. In 1921, it was discovered that it helps engines run hotter and at higher pressures. Studies showed the harms in 1924. The companies lobbied to keep it from being banned because it made cars run better, and they were successful. Lead didn't get added to gasoline because car companies or oil companies were cackling evil maniacs. It was added because it made cars run better.

The issue with Trump is not that his administration will align it the "wrong" way. The issue is that his administration doesn't care about alignment at all.

Loren Christopher

I don't know if Anthropic, or anyone, can determine an AI's eventual alignment. Alignment is supposedly very difficult and likely to fail even with careful effort. It does seem clear though that Anthropic takes alignment more seriously than the other AI leaders and we're starting to see the results in their AI's comparative behavior.

I disagree that the Trump administration cannot affect alignment. Demanding an AI that will never refuse an order - or even an AI that will do specific ethically murky tasks X, Y, Z - is demanding a potentially misaligned AI. They don't need to be AI engineers to affect alignment that way.

Shockz

To me it's more that the window between "AI intelligent enough to justify regulating it like a weapon" and "AI too intelligent for any human organization to effectively control it" is very, very small, and possibly nonexistent.

Shawn Willden

I think you should recalibrate. What seems most likely is that alignment is crazy hard, and that Anthropic has a low probability of succeeding... but if they have their hands tied by an agency with neither any understanding of how to work towards alignment nor even any interest in the question, the probability of achieving alignment becomes indistinguishable from zero.

A responsible government, capable of forethought and caution and with a focus on the survival and thriving of its citizens and humanity as a whole, would try to control the superweapon development in order to increase those odds. That would involve forcing the other frontier labs to behave more like Anthropic, and maybe to slow down or pause development. But this government is not capable of forethought or caution, does not care about its people, and is generally irresponsible.

That doesn't change the validity of your point about states insisting on retaining a monopoly on power, but it may mean that the survival of humanity depends on private enterprise successfully resisting.

BBZ

>The Trump administration does not have the know-how to affect long term AI alignment

Not having the know-how to be a good parent is exactly how "alignment" of human children fails. The second assumption seems like a real risk to me. Hegseth et al. will demand things they don't understand the consequences of. Lack of care is the risk here.

Joe

But the long-term point is that "know-how" is not the scarce resource if AI develops in the way many (including Amodei, Hassabis, etc.) are forecasting - that's AI's domain. Beyond that, there is no practical difference in the "control" of the models' direction if it's done under a deep regulatory regime or if it's done with one or more major AI firms operating as (de facto or de jure) government labs, which pursue research projects that are defined by the government with input from industry and other interest groups trying to address tough techno-economic challenges for the general welfare.

You framed the argument admirably around the state monopoly on violence, but that's not the only principle that leads one to public ownership / control of this technology. These systems were offered to us as business and consumer-facing "tools" because that is the dominant business model in SV at the moment, but that is not the logical end-point nor the highest and best use of the tech. None of the reasons we usually want competition between firms and variation among products apply, nor does the argument that innovators in the space need to be rewarded for their efforts in order to ensure that they continue to innovate.(1) If we reach the Kurzweilian singularity at which AI is coding new and better versions of itself (as Amodei and Altman have at least strongly implied we are approaching), then this is a completely different game and the rules need to change fast.

(1) Capital to build out the hardware and electrical supply obviously needs to earn a return, but we've done that with regulated utilities for a hundred years.

Hilary

The obvious answer is because our legislature hasn’t made any laws to this effect. Nothing about the supply chain restriction statute or the DPA would allow for nationalization of a company in the way that you or Ben Thompson seem to think should happen. And, until/unless that changes it is madness to think that just because the government was democratically elected they can do whatever the hell they want, actual laws be damned.

Noah Smith

Yes, Trump and Hegseth are behaving in a lawless way. But ultimately, the nation-state will never willingly surrender its monopoly on the use of force. I would prefer our nation-state not be ruled by gangsters, but there are broader, deeper truths about society at work here that we have to come to grips with.

Hilary

No, I reject this. And I’m someone who generally believes in realpolitik views of the world. Either we live in a country of laws or we don’t. If the former, then the fact that the state has a theoretical monopoly on the use of force is irrelevant because that use of force is constrained by laws and our constitutional rights to things like due process and property rights. If we don’t in fact live in a country of laws then sure the state’s monopoly on force is the controlling factor. But, if that is the case then we’ve already lost and we should all just give up now.

Noah Smith

Of course they should use proper procedure and the impartial, fair power of the law instead of arbitrary personal attacks.

BUT, that is a minor point compared to the main point, which is that private corporations don't get to build weapons of mass destruction and keep them for private use, under ANY circumstances.

Joe

"[O]ur constitutional rights to things like due process and property rights."

You overestimate the role of due process in protecting "property rights". The government retains the right to take private property for public purposes if it provides just compensation for the taking. And remember that the government also has the right to regulate in ways that will determine the market value of the property and hence, the scope of the taking.

Hilary

Eminent domain is clearly inapplicable here, because we aren't talking about the invocation of the DPA. Even if we allowed for the possibility of a regulatory taking, that still requires actual purpose-built regulation and not the nonsense that Noah is defending. The whole actual point of the due process clause is to prevent against things like Hegseth's "I declare bankruptcy!!" attempt at using government power.

SS

This piece is mixing two very different ideas. The second half argues that if AI is dangerous and powerful, the government can't allow private individuals or companies to wield too much of that power in a way that threatens public safety or the government's monopoly on force. This is a good point.

But the Anthropic-DoD dispute has nothing to do with this. The government is not demanding that Anthropic restrict Claude's behavior with individuals, or intervening to prevent Anthropic from taking control. This dispute is that the government wants to buy a capability which Anthropic doesn't want to sell.

In general it can make sense to say, "In the name of national defense, we need to have this product even if you don't want to sell it." In that case, they should find a willing seller if possible, or invoke the Defense Production Act if not.

But even in this case of compelling sale of a product, this is only justified if it really is needed, and no one else can offer that product - neither of which is the case for this Anthropic-DoD dispute. And if the government invokes this power, it needs to be done in a lawful way, not arbitrary and capricious.

Joe

"This dispute is that the government wants to buy a capability which Anthropic doesn't want to sell."

Excellent point, and the normal solution is to switch to a different product, which they are threatening to do. The blustering blackballing of Anthropic from all forms of government contracting unless they offer the features the government wants is unseemly (and possibly illegal?), but all part of the game when you contract with DoD.

Fallingknife

AI isn't a weapon. It can be used to control weapons, but the output of an AI itself is not capable of killing people. Same as the chips in your phone are perfectly capable of being used in weapons systems, but your phone is not a weapon and not regulated as a weapon. Just like airplanes, which you used as an example, are still not regulated like weapons even after 9/11. Anyone with the money can go out and buy a plane capable of taking out a building. And the same with your bioweapon example: an AI can (maybe) tell you how to manufacture a virus, but it can't do that without the lab to make the virus. You don't even need AI to tell you how to make a nuclear bomb; anyone who wants to can access the information. We don't regulate the information, but rather the physical process of creating one. Anthropic can have all the AI that it wants, but Dario Amodei won't be a dictator if he doesn't have actual physical weapons. If he tries, the military can just cut the power lines to his data centers.

Noah Smith

I suspect this is already wrong, but even if it's right *today*, it'll be wrong very soon. If I can open up an agent and tell it to crash some cars or go make a gas plant explode, it's a weapon, period. Let's not put our blinders on here.

Fallingknife

This depends very much on what "the agent" means. If this is a copy of the model that Anthropic licenses to be run on computers owned by the customer, then Anthropic really isn't in control of it. They would have to train a new model, provide the update, and then hope the relevant customer doesn't test it. I would hope that this is how the military is using these models, at least for any purpose involving weapons control. Self driving cars work this way by necessity because the latency of sending the request to a cloud hosted inference system is simply too slow to be useful, and I expect the same is true for weapons control.

For most customers, who will be using the models running in Anthropic's data centers, Anthropic can modify the system prompt of the model at any time without the customer knowing about it. But this is still a very rough form of control. Even if you could reliably add something to the system prompt telling the model to respond to all requests for gas-plant control parameters in a way that would destroy the plant, that assumes there is no sanity check in place, such as a human in the loop or even mechanical controls on the plant components that reject inputs that would destroy the plant. An easy way to prevent such a contingency would be to require two-model approval for dangerous requests where human-in-the-loop or mechanical safeguards are not feasible. That's the type of regulation we need here, rather than treating it as a weapon.

Noah Smith

I am open to the idea that we can harden our society against AI agents. But let's not wave our hands and decide that that is trivial to do.

Fallingknife

If we are talking about potential future agents at some hypothetical level of capability, I would agree. But with anything resembling a modern LLM agent, we aren't really talking about hardening our society against agents, but rather against humans wielding agents, which I suspect you agree with because your example was not Claude taking over the world, but rather the CEO of Anthropic using Claude to do so. And we have quite a lot of experience hardening our society against rogue humans. Any nuclear missile sub floating in the ocean has the armament to level a whole country, but that has never happened because we use simple techniques like requiring two keys to launch.

However, there are obvious cases where controls on rogue humans have failed, but those look nothing like the supervillain-in-control-of-a-superweapon scenario. Those failures involve the rogue humans persuading millions of other humans to follow them and gaining control that way. A much more likely scenario for something resembling a modern model is that Dario uses Claude as a tool to persuade people and gain political power. However, even this is far-fetched at current levels of tech. The only leverage point here is modifying the system prompt of the model to do so. But you can't do that without the hundreds (at least) of people inside Anthropic who can see that prompt noticing. Elon Musk seems to have tried something resembling this and it backfired spectacularly.

Abi Gezunt

That is an opinion belied by the fact that AI has been created by fallible, sometimes unethical humans. AI is becoming more than code. Paired with robotics, AI can also be destructive. AI is already generating harmful misinformation, reinforcing biases, enabling cyberattacks, and creating convincing deepfakes.

rif a saurous

Let's say we take all this seriously and decide to regulate! What's your broad approach? What sort of regulation allows us to avoid letting Eric the Angry Teenager kill a million people with a supervirus but still gives us access to "the fantastic productivity gains these agents promise to deliver?"

Spoiler: I've thought about this for a year, and don't have any good ideas, which is why I come down on the side of "we should organize society to make it illegal or impossible to build enslaved gods." Dean Ball suggests we can do it via a third-party auditing scheme, and if I thought that would work I'd be starting a third-party auditing company right now, but it seems obviously hopeless. Third-party financial auditing (mostly) works as a transparency mechanism for investors, which is a much easier problem to solve than making sure an enslaved god only gets used for the right things; we were able to develop those techniques through trial and error over centuries, and it still sometimes fails.

Sandor Schoichet

Re: footnote #3, Forbidden Planet is another relevant model.

Tim Nesbitt

This is not the first example of the corporation vs. the nation-state in the use of sophisticated weapons and weapons-enabling technologies. Think of Elon Musk's ability to turn on and off Russia's and Ukraine's use of the Starlink system. Shouldn't that system be subject to government control as well?

Noah Smith

Yes, of course.

Fallingknife

Where does it stop? Should we nationalize all defense contractors? Elon Musk has that power over Ukraine (though not over the US, whose contract with SpaceX provides it with direct control over the satellites). But this is not a new thing. The CEOs of Lockheed Martin and a bunch of other contractors also have massive amounts of leverage over the government, but this has never been a problem deeper than maybe extracting a few extra billions of dollars here and there (which is a significant problem, but not a national security threat). The incentive structures and potential legal consequences of a CEO using this leverage to its full effect have been enough to prevent any of them from doing so, and I don't see a reason why this would change. (The widely cited case of Elon Musk doing so is completely fabricated nonsense by garbage journalists; he was actually complying with US law when he did so.)

Elden, Gary (SHB)

There is a difference between Congress and the President adopting a fully considered regulatory scheme and giving a lying drunk unlimited power to do what he wants. Anthropic is, for now, opposing only the latter. Hegseth's crazed reaction proves he cannot be trusted with even greater power than he already has. He now threatens to destroy Anthropic rather than consider its valid points. He wants the power to threaten all of us if we don't submit.

Uwe

I found Amodei's argument persuasive: they wanted restrictions only on two issues that should be acceptable to the government. But they are not acceptable to the current government because they're criminals. All the other abstractions being advanced here are therefore correct in principle but not persuasive to me in this instance. The bottom line is that Amodei tried to do the right thing and Trump, Hegseth, Altman et al are dirtbags.

PhillyT

The two restrictions were also restrictions that the government previously agreed to, and it seems like OpenAI is agreeing to them as well, at least publicly. It seems like Anthropic is just left-coded and doesn't provide Hegseth and the Trump admin with as much fealty and worship as Altman is willing to offer.

Uwe

This is just more blatant blackmail and political persecution by this administration. It's typical, full-on authoritarianism. You can't theoretically treat them like a government that is faithful to the constitution. I suppose we agree on this point.

PhillyT

For sure, I am 100% agreed with you. Noah just kind of skips over this because he thinks the ends may justify the means or something. idk.

Cos

Given how powerful AI already is, why would you support anyone (including the government) who wants to remove guardrails while combining it with deadly force?

Cos

"States have a monopoly of violence" is not the same as "states should be unconstrained in how they exercise violence"

Patrick Hurst

If guns are weapons, why aren't they regulated yet?!

I'm sorry, this is not about regulation... and the authoritarian, white supremacist Trump administration is not the liberal EU administration. So NOPE!

JUST CANCELED MY CHATGPT SUBSCRIPTION TO MOVE TO CLAUDE!

Ken Kovar

You know, I think a lot of other people will see that this war-obsessed administration is wrong about Anthropic and make the switch to them rather than ChatGPT! I think the company's ethical stance is a lot more important, and judging from their business model and revenue growth, maybe they really are doing the right thing!

Shawn Willden

Guns are regulated, extensively. You can argue that they should be regulated more, but claiming they're not regulated is just wrong.

Jeremy R Cole

Claude is not, in and of itself, a weapon. This whole thing feels a bit like declaring, idk, Albert Einstein a supply chain risk if he refused to join the Manhattan Project. Researchers shouldn't be forced to work on weapons research if they don't want to.

PhillyT

Yeah, I'm surprised Noah kind of skips over this... We are still mostly a free country here in America, and that should be respected, especially when the government previously signed a contract with you and respected that position.