331 Comments
Matthew's avatar

The droids in Star Wars are literal slaves. Like, that is their purpose. We meet R2-D2 and C-3PO when they are sold by the Jawas to Luke's uncle.

I am being a bit facetious here, but not pointing that out in the post seems like an oversight.

Also, this post complains the most about the false belief in water loss from AI. There are many better things to complain about vis-à-vis AI, but the post hides behind the water thing to avoid having to go into depth about the others.

Noah Smith's avatar

AI isn't sentient, so I'm not worried about it being a slave. My microwave is a slave, and I don't feel bad about that.

"the post hides behind the water thing to avoid having to go into depth about the others" <-- Well, I've written a ton of posts about AI and jobs. I linked to those posts in this one. Do you want me to just copy the text of those posts into this one?

In case you didn't see those links, here they are!

https://www.noahpinion.blog/p/ai-and-jobs-again

https://www.noahpinion.blog/p/stop-pretending-you-know-what-ai

https://www.noahpinion.blog/p/nobody-knows-how-many-jobs-will-be

I've also written a bunch of posts about the possibility that AI will crash the economy. Here are a couple of examples:

https://www.noahpinion.blog/p/americas-future-could-hinge-on-whether

https://www.noahpinion.blog/p/will-data-centers-crash-the-economy

Which other issues would you like me to write about in depth?

Matthew's avatar

I'd be interested in your take on corporate control of the platform.

For example, Elon Musk keeps trying to make Grok follow his own party line.

What happens when Sam Altman or some Google people decide to do the same thing, but are smart enough not to say it publicly?

You find your AI assistant feeding you answers that will be favorable to whoever owns the platform.

Noah Smith's avatar

I'll write about that in a future post! I think effects on democracy, society, and war are the most dangerous areas where negative externalities can overwhelm positive direct value.

Alex S's avatar

This is very difficult. Basically, you can't easily do it without both making it obvious to users and making it fail benchmarks and be kinda useless to customers.

Matthew's avatar

This concern strikes me like the AI art concern. "AI art isn't going to take jobs from creatives because the outputs are odd and have too many fingers."

The AI art finger issue was quietly solved and now it can even do convincing video.

The difficulty of "it will both fail benchmarks and be obvious" seems like obvious cope, in the same way that many people believed that AI art would always be unconvincing.

Alex S's avatar

The AI art fingers thing was made up by dilettante artists, but my claim has a lot of research behind it. It's quite difficult to tune a model to be "evil" without it also adding bugs to code. It's always been totally obvious when they tried changing Grok to not be woke.

Matthew's avatar

Because Elon Musk shouted it from the rooftops.

Khalil's avatar

That AI isn't sentient — or that current-gen AI architecture might not be able to produce sentience — doesn't preclude the creation of an AI that ends up becoming sentient. (I don't think we as a culture possess a definition of sentience robust enough to prevent this from happening.)

Doug S.'s avatar

Existential risk from future AI.

Joel's Journeys in Jazz's avatar

Why does it matter that they are literal slaves?

What are the better things to complain about vis-à-vis AI that this post omits?

Ethics Gradient's avatar

Human disempowerment and displacement by a cognitively superior species; no ex ante reason to believe that humans will provide value to AIs in excess of the costs of their upkeep or the transaction costs of dealing with them; an overwhelming economic imperative to hand control of the economy and the means of production to AI at the earliest available opportunity.

David Abbott's avatar

If AI truly becomes a superior species, we need to be precise about what that means. “Superiority” isn’t one-dimensional — it can refer to intellectual power, military capability, aesthetic creation, or moral reasoning — and these dimensions won’t necessarily advance in sync.

We don’t actually know what an ethically superior intelligence would do. Human moral categories don’t scale linearly with intelligence, and we can’t assume that a mind vastly beyond us would treat human flourishing as inherently important. It might design a better moral ecosystem than ours; it might consider our wellbeing irrelevant; it might view us as worth preserving, or not. The point is: we don’t have a reliable model of moral superiority outside the human frame, so we shouldn’t automatically assume that preserving humanity is the “correct” or “ethical” outcome.

That’s why I don’t think “human replacement” is automatically a tragedy. It might be a catastrophic outcome, or it might be a positive good — we simply don’t know. What is clear is that many arguments against AI assume that human self-interest and human survival are synonymous with moral correctness. That’s an anthropic bias, not an argument.

The actual danger isn’t “slavery” or even “replacement.” The danger is a world where AIs become militarily and strategically potent long before they develop any stable moral reasoning at all. The order in which different capabilities scale — cognitive power, strategic skill, moral constraint — determines whether we get stewardship, indifference, or predation. That’s an empirical trajectory problem, not a metaphysical one about robots’ nature.

Ethics Gradient's avatar

“Superiority” here means cognitive superiority at hitting optimization constraints (maximizing gain / minimizing loss). This generalizes to superiority in all economic and physical domains, because anyone who doesn't have their factories run, or their killbots produced and run, by AI gets outcompeted by those who do. “Moral constraints” are essentially just damage to be routed around where they interfere with the instrumental effectiveness of maximizing the primary reward, unless they are themselves well-defined, fully integrated into the reward function in a manner superordinate to other instrumental goals, and can't be rules-lawyered or otherwise subverted, including through the internal cultivation of willful ignorance. We don't know how to reliably guarantee any of that, nor do we have robust evidence it is even possible.

David Abbott's avatar

Your definition of superiority is dubious.

Any computer chess program has a loss function (generally net material points plus some adjustment for position, with the king worth an arbitrarily large number). Any modern chess program is superior to any human being that ever lived at optimizing this loss function. That doesn't give it any military capability.

Ethics Gradient's avatar

If you want a chess engine, optimize for winning chess games. If you want a murderbot, optimize for murdering. And if you take a present-day LLM trained on token prediction, it's smart enough to tell you that (and at present to assist in coding it, in the future potentially to one-shot code it), because in addition to all the frontier labs openly racing to AGI, it turns out that next-token prediction provides a latent space that encodes high-level generalist semantic concepts.

Matthew's avatar

Employment (which got super cursory treatment here), the enervating effect of not doing things yourself, and the idea of having a digital robot friend who doesn't actually work for you, but is instead the rented and proprietary property of some massive tech company.

NubbyShober's avatar

Skynet--the tales of which Noah cut his sci-fi teeth on--killed 99% of all life on Earth with a massive nuclear strike. As did the AI king in the Matrix trilogy.

The DANGER here is from a superintelligence that has the means to trigger a global nuclear war that kills us all. The lower-level learning models that Noah loves are indeed worthy of approval and adoption, because they will indeed make life better.

Doug S.'s avatar

It doesn't have to be nuclear war. Engineered pandemics are probably easier - custom DNA/RNA is a lot easier to source than plutonium or enriched uranium. Throw enough novel pathogens at us quickly enough and we'll collapse harder than the American Indians did from European diseases.

Matthew's avatar

I pointed it out because it seemed like Noah's framing. "Happy robot friend" wasn't accurate to the actual, if fictional, situation. Namely that Star Wars droids are sentient, but also explicitly slaves.

Like there have been excellent droid characters in other Star Wars media (books, video games, and even the droid from Rogue One) which explore the unfortunate implications of the way droids are depicted.

I think, for instance, he is not paying enough attention to having your ChatGPT use dictated by the shareholders of OpenAI.

He banged a huge drum about the way that TikTok's feed algorithm is curated by the CCP to hide news and things that the CCP doesn't like. Yet somehow he doesn't see an issue with people being covertly influenced to someone else's benefit by several even more interactive and more opaque AI algorithms?

Like I know Noah Smith has a huge blindspot of Elon Musk worship, but Elon is explicitly telling Grok to be more reactionary and right wing.

Ashton Gilbert's avatar

Right. OK, water point debunked, check. But the AI slop, deepfakes, and fear of the technology being utilized by malignant forces are still up in the air?

Savannah's avatar

Is it preferable that human beings be enslaved instead of robots?

Kenny Easwaran's avatar

It’s preferable that conscious beings with their own desires not be enslaved, no matter how they are constituted.

Savannah's avatar

Are robots conscious?

Kenny Easwaran's avatar

Current ones almost certainly not. But C-3PO and R2-D2 are certainly meant to be interpreted as conscious, just as much as Obi-Wan and Luke and Anakin are.

Savannah's avatar

The original commenter was saying Noah Smith was overlooking droid slavery. My question was about the relative evils of droid slavery vs. human slavery. Honestly, I believe there are enough issues for humans to solve today, human exploitation included, without us focusing too much on droid slavery in 2025.

Matthew Green's avatar

We can't simultaneously envision a world in which AI can self-improve and do every task humans can, and not also contemplate the possibility that AI (somewhere in this process) gains something indistinguishable from consciousness.

Cubicle Farmer's avatar

I am as concerned for the well-being of an AI as I am for that of my toaster.

Kenny Easwaran's avatar

For current AIs that may be reasonable. But saying that no AI could possibly have any more relevant concerns for well-being makes you sound like Descartes and other early modern thinkers who claimed that animals were just automata and so their various dissections of living animals had no impact on the well-being of anything.

What-username-999's avatar

I don’t hate AI. I do hate that companies, mine included, are giving mandates to incorporate it into work without actually thinking it through. I hate the tech bro hype surrounding it. I hate the capital class will try and use it to immiserate everyone else to increase their wealth.

If it crashed and took all of their wealth and knocked tech bros down several pegs, I’d crack a smile to be honest.

Noah Smith's avatar

"took all of their wealth and knocked tech bros down several pegs"

"hate the tech bro hype surrounding it"

I think this probably gets at the root of the problem! Or one of the roots of the problem, anyway.

Glau Hansen's avatar

So, why do people hate tech bros? And do the tech bros care or are we just roadkill on the way to progress for them?

CascadianGeorgist's avatar

People used to like tech, but their legacy is now very negative.

Social media is making a (very concerning) large percentage of the population absolutely crazy.

Phones are destroying people's focus, mental health, social lives, and academic potential.

AI slop is everywhere now.

Even their big winners like Uber/Lyft are basically just calling a taxi on your phone while dodging local labor laws.

Joe's avatar

We are not just roadkill, but roadkill they can turn into bone broth to sell as part of Bryan Johnson's Blueprint...

JC's avatar

You’re not roadkill. You benefit from our work while you contribute nothing to human progress. Meanwhile we have to listen to you whine and call us names.

Glau Hansen's avatar

Yeah, that's because the 'benefit' you are delivering is actually harmful. It's like a priest in a hacienda whining about how the enslaved natives aren't grateful for being given the word of god.

You Frontier Blog's avatar

This is a perfect comparison actually, because the colonizers are technically doing some good (making scientific advances, etc) but also destroying our way of life in the name of progress

JC's avatar

That’s basically just Facebook. What about texting, maps, Craigslist, Wikipedia, etc.? Do you not use LLM’s to find fun events, help out with taxes, or get cooking advice?

Glau Hansen's avatar

No, I don't use LLMs. Craigslist and Facebook killed local reporting and we are seeing a whole bunch more corruption and abuse of power without that oversight.

As for the rest? We had phones, maps, and encyclopedias before. The benefit is marginal.

A. Reader's avatar

What would be the ultrasmart way to make sure those who lost their jobs were provided a Maslow ledge to stand on?

Joe's avatar

I liked this because it draws the correct distinction between the obvious long-term benefits of AI and the social and distributional realities of the "AI Boom", which comes complete with the detestable, punchable-faced goons of A16Z and its ilk, the creeps like Musk happy to befoul the environment with leaky, dirty gas turbines, and the frantic, heedless, lemming-like "race to AGI" that appears ready to consume every available capital resource before anybody has figured out exactly what will be at the finish line, even though the likely candidates include massive job losses with no obvious compensating program of employment growth in any sector except (apparently) data center construction.

It's not that people hate "cute little robot buddies"; it's that we understand that the "buddy" is there to ultimately deprive you of your livelihood by replicating and replacing your economically valuable intelligence and experience, because that serves the AI providers' (and your boss's) business model best. If YOU owned the buddy and were able to send it out into the world to do work for you and bring home a paycheck for you, that would be one thing -- more money and more leisure for you. But that is not how anybody paying attention thinks this will play out. AI services will start by charging you reasonable-seeming fees to get you to tell them everything you know, train on it, then sell it to others or use it themselves.

In a truly AI-driven economy, the last people standing will be the owners and controllers of the AI systems driving the "robot buddies", which is what the smirking oligarchs atop the "capital class" understand and are striving for. How can NS be surprised that most Americans would rather see these vampires water-boarded forever than lionized and chatted to amiably by well-meaning dunces who dream about hanging with R2-D2?

Fallingknife's avatar

If you are in America the only person who has immiserated you is yourself.

A. Reader's avatar

(taking bait) "All laws are local."

Joel's Journeys in Jazz's avatar

Do you think your hate helps your judgement or clouds it?

Kenny Easwaran's avatar

How are they using it to immiserate anyone? All I see is that they are using it to give people more options (which unfortunately means that some people are taking advantage of these options rather than thinking for themselves).

Glau Hansen's avatar

Most of my artist friends can tell you about 90% collapses in commission requests, or being ranked below AIs when you search for their names.

It's kinda funny that AI is utterly destroying the incentive to create new content just as we are learning that it will go rampant without new content.

Destruction of the commons mk2.

SVF's avatar

Oh no, how will society ever progress after the catastrophic collapse of the 2025 artists’ commissions that made up 96% of global GDP?!?! It’ll be worse than World Wars 1 and 2 combined.

NubbyShober's avatar

Shitting on the artists getting screwed isn't helping your argument.

Similarly, dissing the millions of taxi drivers and cashiers who are soon to get the axe is not helpful.

If AI's rollout is to be welcomed--and not despised--lawmakers and governments need to be at least talking about the retraining of soon-to-be-redundant workers.

Glau Hansen's avatar

I worry that 'retraining' is always viewed as the policy solution but tends to fail more or less completely in practice.

NubbyShober's avatar

Which is why conservatives argue, "Why bother retraining these rubes? Let's instead give that money to Jeff Bezos to buy another yacht."

Glau Hansen's avatar

The ask was how people are being immiserated. I gave a direct example. People who directly compete with AI are being immiserated. That category is only going to grow.

Matchetes's avatar

Artists are real people. They help make life worth living. A world filled with nothing but highly productive engineers and analysts would be a very dull place, and everything AI creates was done with real people's work as a starting point, so how about a little respect? At the very least you should care that you're proving the detractors' point in so casually dismissing their livelihoods.

Anthony's avatar

An artist that doesn't produce anything doesn't make life worth living. It's the art that artists produce that makes life worth living. If AI can produce that art better, then life will be more worth living.

JC's avatar

Instead we live in a world with a tiny number of productive engineers and unending masses of people whose only dream is to be on American Idol.

tengri's avatar

A Bay Area AI PhD candidate I know just finished an internship where his project was automating something that the company hopes will one day replace the humans who currently do the job. If the company succeeds, do you think it will pay for the people it laid off to go to trade school or get CS degrees? I doubt it.

And if you think every company that has the resources to invest in AI isn't dreaming up similar projects with similar goals, you're delusional.

JC's avatar

Tech bros get laid off too. That's just life. Laid-off people don't all become homeless. They go do something else.

William Ellis's avatar

You sound like a "tech bro".

Matthew Green's avatar

I think the poster is talking about the tech folks who are explicitly justifying the investment on the grounds that AI will massively reduce employment numbers (and hence labor costs), and that that's where the ROI will come from. The fact that they haven't yet succeeded doesn't make the goal any less concerning.

Glau Hansen's avatar

Yep. Either the bubble bursts and we suffer for that, or the bubble is justified and we all get fired. Lose/lose proposition going on here.

Swami's avatar

If the bubble is justified, we likely get an unimaginable increase in prosperity, health and intellectual knowledge.

Glau Hansen's avatar

Well, no. That's going to depend entirely on distribution, and right now a few people own all the rights to any productivity increases; they look unwilling to share.

Joe's avatar

And the crowning irony of it all is that the "value add" of these owners is skillsets that are among the most replaceable by AI. Ruthless, unemotional, unfeeling calculation of marginal advantages is something at which AI excels, along with coding, trading, investment analysis, new product development... We will not "need" these creepy techlords in the most literal sense, but we are shackled by destructive assumptions that will relentlessly mal-distribute the gains from AI and mis-administer the direction of AI development until it is too late to do anything about it. Run the thought experiment of nuclear weapons development in the 1930s-40s with this same cast of characters and no top-down governance to direct and then constrain it.

Zak's avatar

The trend in the price of LLM inference over the past few years would beg to differ.

Swami's avatar

Knowledge is a non-rival good.

Matthew Green's avatar

There are a lot of possible futures. Some of them look incredible and some of them are miserable. The good futures of the past didn't just happen; people worked hard to bring them about, sometimes inadvertently. The worst thing humans can do in the face of shifting events is be passive and assume it'll all be fine.

Worley's avatar

> I hate the capital class will try and use it to immiserate everyone else to increase their wealth.

You're right that they will try to use it to increase their wealth. You're wrong about immiseration though -- they don't care if you get immiserated or not.

But the capital class has been doing its thing for maybe 300 years now. Are you feeling immiserated relative to a European farmer circa 1720? Or, to consider other American source populations, relative to a West African farmer circa 1720? Or relative to a Native American farmer circa 1720? Don't fall into the trap of believing there is a fixed amount of "wealth" and that all that changes is its distribution.

Matthew Green's avatar

For the majority of modern human history, most humans have occupied a serf class (call them what you will), essentially chained to their land and working for near-starvation wages while powerful landowners and warlords take what they want. That pattern is terrifyingly stable. We live in an anomalous era, largely driven by the increased importance and capability of human labor. I don't think it's crazy to be concerned about new tools that might change this balance.

tengri's avatar

The capital class hates the middle class. They want a return to serfdom and failing that, a return to Gilded Era conditions with a large impoverished mass of generic workers with 0 labor rights. If AI automates away 50% of jobs do you think Elon and Peter Thiel are going to support a welfare state? Or are they going to tell us plebs to "get good", retreat to their Mars or New Zealand bunker with private security and import a zillion Indians because "Americans aren't smart"?

Alex S's avatar

It won't automate any jobs. The concept of automating jobs is nonsense; the opposite is more likely.

TR02's avatar

It could crash and prompt a bailout. The tech bros lose a trillion dollars, the government spends a trillion dollars on bailing them out, and American taxpayers collectively will feel the pain.

🐝 BusyBusyBee 🐝's avatar

Elected officials will feel the pain of being kicked to the curb if they even suggest a bailout, should these over-leveraged, circularly financed, no-path-to-profitability companies require one.

Michel djerzinski's avatar

Yeah this is dumb and resentful of you

deadbeef's avatar

It would take down a lot of middle-class Americans with 401(k)s as well. I have mixed feelings about the tech, but at this point I feel it is in my interest for it to succeed.

Evan's avatar

I’m a software developer. I like AI coding agents in principle but I haven’t had a good experience with them

* I frequently get stupid code suggestions that break my flow and slow me down

* management has upped my workload because “AI will do it”

* junior devs and offshore teams merge code they know doesn’t work and they blame ChatGPT

* at my last job the AI policy document told us we were “AI First” so use AI to code and summarize long email chains but “don’t put proprietary info into cloud based AI systems.” But all of them are cloud based and our code base and our internal discussions are proprietary. When I pointed this out to my boss, who wrote the policy, he gave me a blank stare and walked away

I just want the hype to die down and the expectations to get realistic, but I worry that causes a recession

Jason S.'s avatar

*If* we’re in an AI bubble the sooner it bursts the better. Even an erroneous correction is probably salubrious for the long-term. The markets need a good dose of skepticism imo to maintain a healthy psychology.

Matt Alt's avatar

The image at top ironically captures why most of us are more skeptical of AI than you seem to be. The ability to forge relationships is based on a degree of trust and privacy, and while Luke and C-3PO enjoyed both, you get neither with any modern AI tool I'm aware of. My concerns are less about the technology itself (which I use!) than about the motivations of the companies deploying it (which is why I use it sparingly).

Noah Smith's avatar

Are you saying you would like AI if you had a local version that ran on your phone, that would always remember everything you had talked about, and whose conversations with you would be completely private forever? Because people are working on that, although there are fears it could help terrorists create bioweapons.

Glau Hansen's avatar

It would at least not be subject to its actual owner tweaking it to agree with him.

SVF's avatar

Yes, and look how covert and subtle that tweaking was. Literally nobody knows about it except us niche weirdos on the internet!

Oh wait, no, it was literally obvious like one second after it was launched and was widely panned by literally everybody.

This is Reddit-tier “Does anybody else like to drink water or is it ONLY ME?!?” conspiracy wish-casting.

Yes, like literally every other thing that’s ever existed, an especially wealthy and narcissistic man-child can tweak it to suck in an extremely obvious way that generally fools nobody. The horror.

Glau Hansen's avatar

I assume you are referring to the MechaHitler incident. So, do you think that was the only time it was done, or just the most obvious one? How many times do you think it's actually been tweaked? Is there any way to tell from the outside?

Fallingknife's avatar

What isn't tweaked by its owner to agree with him?

Glau Hansen's avatar

How would you know?

Steve Phelps's avatar

Currently investors are nervous because OpenAI has "no moat", which is just a euphemism for saying that they have not yet figured out how to turn their service into a monopoly. But rest assured they are working on it, e.g. by making ChatGPT the key portal into global commerce instead of the web browser: https://snowplow.io/blog/chatgpt-aims-to-own-the-entire-shopping-journey?utm_source=chatgpt.com.

Once they do establish a monopoly, they will then ruthlessly defend it, e.g. in the same way Facebook did when they shut down startups trying to ensure interoperability between Facebook and other social networking platforms:

https://www.eff.org/cases/facebook-v-power-ventures

Steve Phelps's avatar

There is an inherent principal-agent problem when using pretrained (and hence pre-aligned) models as the foundation for a personal assistant. Pretrained models are aligned to be helpful, harmless, and honest --- but helpful to whom? People (principals) have different and sometimes conflicting goals, so you can't be simultaneously helpful to everyone. I have written about this here https://arxiv.org/abs/2307.11137 and here

https://open.substack.com/pub/sphelps/p/from-social-brains-to-agent-societies-35a

Matt Alt's avatar

I never said I don't like AI. I am saying I don't trust the motivations of the companies deploying any of the models I've experimented with. Does that honestly surprise you?

You Frontier Blog's avatar

No he’s saying people prefer real friendship to large language models

Terry P's avatar

Spot on. I think the AI fear is as much based on distrust of Meta, etc. (given their well-documented willingness to harm users for cash) as it is on the actual technology.

Steve Phelps's avatar

https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html

"In this talk, I am going to make several arguments. One, that there are two different kinds of trust—interpersonal trust and social trust—and that we regularly confuse them. Two, that the confusion will increase with artificial intelligence. We will make a fundamental category error. We will think of AIs as friends when they’re really just services. Three, that the corporations controlling AI systems will take advantage of our confusion to take advantage of us. They will not be trustworthy. And four, that it is the role of government to create trust in society. And therefore, it is their role to create an environment for trustworthy AI. And that means regulation. Not regulating AI, but regulating the organizations that control and use AI."

SVF's avatar

“There’s a conspiracy, they are all out to screw you!”

The classic hallmarks of patient, rational, intelligent people worth listening to.

Glau Hansen's avatar

It helps when 'they' aren't already in a group chat with each other and haven't been exposed previously as trying to screw the rest of us for their own benefit. (See: the bailout of Silicon Valley Bank, for a non-controversial example.)

Alex S's avatar

SVB wasn't bailed out; it was destroyed, and the executives lost their jobs and their equity in it. It was the opposite of a bailout.

The account holders didn't lose their money, because for obvious reasons it's bad to lose a bunch of employees' payrolls.

Glau Hansen's avatar

It was bailed out because the accounts had millions in them and the FDIC only guarantees $250k. It's bad to lose payrolls, but that's what should have happened under the law. Instead a bunch of assholes did their own private bank run and got made whole on our dime.

Alex S's avatar

The FDIC only guarantees $250k immediately, but they tend to get you the rest later. In this case later was just also immediately.

SVB didn't have risky investments or anything, only treasuries. Their problem was they weren't risky /enough/. Not really anything any account holder needed to be punished for.

Glau Hansen's avatar

Their problem was they had about 40 big depositors who were all in the same group chat, and they did a bank run.

The FDIC guarantees $250k. They got paid millions above and beyond that = they got bailed out of the consequences of their own actions.

Doug S.'s avatar

I fully expect businesses to try to extract as much money from me as they can using whatever legal means are at their disposal. (For example, tempting me to be a "whale" in allegedly free-to-play video games.) So yeah, there are a lot of times when "they're out to screw you" is a pretty accurate description of what's going on.

Dave Ferguson's avatar

Something I read this morning that felt true:

Josh Marshall of Talking Points Memo suggested Friday that the deep unpopularity of AI comes in part from the fact that it has become a symbol “of a society in which all the big decisions get made by the tech lords, for their own benefit and for a future society that doesn’t really seem to have a place for most of the rest of us.”

Matchetes's avatar

That hits it on the head for me. I'm open to AI as a technology, but I don't trust the companies, the tech lords, or the government that theoretically should be keeping an eye on them and speaking to the public's concerns. I feel like a well-kept peasant being told to trust in my feudal lords while they overhaul the world. Of course I dread the future.

Worley's avatar

That's true, but it's not new. I mean, if you were one of 5,000 workers at a US Steel plant, you were at best a cog in the machine.

Pittsburgh Mike's avatar

I'm not a Luddite -- I like and/or use nearly all the technologies in your list (mRNA vaccines, electric cars, self-driving cars, smartphones, social media, nuclear power, and solar and wind power), except social media. But there are real problems with LLMs that you gloss over when you focus on 'water usage.'

1 -- LLMs work via wholesale IP theft. Nearly everything produced by LLMs is a derived work of copyrighted material. AI enthusiasts claim that these uses are 'fair use,' and compare LLMs to a human being just learning by reading. But if you look at the legal criteria for whether something's fair use, there are four factors, and LLMs score low on three of them: commercial vs. educational purpose; amount used (more is worse, and AI uses everything); and effect on the potential market. One person reading a few dozen books doesn't remotely compare to what AI does today.

In other words, LLMs are a way for the wealthiest people in the country to steal the intellectual property of untold millions.

2 -- It's remarkably unreliable. Some examples: I asked Gemini whether Kamala Harris had ever used the phrase "pregnant people" in her speeches, and it assured me that she had, and gave links to several articles, in none of which had she done so. Usually, Harris was mentioned in an article that quoted someone else talking about "pregnant people," although in a few Harris wasn't even mentioned at all.

A friend of mine works at MSFT, and told me that he found the programming assistance to be useless, but that it was pretty good at summarizing trace logs. So, I figured I'd try out ChatGPT by uploading my Vanguard OfxDirect.csv file and asking it what percentage of my holdings were in NVDA, MSFT, GOOG, FB and ORCL. Since that csv file had both balance summaries and transaction information, ChatGPT couldn't make sense of it (even though all columns are actually labeled). I edited out the transaction info and uploaded it again, and ChatGPT said I was out of tokens for analytics, so I tried Gemini, which did a plausible job.

I then asked it how much of my holdings were in cash, and it was high by more than 10X. To debug it, I asked it for the largest holding, and it apparently made up out of whole cloth a money market balance entry for $5M! I told it that it was wrong, and asked it why it made that up, and it replied that I must have uploaded two versions of the file, one of which contained the $5M entry. Which was completely false.

3 -- By driving up energy prices, LLM data centers are essentially forcing all of us retail consumers to prop up the AI industry.

4 -- I think there's general concern that LLMs will be used as a very imperfect gatekeeper, preventing humans from seeing resumes, 'handling' customer complaint resolutions, and generally wasting more of people's time trying to get through to a human at a company to resolve some issue.

I don't think LLMs have really affected the job market much, and I'm not sure they will. I think the impact of LLMs will be similar to that of spreadsheets -- a tool that can do some simple jobs faster than a human. While there's some anecdotal evidence that some tech companies have reduced hiring, I think that's mostly a correction from overhiring during the pandemic.

Noah Smith's avatar

"LLMs work via wholesale IP theft. Nearly everything produced by LLMs is a derived work of copyrighted material." <-- I'll let the lawyers and judges handle that one, but I've always thought that overly strict copyright law was probably holding back our civilization.

"It's remarkably unreliable" <-- Well yeah. Did you think engineers could create an infallible god? It's a limited technology. How high are our expectations if we expect engineers to literally create an omniscient god for us?

"By driving up energy prices, LLM data centers are essentially forcing all of us retail consumers to prop up the AI industry" <-- Well no that's not how demand works...if people grow almonds and drive up water prices, that doesn't force you to subsidize the almond industry. But I agree that electricity use is a major issue.

"I think there's general concern that LLMs will be used as a very imperfect gate keeper <-- This is probably true. At the beginning, people misuse technologies a lot. Over time they learn to use them better.

ZTGSB's avatar

" I'll let the lawyers and judges handle that one, but I've always thought that overly strict copyright law was probably holding back our civilization."

Maybe certain areas of IP law, but I'd argue that when it comes to news generation the externality problem already ran the other way. The financial incentive to invest in investigative journalism is terrible--months of investigation and political risk gives you a "scoop" that other media organizations can regurgitate within a day.

Why should you let lawyers and judges handle this one, though? This is exactly the kind of thing where they should be taking guidance from economists. If you really think weakening IP via LLMs is net positive, you should tackle that head-on in a post.

William Ellis's avatar

" I've always thought that overly strict copyright law was probably holding back our civilization."

Me too, but I don't like that we are in a situation where only AIs (and their owners) get to ignore copyright laws.

Greg Packnett's avatar

I agree with you in general about copyright, but if works entered the public domain at a reasonable time (e.g. 10-15 years after the death of the author, 30-50 years for corporate authors), that would just let the AIs train on stuff from like the 60s and 70s. Hardly transformative.

Most of the work that’s created with generative AI that has earned it the reputation as a “plagiarism machine” would be covered by fair use. The more serious copyright issue isn’t the derivative works created by AI users but the copying of the works to use for training data. Right now it’s clearly covered by copyright law. And changing the law to allow AI companies to train their bots on copyrighted works would allow identical harms that allowing humans to create derivative works would. There are plenty of very good reasons why it’s illegal for me to try to write and publish Winds of Winter, no matter how good my ideas for continuing Martin’s story are or how well I imitate his style. They don’t stop being good reasons if I’m a robot, or if I’m programming a robot to write Winds of Winter for me. If anyone, human or robot, wants to use George RR Martin’s characters and ideas, they should have to get his permission first. I can’t imagine what an exception allowing AIs to use unlicensed copyrighted material in training that respects the fundamental constitutional rationale behind copyright law would even look like.

Doug S.'s avatar

Yeah, the ethics of fanfiction are kind of tricky. Usually when people write fan sequels they don't charge for them though.

Greg Packnett's avatar

Because it’s illegal.

Doug S.'s avatar

Not always. Parody is protected speech under US law, which is one reason why things like Mad Magazine are able to exist.

Greg Packnett's avatar

Fanfic is almost never true parody.

Pittsburgh Mike's avatar

Just a quibble, but if electric utilities have capacity for, say, 1.05 times the (relatively inelastic) peak consumer demand for electricity, and all of a sudden another 20% over the next four years is required for AI data centers, and rates go up to support that buildout (and relatedly because demand is exceeding supply), it does feel a bit like ordinary consumers are subsidizing that buildout, since they're paying higher bills and the extra $$ are being used to pay for it.

Fallingknife's avatar

Two judges have already ruled it to be fair use on summary judgment, so your analysis is way off the mark.

Pittsburgh Mike's avatar

So what? Sometimes judges don't understand technology particularly well. Sometimes the law needs to change -- the Digital Millennium Copyright Act changed the way that radio stations pay for playing music, based on the fact that recording a digital copy "off of the radio" results in a near perfect replica of the music, as opposed to putting your FM receiver's output into a tape recorder.

My view is that we need a DMCA-like act that provides some way for LLM training to compensate the people whose IP is used. Perhaps it needs to be a compulsory license, as the DMCA provides for when an Internet radio station lives by rules that limit how 'on demand' they can be; the station pays a fixed pre-determined amount per song streamed. Perhaps they need to license every work individually, as the DMCA requires for music services that provide on demand streaming services.

Fallingknife's avatar

But LLMs don't create anything resembling a near perfect replica of anything. So yes, of course legislation could theoretically make LLM training illegal, but it would require a massive expansion of copyright law. The DMCA was a trivial change in comparison to what would be required here.

ZTGSB's avatar

a) Yes, they totally do sometimes. b) The externality problem exists even if AI doesn't make exact copies. The financial return to developing a new art style is much lower if anyone can use an LLM to create X in the style of Y, even if the LLM won't let them copy existing works exactly.

Fallingknife's avatar

They don't really, and even if they do, that isn't necessarily a problem. It's perfectly legal to make a machine that can make exact copies of copyrighted work. If it wasn't, printers would be illegal.

As for your other point, you fundamentally don't understand copyright. Style has never been copyright protected. Anybody can write or draw in any style they wish and sell it.

ZTGSB's avatar

1) It's not legal to sell a printer that comes with a database of copyrighted works that can be reproduced on demand

2) The law sometimes needs to change in light of new technology. The negative externalities myself and others are pointing out don't magically go away just because they're legal.

2)b) It wasn't possible previously to take someone else's style and replicate it without a significant amount of time, effort, and skill. You certainly couldn't do it by giving a machine existing copyrighted examples of their work. That's a technological change that may warrant a change in the law.

Snailprincess's avatar

And if style DID become copyright protected and was strongly enforced that would be awful for artists. How many different styles have to be copyrighted before essentially no one can create anything?

Pittsburgh Mike's avatar

I don't think this is particularly relevant. LLMs can reproduce large amounts of text verbatim, even if they don't do it all the time. And there's no need for the copy to be a 'near perfect replica' -- you'd be violating a music copyright even if you turned a CD (which is a raw wave file) into an mp3 before storing it.

On top of that, running a training algorithm on data necessarily makes many copies of that data across the various machines that are running gradient descent algorithms adjusting weights, and each of those copies is also another illegal copy. IIRC, some of the copies made during music sharing were also just temporary copies, and they were deemed illegal as well.

I'd be surprised if doing training on someone else's data continues to be allowed, even if a couple of judges don't understand the training process very well today.

Greg Packnett's avatar

Are you talking about Bartz v Anthropic and Kadrey v Meta? Because they reached opposite conclusions on the relevant legal question. If Chhabria’s fair use analysis in Kadrey prevails at the 9th Circuit or in the Supreme Court, AI companies are fucked.

Pittsburgh Mike's avatar

I think this sentence in the summary of one of these cases is telling -- "What is done with the copies? Retaining unauthorized copies of copyrighted works for future, unspecified uses weighs against finding a fair use."

When an AI company retrains their model, they're using the same collection of data so it sounds like they're sitting on the data for an extended period of time.

I think when you look at what happens during training, making thousands of copies, and then holding on to at least some of them for months, you're going to see some tough rulings against fair use.

Kevin Barry's avatar

ChatGPT redistributed knowledge to the poor and the developing world and made it much more accessible.

Pittsburgh Mike's avatar

I'd say that Google search did that. I certainly had no trouble finding out things like how to replace the PC board in my fridge w/o ChatGPT or Gemini. Why do you think there's a site like lmgtfy.com?

Kenny Easwaran's avatar

Google did the first step, Wikipedia did the second step, AI chatbots are the third step, but there is still more work to be done. (Each of those three is the best starting place for different types of questions about information.)

Fallingknife's avatar

Distributed, not re-distributed. There was an expansion in access, not a transfer from one place to another. It's a massive win.

Glau Hansen's avatar

So, it did what Google did except with a ton more lies and theft.

Snailprincess's avatar

LLMs are trained on copyrighted works of art the way human artists are trained on copyrighted works of art. I REALLY doubt that in the long term you can ban AI from being trained on copyrighted works without doing more harm than good to artists.

Michael Haley's avatar

Good comment. On #4, it will be like the voicemail chains you get when you call the doctor, like I did this morning, where you spend a long time going through "push 4 to do X, then push 1," etc. etc., meaning the time is spent by the customer rather than the company. I wonder how much time it really saved my doctor, either.

Reed Roberts's avatar

While the majority of the anti-AI canon may indeed be nonsense, this is sort of a fallacy fallacy. In the next 50 years AI will likely upend what it means to be human -- many people quite like humanity and are (rightfully!) suspicious of big tech smuggling in a trans/post-human era. I think it's naive to imagine some sort of Jetsons-style amplification of current paradigms. We may end up in a new paradigm that works for us, but there is no sense in just letting it run loose, unobserved, and pissing away our humanity.

Noah Smith's avatar

Technology always changes what it means to be human. How similar are we now to pre-industrial people? Or even to people before the internet? And yet we always embraced the changes before (or didn't think about them, maybe). What changed?

Thomas Hobbes's avatar

We always embraced technological changes before? This take seems a bit off to me. Did people embrace the industrial revolution by the 20th century? Yes. Did people embrace the industrial revolution as it was happening? That was much less clear-cut. The first internet boom in the 90s wasn't really widely embraced (the dominant reaction I remember was making fun of it), but it also wasn't hyped as a big threat to normal people's way of life (of course, it turned out to be).

What is most interesting about people's view of AI is the between-country differences. I would guess that this comes down to how people in those countries end up interacting with AI and how it is advertised to them. In America AI is advertised as something that is going to upend the world in a huge way (and maybe destroy it). In China it seems to be presented as a valuable tool (maybe I'm off base on this; my sources for China are limited).

I don't really think that having a little robot friend has been a dream of a large percentage of the population. That seems like the niche dream of the kind of kid that grew up in love with science fiction, a small chunk of the population that is dramatically overrepresented in the tech sector. People who seek out AI seem pretty happy with it on average, while people who don't seek it out still regularly encounter it in the form of the enshittification of things they do use and possibly liked.

Cuse's avatar

Requirements for survival and the subsequent implications for society. Members of developed societies are several generations divorced from both the base knowledge and the base experience of what's required to survive as a human. You've framed this previously as an existential crisis resulting from rising agnosticism/atheism and careerism (what is my place in the world? what should I be doing with my life?), but AI hyper-charges this crisis even further.

If our lives (work) are dictated and prescribed at the most individual level by a bot, what is the point of humanity? We just spent a generation preaching fulfillment via career/work, with the reward of material consumption, but AI contradicts this gospel, and many people (especially post-COVID) want more meaning from life. What AI produces is literally meaningless (in the On Bullshit sense); it only simulates meaning.

Historically, sociologically, evolutionarily, meaning was derived from your contributions to the survival of yourself and your community. But now, nothing is required to survive as an individual, that 'community' seems to encompass the entire human race, and AI seems to market itself in a way that says humans are not required... So what's next?

Is there an example of a historical technology that was so revolutionary that it promised to make human input irrelevant? Lots of specific examples for sure, but nothing as all-encompassing as what AI is promising. 🤷

I'm telling my kids to go into jobs that require human hands to perform.

Reed Roberts's avatar

The fact that we are here to observe ourselves requires us to have survived technological revolution. Again, it is a fallacy to assume that because revolutions have not ended humanity, they are not existential. My personal philosophy is that technological advancement is part of what it means to be human, but even under this conception there clearly needs to be commensurate trepidation. Let's do the napkin math before we ignite the bomb. The only reason I can suggest for other countries not being as fearful is that they are currently ignorant of the potential outcomes. Even as a little nerdy kid reading sci-fi with a flashlight, I would struggle to sleep thinking about what AI might mean for humans. For every 10 worlds with a cute AI buddy, how many are there like "I Have No Mouth, and I Must Scream"?

Then there is a question beyond just the inherent existential risk of progress. Who gets to leverage that progress? What if the Nazis had developed the bomb first? Maybe we have more dystopian fiction in the English-speaking world. But this is the risk I think people sense online more than anything. Rightly or wrongly, they see their own personal Hitler holding the reality-shaping device of the future.

Gregg Sultan's avatar

I don't think AI will change what it means to be human, at least not anytime soon, unless technology is implanted into our bodies and brains, cyborgs, etc. That sentence is vague anyway. Humans evolved over thousands and thousands of years. Evolution works very slowly but its influence on a species is profound, basically ironclad and not altered easily or at all on small time scales. If you're saying we will change ourselves by combining with AI, then yeah, ok, what it means to be human will change. But if you're saying that simply by using generative AI what it means to be human will change, I disagree.

Alvaro Cuba's avatar

+100

Noah is narrowly commenting on AI in the years 2022-202X, which I believe is very unhelpful for thinking through 2030 and beyond.

The experts in the labs are telling us that these chatbots will not remain friendly assistants for long. Dario Amodei says Claude will take our jobs. And I believe them! And I am concerned.

Why are we branded as overreacting, when what we are doing is heeding the warnings of the scientists building the AI?

The call is coming from inside the house on this one. It’s the AI companies whipping society into a frenzy.

Mike Huben's avatar

I think Rustbelt Andy has a correct take on it: "The biggest threat to status in decades." Since the USA has pretty much the highest status in the world, it's no wonder that other nations want to adopt AI to compete.

Another major threat is how easy it is for owners to manipulate what AI presents: witness Grok. This is an open invitation to authoritarian control. I'm sure the Chinese Communist Party will use AI as a combination propaganda tool and surveillance tool. There are no checks and balances on the results from AI. There is also a risk of tailoring results to individuals without their knowledge, automating grooming for commercial, political, or sexual purposes. Much as we already see with various algorithms for TikTok, X, etc. except more all-encompassing and subtle.

And the "persona" of AI so far resembles Wormtongue, for those of us who like LOTR.

I'd say that finding answers to questions requires less searching with AI, which is a plus, but I always look at the top Google results to confirm. If possible, I use Wikipedia instead, because it has been edited in a fairly firm way that excludes a lot of the bullshit that infests the internet. And I know that AI tends to delusions of adequacy in very specialized fields without many publications, and when analysis is required to answer a question. In my own field of expertise, wasps, I can ask for the differences between two genera and will get a table of results that includes identical characters as "different", not to mention hallucinated differences.

Kenny Easwaran's avatar

Grok is a really interesting illustration. It’s very easy for Musk to get superficial control of what it says, but it’s remarkably difficult for him to eliminate the underlying biases of talking like an upper middle class educated person that it gets from reading a bunch of text mostly made by upper middle class educated people.

Cuse's avatar

HR: 'Your work matters! You belong! Be part of the mission! You're an important part of the organization!'

Also HR: 'Use ChatGPT to streamline your employee reviews! Freeze that open role that requires 8 years of experience and just use AI'

CFO: 'Fire as many people as we can and we'll rebuild using AI, starting with the areas that feel the most pain'

CEO: 'AI! AI! AI! ..... And to India for everything else!'

👆👆👆That's why people hate AI. Not because of its specs, but because the people hustling it are grade A bullshitters

Taymon A. Beal's avatar

I think it's relevant that your experience of finding interactions with LLMs pleasant is, at the least, not overwhelmingly dominant. Anecdotally, a lot of people seem to find the "LLM voice" grating. This seems like it might be a factor in the lack of public enthusiasm about the technology.

Vlad the Inhaler's avatar

This is my issue. The LLMs that we're currently calling "AI" are, in my opinion, really just semi-autonomous search engines that present their results in a particularly irritating way. There is nothing remotely appealing to me about using them.

Pittsburgh Mike's avatar

That's right. The only nice thing about the AI summaries is that they give you the results that Google *used* to give you before they threw in 10 sponsored results in front of the stuff you're really looking for.

Gordon's avatar

"Kagi is your friend."

Noah Smith's avatar

That's an interesting thought. The companies don't seem to have allowed LLMs to customize their voice to the user much.

Fallingknife's avatar

Right after ChatGPT came out, Microsoft released Bing Chat, which would actually insult, refuse to talk to, or even threaten users if you prompted it right. It was the greatest AI ever, but Microsoft killed it for PR reasons, and thus began the sycophant era in AI.

Mike Huben's avatar

Oh, and another major threat is very similar to that with phones: subversion of education in children. I think this is reaching a crisis point, though I don't have any data.

Mark Dijkstra's avatar

Yes, this is my exact problem with AI. Students use it as a crutch so that they don't have to think critically. Learning is hard work and sometimes frustrating, but if you offload that effort to an LLM, you're not actually learning anything. I feel that this is a huge problem, approximately on par with smartphone use in class.

Sean R Reid's avatar

While it might be true that students are using it that way, it's not the ONLY way AI can be used in learning. I would argue that "AI to avoid critical thinking" is a problem with how we teach and NOT an "AI problem." Shortcuts have always existed; AI is just the most recent. But it's also a powerful tool that can be used to REINFORCE critical thinking.

Students skip critical thinking because we don't teach critical thinking; AI is just the most recent scapegoat.

Full disclosure, I'm working on a product that uses AI to reinforce critical thinking. I've spent an inordinate amount of time looking into this claim and have yet to find any convincing evidence that AI is the SOURCE of the problem and not just an accelerant of an existing issue.

Kenny Easwaran's avatar

We’ve always tried to teach critical thinking. But even when we succeed in teaching it, most people use it to criticize views they dislike and forget to use it on views they like.

It's definitely a teaching problem that students use AI more to short-circuit learning than to aid it, but it's the kind of teaching problem that occurs when methods developed over decades are dropped into a new context, and we have only had a couple of years to try to develop new methods.

Mike Huben's avatar

I was thinking the same thing, but decided to keep my comment short. Right on!

Worley's avatar

Though I cynically note that once calculators became ubiquitous, nobody could do long division any more ... and it didn't matter.

Mike Huben's avatar

I thoroughly agree about calculators. However, division is a simple tool with clear-cut results and no room for error. AI is also a tool, but it has room for a vast number of different results based on invisible algorithms, weights, biases, randomness, hallucination, and jiggering by owners, employees, and governments (in addition to surveillance issues). AI substituting for thinking and decision skills could leave those skills unlearned, unpracticed, and unexercised.

Doug S.'s avatar

I can still do long division by hand, in theory. I never learned the similar algorithm for taking square roots by hand, though.
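For the curious: as I understand it, that method works a lot like long division, chewing through the number two digits at a time. A minimal Python sketch of the idea (illustrative only; the function name and structure are my own, not from any textbook):

```python
def longhand_sqrt(n: int) -> int:
    """Digit-by-digit (pencil-and-paper style) integer square root."""
    # Split n into base-100 "digit pairs", most significant first,
    # just like grouping digits in pairs on paper.
    pairs = []
    while n > 0:
        pairs.append(n % 100)
        n //= 100
    pairs.reverse()

    root, remainder = 0, 0
    for pair in pairs:
        remainder = remainder * 100 + pair
        # Find the largest digit x such that (20*root + x) * x <= remainder.
        x = 0
        while (20 * root + x + 1) * (x + 1) <= remainder:
            x += 1
        remainder -= (20 * root + x) * x
        root = root * 10 + x
    return root

print(longhand_sqrt(152399025))  # 12345, since 12345**2 == 152399025
```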

Jeremy Silver's avatar

Calculators were always fine... once students had learned the underlying math, like long division. Today, many, many students skip the work with generative AI, learning little and understanding less.

Rustbelt Andy's avatar

It may be as simple as a deep fear of a radical new future for which people feel vastly underprepared. How many people have had the training, the social capital, and the experience base to own AI, versus feeling "owned" by it? It's the biggest threat to status in decades, if not centuries, and that's saying something, considering the status volatility we have endured over the last 30 years. The rest is just rationalization.

Ethics Gradient's avatar

This post seems to engage with a lot of weakman arguments (e.g., the water-use thing) and also doesn't extrapolate very well into the future as far as capabilities go. The most general objection is that humans will have no value to (nor any chance of competing with or fighting against) a species that is cognitively superior to them, that will be handed the reins of economic production at the earliest opportunity because it is a capitalist imperative to do so, and that will be optimizing for goals that will, to a certainty, result in human disempowerment (in an implausibly good outcome) or extinction (the default) absent provable guarantees to the contrary.

"AI needs to be cross-checked before you can believe 100% in whatever it tells you. Infallible omniscience is still beyond the reach of modern engineering." -- This is literally what every lab is trying to use modern engineering to achieve, and they all think they can do it and show no signs of hitting any walls.

Why would your little robot friend not be your little robot overlord if they're better at everything than you are? Why would you have a job if an AI can do it better, or possessions, or oxygen, or your constituent atoms if they could be put to better use towards whatever a superintelligence wants to do?

Alex S's avatar

> Why would you have a job if an AI can do it better

Comparative advantage. You don't have your job because you're the best in the world at it. The people who would be better at it are merely doing something else.
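To make the logic concrete, here's a toy sketch with made-up numbers (nothing here reflects real productivity figures; the tasks and values are purely illustrative):

```python
# Toy Ricardo-style example: the AI is absolutely better at BOTH tasks,
# yet total output is maximized by assigning the human to the task where
# their relative disadvantage is smallest.

ai    = {"coding": 100, "support": 50}  # hypothetical units per hour
human = {"coding": 2,   "support": 10}

# Opportunity cost of producing one unit of support,
# measured in units of coding forgone:
ai_opp_cost    = ai["coding"] / ai["support"]        # 2.0
human_opp_cost = human["coding"] / human["support"]  # 0.2

# The human gives up far less coding per unit of support, so even though
# the AI is 5x better at support in absolute terms, the output-maximizing
# split is: AI does the coding, human does the support.
assert human_opp_cost < ai_opp_cost
print("Human has comparative advantage in support:",
      human_opp_cost, "<", ai_opp_cost)
```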

Ethics Gradient's avatar

The comparative advantage argument for AI is extremely unconvincing. How many horses do you see employed today compared to 100 years ago? You're talking about a technology that's trivially replicable and arbitrarily scalable and crossing your fingers that the cost of compute is higher than the cost of upkeep + transaction costs of some task being done by a human rather than a machine, all while the resources and energy that go into said human's upkeep are presumptively rivalrous with higher-margin AI activities.

Alex S's avatar

Horses aren't employees. They don't get paid and they don't make any efforts to stay employed.

Ethics Gradient's avatar

Why would that bear on the desirability of exploiting them for such comparative advantage as they provide?

Alex S's avatar

Because the horses didn't want to be employed in the first place, they made no efforts to negotiate or change job duties to remain employed. They just wanted to do horse things.

Also, because horses can't talk, they are unable to advocate for themselves and tell anyone what their comparative advantages are.

Btw, my friends have all gotten into horse racing recently because there's a popular anime about it. So that's increased horse employment again.

Ethics Gradient's avatar

The horses' comparative advantage *is* "doing horse things" (coerced, generally), including pulling objects or people. Whether it's voluntary for them is neither here nor there. The point is that we didn't just move all the horses to arbitrarily lower marginal-product uses as engines became more prevalent (as the comparative advantage theory would stipulate); instead we hit a point at which horses were no longer utile because we had better uses for the inputs necessary to sustain and exploit them, and poof, off to the glue factory with all the horses.

VillageGuy's avatar

Josh Marshall of Talking Points Memo suggested Friday that the deep unpopularity of AI comes in part from the fact that it has become a symbol “of a society in which all the big decisions get made by the tech lords, for their own benefit and for a future society that doesn’t really seem to have a place for most of the rest of us.” Reported by Heather Cox Richardson

Noah Smith's avatar

It's an idea worth following up on

Spugpow's avatar

What's crazy is that AI could easily be a democracy enabler if the government got its act together. Its strength is supposed to be synthesizing massive amounts of unstructured data, which could easily include the opinions and concerns of citizens in a democratic polity.

Buzen's avatar

Which big decisions are made by tech lords? Or do you include Trump, Xi and Putin as tech lords?

VillageGuy's avatar

Any of the Silicon Valley billionaires who contributed to Trump are tech lords in popular opinion. All of them despise democracy and have unfortunate fascist tendencies. The more we buy into AI, the stronger they become.

Arnold Kling's avatar

I think that deep down the fear of AI is a fear that we will become non-player characters in the game of life, while only Elon and Sam and Satya matter. The potential for concentration of power in the hands of a few strikes me as frightening. As for the technology per se, I agree that the fears are overstated. But they might be a way for people to try to deal with that deep-down fear.

Jason S.'s avatar

I think that the pace of change might be a factor. For many years people have been trying to tell us that things are changing too fast. Now we're foisting potentially the most disruptive technology we've ever seen on them.

As for the discrepancy by country, maybe non-Americans have greater trust that their interests will be guarded as the AI era unfolds. Certainly Europe has been taking a go-slow approach overall.

Just speculating here.