The droids in Star Wars are literal slaves. Like, that is their purpose. We meet R2-D2 and C-3PO when they are sold by the Jawas to Luke's uncle.
I am being a bit facetious here, but not pointing that out in the post seems like an oversight.
Also, this post spends most of its energy on the false belief about AI's water use. There are far more substantive things to criticize about AI, but the post hides behind the water issue to avoid engaging with any of them in depth.
I don’t hate AI. I do hate that companies, mine included, are giving mandates to incorporate it into work without actually thinking it through. I hate the tech bro hype surrounding it. I hate that the capital class will try to use it to immiserate everyone else and increase their own wealth.
If it crashed and took all of their wealth and knocked tech bros down several pegs, I’d crack a smile to be honest.
If you are in America, the only person who has immiserated you is yourself.
The image at the top ironically captures why most of us are more skeptical of AI than you seem to be. The ability to forge relationships is based on a degree of trust and privacy, and while Luke and C-3PO enjoyed both, you get neither with any modern AI tool I'm aware of. My concerns are less about the technology itself (which I use!) than about the motivations of the companies deploying it (which is why I use it sparingly).
Spot on. I think the AI fear is as much based on distrust of Meta, etc. (given their well-documented willingness to harm users for cash) as it is on the actual technology.
I’m a software developer. I like AI coding agents in principle, but I haven’t had a good experience with them:
* I frequently get stupid code suggestions that break my flow and slow me down
* management has upped my workload because “AI will do it”
* junior devs and offshore teams merge code they know doesn’t work and they blame ChatGPT
* at my last job the AI policy document declared us “AI First,” telling us to use AI to write code and summarize long email chains, but also warned: “don’t put proprietary info into cloud-based AI systems.” Every AI tool we had was cloud-based, and our code base and internal discussions are proprietary. When I pointed this out to my boss, who wrote the policy, he gave me a blank stare and walked away
I just want the hype to die down and the expectations to get realistic, but I worry that would cause a recession.
While the majority of the anti-AI canon may indeed be nonsense, this is sort of a fallacy fallacy. In the next 50 years AI will likely upend what it means to be human; many people quite like humanity and are (rightfully!) suspicious of big tech smuggling in a trans/post-human era. I think it's naive to imagine some Jetsons-style amplification of current paradigms. We may end up in a new paradigm that works for us, but there is no sense in just letting it run loose, unobserved, pissing away our humanity.
I think it's relevant that your experience of finding interactions with LLMs pleasant is, at the very least, not overwhelmingly dominant. Anecdotally, a lot of people find the "LLM voice" grating. That seems like it might be a factor in the lack of public enthusiasm for the technology.
This is my issue. The LLMs that we're currently calling "AI" are, in my opinion, really just semi-autonomous search engines that present their results in a particularly irritating way. There is nothing remotely appealing to me about using them.
That's right. The only nice thing about the AI summaries is that they give you the results that Google *used* to give you before they threw in 10 sponsored results in front of the stuff you're really looking for.
I'm not a Luddite -- I like and/or use nearly all the technologies in your list (mRNA vaccines, electric cars, self-driving cars, smartphones, nuclear power, and solar and wind power), with social media being the exception. But there are real problems with LLMs that you gloss over when you focus on 'water usage.'
1 -- LLMs work via wholesale IP theft. Nearly everything produced by LLMs is a derived work of copyrighted material. AI enthusiasts claim that these uses are 'fair use,' and compare LLMs to a human being who is just learning by reading. But if you look at the legal criteria for fair use, there are four factors, and LLMs score poorly on three of them: the purpose and character of the use (commercial rather than educational), the amount used (more is worse, and AI uses everything), and the effect on the potential market. One person reading a few dozen books doesn't remotely compare to what AI does today.
In other words, LLMs are a way for the wealthiest people in the country to steal the intellectual property of untold millions.
2 -- LLMs are remarkably unreliable. Some examples: I asked Gemini whether Kamala Harris had ever used the phrase "pregnant people" in her speeches, and it assured me that she had and gave links to several articles, in none of which had she done so. Usually, Harris was mentioned in an article that quoted someone else talking about "pregnant people," and in a few she wasn't mentioned at all.
A friend of mine works at MSFT, and told me that he found the programming assistance useless, but pretty good at summarizing trace logs. So I figured I'd try out ChatGPT by uploading my Vanguard OfxDirect.csv file and asking it what percentage of my holdings were in NVDA, MSFT, GOOG, FB, and ORCL. Since that csv file had both balance summaries and transaction information, ChatGPT couldn't make sense of it (even though all the columns are labeled). I edited out the transaction info and uploaded it again, but ChatGPT said I was out of tokens for analytics, so I tried Gemini, which did a plausible job.
I then asked it how much of my holdings were in cash, and its answer was high by more than 10X. To debug it, I asked it for the largest holding, and it had apparently made up, out of whole cloth, a money market balance entry for $5M! I told it that it was wrong and asked why it made that up; it replied that I must have uploaded two versions of the file, one of which contained the $5M entry. Which was completely false.
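For what it's worth, this is the kind of question a few lines of pandas answer deterministically. A minimal sketch, assuming a cleaned-up holdings file with hypothetical SYMBOL and MARKET_VALUE columns (not Vanguard's actual export layout):

```python
import pandas as pd

# Hypothetical simplified holdings file, not Vanguard's real OfxDirect.csv
# layout: one row per position, with "SYMBOL" and "MARKET_VALUE" columns.
holdings = pd.read_csv("holdings.csv")

tickers = ["NVDA", "MSFT", "GOOG", "FB", "ORCL"]
total = holdings["MARKET_VALUE"].sum()
subset = holdings[holdings["SYMBOL"].isin(tickers)]

# Per-ticker and combined share of the portfolio.
for _, row in subset.iterrows():
    print(f"{row['SYMBOL']}: {row['MARKET_VALUE'] / total:.1%}")
print(f"Combined: {subset['MARKET_VALUE'].sum() / total:.1%}")
```

A script like this can be wrong, but it can't invent a $5M money market entry out of thin air.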
3 -- By driving up energy prices, LLM data centers are essentially forcing all of us retail consumers to prop up the AI industry.
4 -- I think there's general concern that LLMs will be used as a very imperfect gatekeeper: screening out resumes before a human ever sees them, 'handling' customer complaints, and generally wasting more of people's time as they try to get through to a human at a company to resolve an issue.
I don't think LLMs have really affected the job market much, and I'm not sure they will. I think the impact of LLMs will be similar to that of spreadsheets -- a tool that can do some simple jobs faster than a human. While there's some anecdotal evidence that some tech companies have reduced hiring, I think that's mostly a correction from overhiring during the pandemic.
Two judges have already ruled it to be fair use on summary judgment, so your analysis is way off the mark.
AI can make you lazy. I have Wikipedia for 95% of the questions I might need answered. I use my brain when I sit down to write. I do use spell check. I keep it simple and don't need a robot friend as my wife is my best friend! 😀
It may be as simple as a deep fear of a radical new future, for which people feel vastly underprepared. How many people have had the training, the social capital, and the experience base to own AI versus feel “owned” by it? It is the biggest threat to status in decades, if not centuries, and that’s saying something, considering the status volatility we have endured over the last 30 years. The rest is just rationalizations.
I think Rustbelt Andy has the correct take on it: "the biggest threat to status in decades." Since the USA has pretty much the highest status in the world, it's no wonder that other nations want to adopt AI to compete.
Another major threat is how easy it is for owners to manipulate what AI presents: witness Grok. This is an open invitation to authoritarian control. I'm sure the Chinese Communist Party will use AI as a combined propaganda and surveillance tool. There are no checks and balances on the results from AI. There is also a risk of tailoring results to individuals without their knowledge, automating grooming for commercial, political, or sexual purposes, much as we already see with the various algorithms behind TikTok, X, etc., except more all-encompassing and subtle.
And the "persona" of AI so far resembles Wormtongue, for those of us who like LOTR.
I'd say that finding answers to questions requires less searching with AI, which is a plus, but I always check the top Google results to confirm. When possible, I use Wikipedia instead, because it has been edited firmly enough to exclude a lot of the bullshit that infests the internet. And I know that AI tends toward delusions of adequacy in very specialized fields with few publications, and whenever analysis is required to answer a question. In my own field of expertise, wasps, I can ask for the differences between two genera and will get a table of results that lists identical characters as "different," not to mention hallucinated differences.
Something I read this morning that felt true:
Josh Marshall of Talking Points Memo suggested Friday that the deep unpopularity of AI comes in part from the fact that it has become a symbol “of a society in which all the big decisions get made by the tech lords, for their own benefit and for a future society that doesn’t really seem to have a place for most of the rest of us.”
Check back when AI takes over this Substack
AI is probably going to be pretty great until someone builds one that's smart enough to kill everyone. Then, not too long afterwards, everyone dies.
https://ifanyonebuildsit.com/
That's the way I use AI. I was reading up to comment on Stablecoin
https://thomaslhutcheson.substack.com/p/stablecoin-and-the-fed-balance-sheet
and came across the phrase "skinny account." So I asked ChatGPT and got a useful answer.
We had a very small Thanksgiving and a whole 9" apple pie would be too much, so I asked ChatGPT for a version suitable for a 6" plate. Worked fine.
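For anyone checking the arithmetic: assuming the 6" plate is about as deep as the 9" one, ingredient quantities scale with the plate's area, i.e. with the square of the diameter ratio:

$$\left(\frac{6}{9}\right)^2 = \frac{4}{9} \approx 0.44$$

so roughly 45% of each ingredient.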
Yes, technology does suppress old useful skills. No one should miss the slide rule. I do see a (sad) decline in the ability to do rough mental arithmetic approximations -- how long would it take a friend to bicycle between two cities? -- but is that really a worry?
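The kind of rough approximation I mean, with made-up numbers: if the cities are about 45 miles apart and a casual cyclist averages 12 mph, then

$$\frac{45\ \text{mi}}{12\ \text{mi/h}} \approx 3.75\ \text{h},$$

call it a bit under four hours.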
I miss my Pickett slide rule!
Oh, and another major threat is very similar to the one posed by phones: the subversion of children's education. I think this is reaching a crisis point, though I don't have any data.
Yes, this is my exact problem with AI. Students use it as a crutch so that they don't have to think critically. Learning is hard work and sometimes frustrating, but if you offload that effort to an LLM, you're not actually learning anything. I feel this is a huge problem, roughly on par with smartphone use in class.
I think some of the AI antagonism is the result of the polarizing decade the United States has had since 2016. Society has seemingly become more fragile due to politics, social media, and growing economic inequality. There is ambient anxiety in the air, and the future is becoming less predictable every year. AI feels like the ultimate uncertainty curveball at a time when most Americans yearn for any kind of stability.