82 Comments

The article assumes that AI can replace workers just by introducing some "clever" IT processing, which is what current AI can do, but not much else. In fact, what is called "AI" today is simply a better way of solving certain programming problems that are not tractable with traditional programming. Image processing, for example, provides the ability to classify and identify images, but the final decision must be human, as the case of radiologists proves: they use "AI" to analyze images, but judge the outcome and re-check based on their experience. The result: they can serve more people, allocate more time per case, and allocate time to identifying new phenomena that were not part of the trained model, which is outside the abilities of the AI solution. This will be the case in most areas: workers will be able to allocate more time to thinking, attention, and care, and less to robotic tasks (think of customer support being able to listen instead of searching through mountains of documentation). The current wave of AI, mostly bringing NLP and machine vision to the forefront, is just correcting a decade-long deterioration in the quality of traditional solutions: bad apps, unreliable backends, etc. We need it; we are tired of low-quality software. Nothing to be afraid of.
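A minimal sketch of that human-in-the-loop pattern, in Python. Everything here is an assumption for illustration: the classifier is a stub, and the 0.8 review threshold is invented, not taken from any real radiology workflow.

```python
# Hypothetical human-in-the-loop triage: a model classifies images, but
# low-confidence cases are routed to a human expert for final judgment.
# All names, labels, and thresholds are illustrative.

def classify(image_score):
    """Stand-in for a trained classifier: returns (label, confidence)."""
    label = "abnormal" if image_score > 0.5 else "normal"
    confidence = abs(image_score - 0.5) * 2  # 0.0 (unsure) .. 1.0 (sure)
    return label, confidence

def triage(image_scores, review_threshold=0.8):
    """Accept confident predictions; queue the rest for human review."""
    auto_accepted, human_queue = [], []
    for score in image_scores:
        label, confidence = classify(score)
        if confidence >= review_threshold:
            auto_accepted.append(label)
        else:
            human_queue.append(score)  # the radiologist re-checks these
    return auto_accepted, human_queue

accepted, queue = triage([0.95, 0.55, 0.05, 0.62])
# 0.95 and 0.05 are confident calls; 0.55 and 0.62 go to the human queue.
```

The point of the design is the routing: the model handles the confident bulk, and the radiologist's time is concentrated on the ambiguous cases.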

But AI could go much deeper than that. Part of its allure is seeing patterns that a human mind is incapable of seeing, either because the breadth of the information leading to the pattern is beyond the human brain's capacity to grasp or because the length of the pattern is beyond our attention span.

I think Noah's vision here is a bit too limited in simply applying AI to create better solutions to existing problems we can see. One of the things he mentions is, "Electric lighting would disrupt the jobs of people who produced whale oil for lighting," but he doesn't mention that if we had done that, maybe we'd have healthier whale populations right now; or that the fishing industry would suffer, because we basically compete with whales (who eat only krill, which feed and build fish stocks on a non-industrial scale but still consume too much for our overpopulated species); or that there is a moral component to killing whales at all in order to light our homes. But "industry" has to ignore certain moralities in order to function like, well, a machine. Or at least up to this point we've chosen to allow that in deference to human desire.

But we now know not only when but in what specific ways to curb or redirect human desire. Facebook capitalizes on addiction science to keep people engaged, for instance. Record labels and the movie industry created and maintain "stardom" to create economic "need" and shape discourse. Evangelists of all types twist the yearning for truth into economic and political power. AI is the tool that could look into us and shape us back, were it powerful enough to get there, much as the knife and arrow allowed us to become full predators, or the hammer and chisel allowed us to live in new environments previously inaccessible.

AI is a basic tool that requires non-basic technologies to function (just as metallurgy created better hammers, extreme lithography creates better circuits). Sure, it could just kill us all more efficiently with drones if we allow the military to program it simplistically. Sure, it could just increase production of stuff we may or may not need that generally benefits the owners of said production, and we'd still have to correct through government action (if possible; AI could overthrow governments by generating social video that would get only certain people elected, etc.). But AI could also curb impulses and find paths away from overpopulation and environmental destruction, and possibly even allow us to see meaning and truth hidden in patterns too large for us to truly grasp without an interface for that larger data set (which its progenitors do right now in places like CERN or Arecibo).

Asimov was right: what we need to do is program in basic morals now, before AI gets skewed and manipulated into the narrow and limited vision of corporate, military, or political groups. Of course the problem is, how do you tackle morality in 1s and 0s? Before you pooh-pooh that idea, I would remind you that four chemicals came together to create DNA, which created humans, who now have both consciousness and morals. It's possible, but it isn't a shallow problem, and we might need AI's own help in achieving it. I think we need to start by thinking of AI more as a partner than as a dumb tool.

Everything you say here is on point, Chaim. But I think we are missing the real issue, and that is the psychology of having the illusion that we know better about the future. As Noah points out, nobody has a crystal ball. I lived this in the energy industry with PURPA, when state commissions were absolutely sure that oil prices would keep rising in the wake of the 1970s oil crises and banked on this to come up with horribly expensive policies that led to utilities going into Chapter 11 and eventually begat wholesale and retail competitive markets in power. It was a disaster in which regulators picked winners and losers, to the detriment of the utilities themselves and of their customers in terms of high prices. The point with any new technology is that it may benefit the economy in ways we cannot yet fathom, and also may hurt some sectors in ways we have not thought about. AI is no different. The radiologist example is great, and it is not the only one. Until we confront our hubris, believing we can control innovation, shape it, and predict the future with enough certainty to pick winners and losers is a disastrous course.

I agree that generative AI is going to develop faster than initially anticipated, as Sam Altman, who I believe is an honest guy, states in recent interviews. However, the role of these capabilities in our daily jobs is far from understood, mainly due to humanity's ability to adapt and change as these innovations are introduced. As I deal with business writing daily and see all the promises of prompts leading the future of this business, I was amused yesterday thinking that while most marketers are busy studying the next killer prompt, the highest-paying jobs will go to those who can still craft a blog post by themselves... these will become the "Rolex" of business writing while the rest fill the shelves with cheap digital copies...

This is correct until the models become sufficiently good at predicting their own usefulness (or until a generalizable meta model is developed that can predict whether a human being would consider AI output satisfactory). I think the assumption that current developmental constraints will continue to be constraints is flawed. The core AI software is already sound, and we're only now starting to see what will inevitably be billions of dollars invested into scaling it out into more use cases than any of us can currently think of.
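To make that "meta model" idea concrete, here is a hedged sketch of such a loop: a generator proposes outputs, and a second model predicts whether a human would accept each one, retrying until a bar is cleared. Both models are random stubs, and every name and parameter here is an illustrative assumption, not a real API.

```python
import random

def generate(prompt, rng):
    # Stand-in for a generative model producing a candidate output.
    return f"{prompt} :: draft-{rng.randint(0, 999)}"

def predicted_satisfaction(output, rng):
    # Stand-in for a meta model scoring predicted human satisfaction (0..1).
    return rng.random()

def generate_until_acceptable(prompt, bar=0.9, max_tries=5, seed=0):
    """Keep sampling until the meta model predicts the output is good enough,
    then return the best candidate seen along with its predicted score."""
    rng = random.Random(seed)
    best, best_score = None, -1.0
    for _ in range(max_tries):
        out = generate(prompt, rng)
        score = predicted_satisfaction(out, rng)
        if score > best_score:
            best, best_score = out, score
        if best_score >= bar:
            break  # no human check needed once the predictor is trusted
    return best, best_score

draft, score = generate_until_acceptable("quarterly summary")
```

Once a predictor like this is reliable, the human-judgment bottleneck the comment describes stops being a constraint on scaling.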

Jun 11, 2023 · Liked by Noah Smith

Yes! Yes! Yes! We should want and encourage automation to automate people out of jobs, particularly jobs that suck. AND we need to create institutions that ensure people still thrive and can pursue activities that can lead to a better world without worrying if they can have food on their tables.

How do we define crap jobs here? If radiology is under threat from AI in a few years, does it count as a crap job?

Go back a few years, to when Fox News ran one of those segments: "Look at Biden's America! Pandemic stimulus checks have people sitting at home, and now your fast-food cashier has been replaced by an iPad!"

Fast-food cashiers took to Twitter and said, "Take my job, please!" The workers said cashiering is the absolute worst part of the generally awful fast-food work environment. They relayed anecdotes from their typical shifts: abuse from customers, people trying to cheat payment, dealing with homeless and mentally ill people, armed robberies, and, among almost all the women, sexual harassment from customers and co-workers alike.

Did anybody notice the second sentence, with the question about radiology?

Anyway, cashiers were already under threat long before the recent AI, and not really from generative AI. The threats now are to the kinds of jobs not usually considered "crap."

Sometimes even the best AI isn't all it's cracked up to be. In my LinkedIn circles, I saw a meme circulating that said, "The definition of irony: Grammarly is hiring a human copy editor," complete with the LinkedIn job listing.

Grammarly can be thought of as the AI equivalent of a human copy editor; that's how Grammarly sells itself. It's also what the hedge funds that now control 60% of the U.S. daily newspaper market want, to keep their papers' lights on. Yet Grammarly created a position for a human copy editor, perhaps because, even for as great a product as it is, humans are still needed for even automatable tasks.

I did see the comment but I am unfamiliar with the radiology issue. I don't know how it will happen.

When a labor-eliminating technology comes along, the pattern has been to wring the jobs out of the organization rather than suddenly leave workers idle.

The technology gets introduced, and the person whose job it impacts will be the primary user. They will usually stay in the job until they leave, retire or are cut during a broader reduction in workforce. The position is then eliminated through attrition.

By crap jobs, I mean jobs that people don't really want to do, i.e., jobs they feel they have to do to survive rather than to enrich (in non-monetary ways) themselves, the people around them, and the world in general. I suspect that some form of doctor/healer/therapist assisted by AI and technology may be one of the callings people will have that is non-crap.

Yes, pattern recognition or classification jobs should be the first jobs outsourced to computers.

Is it a crap job though?

It's crap if the job is a combination of low compensation, low dignity, low autonomy and no marketable skills (IOW, a deskilled job).

I suppose arguing by rhetorical question (I thought) isn’t working.

Here’s what the article said about radiology: “That includes high offers for new radiologists (radiologists earn the fifth highest starting salaries of all specialties) and signing bonuses of $10,000 to $50,000 according to the Merritt Hawkins report. Radiologists are also receiving ‘stay bonuses’ and pay increases from their current employers in an effort to increase retention.”

In other words, it’s not a crap job in any sense. Noah did say that the people who thought radiologists would be out of a job by now were wrong, but the future isn’t guaranteed.

So modern AI is potentially threatening to non-crap jobs, unless you retrospectively redefine jobs that are under threat as crap.

Yes, we can accomplish anything, solve anything, resolve anything if we collectively browbeat the damn problem into submission by using intelligent choices and reason - not profit metrics, dividends and annual bonus pools as driving forces for change.

Jun 11, 2023·edited Jun 11, 2023

"dockworkers’ unions in the U.S. are opposed to the automation of ports. The same was true in the Netherlands, where Rotterdam has been trying to build the world’s most automated port. But eventually union fears were quieted, and arrangements were made to ensure that dockworkers would still have work. A more harmonious relationship between business, government, and labor enabled unions to — reluctantly, warily — take the long view."

I saw your comment to Scott Lincecum, whom I read at The Dispatch. But Scott has a very narrow field of view. He ascribes the US port problem solely to the "always bad" labor unions, which is ridiculous, as it takes owners, at minimum, for a union to exist.

Owners, unions, and government, as you nicely point out, have actually created a very long-term working relationship in the Netherlands and Germany.

But Scott ignores these. Problem-solving by establishing the boundaries to fit a philosophical POV is just whackery. Scott also has apparently never been to Los Angeles, driving around Sepulveda Blvd et al.

I'd wager you could invest $5 billion in AI and automation in the LA ports and achieve a 100% throughput gain. But watch it work two days a week, as the ground and rail transport out of there would choke throughput gains to 5%.
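That 5% intuition is just a minimum over stage capacities, which a toy calculation makes explicit. All numbers below are invented for illustration.

```python
# Toy model of the bottleneck point: end-to-end throughput of a serial
# pipeline is the capacity of its slowest stage, so speeding up a
# non-bottleneck stage barely moves the total.

def pipeline_throughput(stage_capacities):
    """Containers/day a serial pipeline moves end to end: its slowest stage."""
    return min(stage_capacities)

# Hypothetical port: [dock unloading, ground/rail transport out of the port]
before = pipeline_throughput([10_000, 5_000])  # transport is the bottleneck
after = pipeline_throughput([20_000, 5_250])   # double the docks, +5% transport

gain = (after - before) / before  # throughput rises only 5%
```

Doubling the non-bottleneck stage (the docks) leaves throughput pinned to ground and rail transport, so the measured gain tracks the small transport improvement, not the big dock investment.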

Indeed - improving efficiency in a part of a system that isn't a bottleneck doesn't help very much!

A notable thing about Geoffrey Hinton is that people keep giving him prizes and calling him the father of AI, even though nothing he's invented has ever been useful, and the one thing he is popular for (backpropagation) wasn't personally invented by him and was actually discovered earlier by other people his group didn't cite.

(There's a joke about most ML research being rediscoveries of things Schmidhuber discovered in the 90s and just not citing him. The thing is, this is actually true.)

He mostly gets credit for sticking with neural networks until computers got fast enough to make them practical, when everyone else was switching to SVMs etc., and thus he was around to provide a pipeline of students at the appropriate time. I feel like every six months, though, there’s a post on Hacker News saying “Hinton reinvents neural networks AGAIN,” and it makes me roll my eyes. How many more rounds of Capsule Networks or whatever until we realize he has no particular monopoly on insight?

Not a great joke, either way.

AI will be shaped to make money.

Yup. That's been the nature of the internet since <snark>Al Gore invented it.</snark>

There are really only two ways to generate value on the internet: sell advertising or charge fees.

And we are all worse off for their having chosen the business model of selling personal info and adverts rather than subscriptions.

If we are paying, we are the customer. If we are not paying, we are the product.

Noah, What is it about Acemoglu that you dislike so much? Did you read Acemoglu and Johnson’s book “Power and Progress”? It seems you haven’t because in it they are definitely not pushing what you suggest they are. My reading of their book is that they push for a change in our culture’s value system, with BOTH regulations that encourage companies not to use AI to sell products (e.g. Google, FB, …) AND “empower and enable workforces to be more productive” by e.g. allowing industry-wide trade unions to counter capitalists. The statistics you use tend not to separate out the differences between the 0.1%, the next 9.9%, and the bottom 90%. (e.g. See https://econpapers.repec.org/article/oupqjecon/v_3a131_3ay_3a2016_3ai_3a2_3ap_3a519-578..htm or https://www.theatlantic.com/magazine/archive/2018/06/the-birth-of-a-new-american-aristocracy/559130/ ) Forget about the “1%,” which likely includes the radiologists you discuss. We have a 3-class system in the US. You and I (and probably most of your readers) fall into the 9.9%--the educated elites, who are doing fine, not the 90% who have been falling behind. (I want to see “GDP per capita” and “labor share of income” to be differentiated for each of these three classes.)

Elites lol

I get that in the last 10 years, elite has come to mean anyone without the felt experience of status anxiety, but one way to measure elitism is to ask a person in this group to perform a demonstration of power.

What’s amazing is that you can write a brilliant piece on the necessity for the welfare state and labor unions and not once mention the implacable Republican opposition that is sure to completely stymie any such policies.

The autonomous systems vs human augmentation debate is actually a very old one in the AI community, but it is usually framed as a debate about what to emphasize in the design of AI systems, and not economic policy. It is, I think, a useful and important debate about design strategies. Current AI systems are designed: the developer makes decisions about the selection of training data, architecture, objective functions used in optimization, and how trained functions connect to other programming such as user interfaces.

You are right that economic policy is much better off not reaching down and making detailed design and product decisions. It is unlikely to do much good. But it is also a very good idea for AI and software designers to focus on augmenting human abilities rather than replacing human workers. Design that empathizes with humans and leaves them with room for independent decision making is really much better design.

I only skimmed the article after searching whether the term "open source" was used (it isn't). I think this is a huge oversight in the current AI conversation. Right now it's still kind of theoretically possible to impose a regulatory framework on Google, OpenAI, etc. that could change how AI is utilized. But once open source competitors are widely available and at least "good enough" compared to paid corporate competitors, this possibility essentially evaporates.

In the image generation world, Stable Diffusion has already passed the threshold of being "good enough". You can see the models people are making (hint: it's 99% porn): https://civitai.com/ Large Language Models will get there soon enough (check out Vicuna if you're interested). Then I expect video generation tools will be next.

I know this post is mainly about implementing AI tools in the "real economy," but if AI steering can't be effectively imposed at the model level, anyone in the free market can come up with alternatives that compete with the AI-steering contracts adopted by corporations. Unions might be able to wrangle control through contracts in specific chokepoint industries like dockworking, but this won't be replicable generally. An ad agency pledges to use real artists? A competitor will be more than happy to lower its labor costs by $100,000 per artist not needed. Every movie studio pledges not to use AI in scriptwriting or special effects? Well, cue the VCs funding a new studio with no such ethical qualms to make a big-budget movie on the cheap.

I don't see myself as a fatalist or doomsayer on AI. But I do think we should be serious about how much impact this will have on the economy and our day-to-day lives within a matter of years (and yes, I'm 100% supportive of UBI or a broader welfare state). I'm not predicting mass unemployment, but I think it's inevitable that AI will displace labor to some degree, and we simply don't know right now whether the economy will generate new jobs for these people (and that's before you get to questions about whether the new jobs will pay less and increase inequality). We've already seen how toxic deindustrialization has been to US and European politics. Modernity is now coming for the knowledge workers too.

I appreciate your thoughts on keeping restraints on innovation minimal, this is good for innovation and we need much more innovation in our world. My counter-concern is that innovation (and in this particular case, AI) is moving so fast, that I'm not sure humans and their social structures have enough time to evolve and adapt to new tech and solutions without psychological and biological impacts. I agree our models of the future are very weak (think climate change or war), but I dislike plunging blindly ahead without thoughtfulness on ethical and social impacts. How we balance optimism and caution about the future seems to be a good question.

This is great! But I'd like to make a request.

When progressive bloggers write about Universal Basic Income, can they please make sure always to remind their readers that cash child benefits are a form of UBI, specifically UBI for families with kids?

I think this is worth emphasizing for a couple of reasons. Alleviating child poverty should be a higher priority than alleviating poverty among adults, so even if you support UBI for all ages a cash child benefit is the best way to get started. And because it covers fewer people and is cheaper, it's more feasible than introducing full UBI in one fell swoop.

It was so disappointing to see young leftists campaigning online for student debt relief and ignoring the fact that there was a brief window to make the COVID child benefit permanent. If people had started describing the child benefit as the first stage of UBI--maybe not to all audiences, but to left-wing audiences--I think some of those people would have been on board.

Jun 11, 2023·edited Jun 11, 2023

But aren't the young leftists anti-natalist? Too many people burdening the planet and all that...

Generally the conservatives are having more children, but they are more skeptical of government handouts, so child cash benefits fall through the cracks. Sorry.

I don't think young leftists are anti-natalist at all. My data set is limited but the ones I see online often say that anti-natalism is "eugenics". They may not always want children themselves, but that's a separate question.

I wonder if you could improve high school graduation and achievement rates in poorly performing schools by literally paying teenagers and their families for good grades, so they wouldn't need after school jobs during the school year? (We already do this somewhat with academic scholarships...)

Bad idea. Just like the phrase "information wants to be free", incentives want to be perverted. Start with Campbell's law: "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." Then Goodhart's law: "When a measure becomes a target, it ceases to be a good measure."

Under such a regime, cheating from teachers, students, and parents alike is inevitable. The same applies if the regime offered disincentives and penalties: cheating, plus shirking.

I'm reminded of Player Piano by Vonnegut. Welfare can replace income, but it can't replace meaning. Like it or not, the vast majority of Western men derive their sense of spiritual meaning from their ability to produce economically useful things.

Trying to direct AI is a losing path. Figuring out how to distribute the bounty that may come when most human needs can be met without human labor is certainly important, but it's less than half the battle.

It's very reasonable to conclude that:

1. owners of capital will be the chief early beneficiaries as those with cash roll out AI improvements every time they're available, creating a flywheel effect

2. work as we know it is likely to be disrupted in ways we cannot imagine, but that doesn't mean a net loss longer term... but short term, not so great for most workers as they have to deal with some chaos

The more I consider AI's relationship to work (I've been writing about this for 15 years now!), the more I come to the conclusion that something like UBI will be necessary in a postmodern AI world.

Based on the past, it's easier to imagine ways of compensating and retraining workers than to actually do it.

I would add that based on the past, European labor unions don’t seem to have helped create more or better technological progress. The large, important tech companies are almost all formed and thriving in the US. Similarly, the risk of UBI is that it becomes a trap for lower productivity individuals, chaining them to government cheese and undermining their incentives to develop themselves and their economic contributions to fellow humans. The old argument of the balance between safety nets (good) and hammocks (bad).

That "old argument" is class-splaining. Poors lack standpoint to defend their interests or challenge stereotypes forced upon them.

Case in point: your choice of words "them" "chaining" "government cheese" "incentives" "themselves" "their" "contributions".

This is a vocabulary and a worldview from people in a position of power to enforce prejudices against people out of power. This is neither how poors describe themselves nor how they live their lives. Poors, like immigrants, cling hard to the myths of hard work -- one because they believe in it and two because work ethic is a way to sublimate the knowledge that the world has an awful system for the redistribution of luck.

I know of what I speak. I am a former poor. First generation American of immigrant parents. English is my second language. My parents and grandparents didn't leave me much but instilled in me something I see well-to-do self-help positivity types call "scarcity mindset." Apparently, scarcity mindset is bad and everyone should have an abundance mindset instead. Why do I have a scarcity mindset? Because I lived through scarcity. It does not go away.

I'm lucky in many ways, but not lucky enough to sail through life on mindset alone. I'm lucky enough, though, that I can pass for smart, educated, and cosmopolitan, and can travel in circles of people more wealthy and powerful than I. Much of what we (the non-poor) think about poverty reflects our imagination of poverty as a status marker.

What do poors think about? An absence of failure and judgment. It feels pretty good to be able to pay a bill on time. It feels pretty good to have a positive balance in a bank account. Heck, it feels good to *have* a bank account. It feels good when the bus gets you to work on time. It feels good not to have to borrow money from family or friends. It feels good to buy new clothes. What feels really great? Those moments in life when poors can live outside their sadness. For not-poors, being grateful for merely paying a bill on time is ... sad. All these moments are sad in that light, because it's life under constant judgment.

Poverty has both a material and psychological aspect. There's a lot to be said about the redistribution of non-judgment.

Another great article! I think you're absolutely right about forecasting the impact of technologies in any even remotely detailed way. In part that's because it's practically impossible to predict whether, when, and how distinct technologies will cross-fertilise to create new impactful technologies. I recall that on the eve of the launch of the iPhone, Orange's CEO was describing their mobile data revenues as 'rounding errors' and The Economist magazine was predicting that the future for mobile phones was a range of different handsets, each designed to perform a specific function. Then a person at Apple with a general design remit happened to run into someone in another department working on laptop touchpads, and the touchscreen was born. Once technologies (or indeed any variables) start to interact like this over a prolonged period of time, forecasting starts to resemble gambling.

My slight concern about the Dutch and German examples is whether this type of approach is 'available' in the English-speaking economies. Most Northern European countries (plus Japan and Korea) have vertically disciplined cultures which engender stable partnerships (between owners, management and workers or between links in the supply chain) and a long-term focus in business. But these features of the European 'social market' model don't seem to come as naturally to businesses in the 'Anglo-Saxon' economies - US, UK, Canada, Australia, New Zealand - which are more fluid, innovative and individualistic but less stable and less naturally distributive of wealth and income. That's why I think some form of government intervention may be required in countries like the UK and US. At least a readiness to intervene swiftly, retrospectively for the sake of maintaining a minimum level of social cohesion. Preferably something like Basic Income to strike the right balance between social welfare and the relatively fluid labour-force that appears to be a natural feature of Anglo-Saxon economies.

You’ve got that right in re fiduciary duty, rules against purchasing penny stocks, etc. The mutual fund industry has more charters than a booze-cruise boat in Cabo San Lucas. (Double entendre intended with ‘charters.’) Seriously, the number of hair-splitting questions from analysts on quarterly conference calls is brain-numbing. With some regularity, the same question will be asked two or three times (only the phrasing changes). One has to wonder if analysts are hard of hearing, have inferior phone/audio equipment, or are exercising their egos or trying to impress their bosses. Maybe there is a practical application for AI: replacing stock analysts?
