82 Comments

The article assumes that AI can replace workers just by introducing some "clever" IT processing, which is about all current AI can do. In fact, what is called "AI" today is simply a better way of solving certain programming problems that are not tractable with traditional programming. Image processing, for example, provides the ability to classify and identify images, but the final decision must be human, as the case of radiologists proves: they use "AI" to analyze images, but judge the outcome and re-check it based on their experience. The result: they can serve more people, allocate more time per case, and allocate time to identifying new phenomena not part of the trained model, which is outside the abilities of the AI solution. This will be the case in most areas: workers will be able to allocate more time to thinking, attention, and care, and less to robotic tasks (think of customer support, being able to listen instead of searching through mountains of documentation). The current wave of AI, mostly bringing NLP and machine vision to the forefront, is just correcting a decade-long deterioration in the quality of traditional solutions: bad apps, unreliable backends, etc. We need it; we are tired of low-quality software. Nothing to be afraid of.

Jun 11, 2023 · Liked by Noah Smith

Yes! Yes! Yes! We should want and encourage automation to automate people out of jobs, particularly jobs that suck. AND we need to create institutions that ensure people still thrive and can pursue activities that lead to a better world, without worrying about whether they can put food on their tables.

Jun 11, 2023 · edited Jun 11, 2023

"dockworkers’ unions in the U.S. are opposed to the automation of ports. The same was true in the Netherlands, where Rotterdam has been trying to build the world’s most automated port. But eventually union fears were quieted, and arrangements were made to ensure that dockworkers would still have work. A more harmonious relationship between business, government, and labor enabled unions to — reluctantly, warily — take the long view."

I saw your comment to Scott Lincicome, whom I read at The Dispatch. But Scott has a very narrow field of view. He ascribes the US port problem solely to the "always bad" labor unions, which is ridiculous, as you need owners to exist before you can have a union.

Owners, unions, and government, as you nicely point out, have actually created a very long-term working relationship in the Netherlands and Germany.

But Scott ignores these examples. Problem-solving by setting the boundaries so as to confirm a philosophical POV is just whackery. Scott has also apparently never been to Los Angeles, driving around Sepulveda Blvd et al.

I'd wager you could invest $5 billion in AI and automation in the LA ports and achieve a 100% throughput gain. But watch it work two days a week, as the ground and rail transport out of there will choke throughput gains to 5%.


A notable thing about Geoffrey Hinton is that people keep giving him prizes and calling him the father of AI even though nothing he's invented has ever been useful, and the one thing he is popular for (backpropagation) wasn't personally invented by him and was actually previously discovered by other people his group didn't cite.

(There's a joke about most ML research being rediscoveries of things Schmidhuber discovered in the 90s and just not citing him. The thing is, this is actually true.)


AI will be shaped to make money.


Noah, what is it about Acemoglu that you dislike so much? Did you read Acemoglu and Johnson’s book “Power and Progress”? It seems you haven’t, because in it they are definitely not pushing what you suggest they are. My reading of their book is that they push for a change in our culture’s value system, with BOTH regulations that encourage companies not to use AI merely to sell products (e.g. Google, FB, …) AND policies that “empower and enable workforces to be more productive,” e.g. by allowing industry-wide trade unions to counter capitalists. The statistics you use tend not to separate out the differences between the 0.1%, the next 9.9%, and the bottom 90%. (See https://econpapers.repec.org/article/oupqjecon/v_3a131_3ay_3a2016_3ai_3a2_3ap_3a519-578..htm or https://www.theatlantic.com/magazine/archive/2018/06/the-birth-of-a-new-american-aristocracy/559130/ ) Forget about the “1%,” which likely includes the radiologists you discuss. We have a 3-class system in the US. You and I (and probably most of your readers) fall into the 9.9%--the educated elites, who are doing fine--not the 90% who have been falling behind. (I want to see “GDP per capita” and “labor share of income” differentiated for each of these three classes.)


What’s amazing is that you can write a brilliant piece on the necessity for the welfare state and labor unions and not once mention the implacable Republican opposition that is sure to completely stymie any such policies.


The autonomous systems vs human augmentation debate is actually a very old one in the AI community, but it is usually framed as a debate about what to emphasize in the design of AI systems, and not economic policy. It is, I think, a useful and important debate about design strategies. Current AI systems are designed: the developer makes decisions about the selection of training data, architecture, objective functions used in optimization, and how trained functions connect to other programming such as user interfaces.
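The design decisions enumerated here (training data, architecture, objective function, optimization, and how the trained function connects to other programming such as user interfaces) can be made concrete with a toy sketch. This is purely illustrative and not drawn from the comment: the data, the single linear unit, and the `predict` interface are all hypothetical stand-ins for each choice point.

```python
# Illustrative sketch: every trained system embodies explicit designer choices.

# Choice 1: selection of training data (hypothetical (x, y) pairs, y = 2x + 1)
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

# Choice 2: architecture -- here, a single linear unit y = w*x + b
w, b = 0.0, 0.0

# Choice 3: objective function -- mean squared error over the chosen data
def mse(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# Choice 4: optimization -- plain gradient descent on the chosen objective
lr = 0.05
for _ in range(2000):
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w, b = w - lr * gw, b - lr * gb

# Choice 5: how the trained function connects to other programming,
# e.g. the user-facing interface that wraps it
def predict(x):
    return w * x + b
```

Change any one of the five choices and you get a different system with different failure modes, which is exactly why the augmentation-vs-replacement question is a design question before it is a policy question.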

You are right that economic policy is much better off not reaching down and making detailed design and product decisions. It is unlikely to do much good. But it is also a very good idea for AI and software designers to focus on augmenting human abilities rather than replacing human workers. Design that empathizes with humans and leaves them with room for independent decision making is really much better design.


I only skimmed the article after searching whether the term "open source" was used (it isn't). I think this is a huge oversight in the current AI conversation. Right now it's still kind of theoretically possible to impose a regulatory framework on Google, OpenAI, etc. that could change how AI is utilized. But once open source competitors are widely available and at least "good enough" compared to paid corporate competitors, this possibility essentially evaporates.

In the image generation world, Stable Diffusion has already passed the threshold of being "good enough". You can see the models people are making (hint: it's 99% porn): https://civitai.com/ Large Language Models will get there soon enough (check out Vicuna if you're interested). Then I expect video generation tools will be next.

I know this post is mainly about implementing AI tools in the "real economy", but if AI steering can't be effectively imposed at the model level, anyone in the free market can come up with alternatives that compete with the AI steering contracts adopted by corporations. Unions might be able to wrangle control through contracts in specific chokepoint industries like dockworking, but this won't be replicable generally. An ad agency pledges to use real artists? A competitor will be more than happy to lower its labor costs by $100,000 per artist not needed. Every movie studio pledges not to use AI in scriptwriting or special effects? Well, cue the VCs to fund a new studio with no such ethical qualms that makes a big-budget movie on the cheap.

I don't see myself as a fatalist or doomsayer on AI. But I do think we should be serious about how much impact this will have on the economy and our day-to-day lives just a few years from now (and yes, I'm 100% supportive of UBI or a broader welfare state). I'm not predicting mass unemployment, but I think it's inevitable that AI will displace labor to some degree, and we simply don't know right now whether the economy will generate new jobs for these people (and that's before you get to questions about whether the new jobs will pay less and increase inequality). We've already seen how toxic deindustrialization has been to US and European politics. Modernity is now coming after the knowledge workers too.


I appreciate your thoughts on keeping restraints on innovation minimal; this is good for innovation, and we need much more innovation in our world. My counter-concern is that innovation (and in this particular case, AI) is moving so fast that I'm not sure humans and their social structures have enough time to evolve and adapt to new tech and solutions without psychological and biological impacts. I agree our models of the future are very weak (think climate change or war), but I dislike plunging blindly ahead without thoughtfulness about ethical and social impacts. How we balance optimism and caution about the future seems to be a good question.


This is great! But I'd like to make a request.

When progressive bloggers write about Universal Basic Income, can they please make sure always to remind their readers that cash child benefits are a form of UBI, specifically UBI for families with kids?

I think this is worth emphasizing for a couple of reasons. Alleviating child poverty should be a higher priority than alleviating poverty among adults, so even if you support UBI for all ages a cash child benefit is the best way to get started. And because it covers fewer people and is cheaper, it's more feasible than introducing full UBI in one fell swoop.

It was so disappointing to see young leftists campaigning online for student debt relief while ignoring the fact that there was a brief window to make the COVID child benefit permanent. If people had started describing child benefit as the first stage of UBI--maybe not to all audiences, but to left-wing audiences--I think some of those people would have been on board.


I'm reminded of Player Piano by Vonnegut. Welfare can replace income, but it can't replace meaning. Like it or not, the vast majority of Western men derive their sense of spiritual meaning from their ability to produce economically useful things.

Trying to direct AI is a losing path. Figuring out how to distribute the bounty that may come when most human needs can be met without human labor is certainly important, but it's less than half the battle.


It's very reasonable to conclude that:

1. owners of capital will be the chief early beneficiaries as those with cash roll out AI improvements every time they're available, creating a flywheel effect

2. work as we know it is likely to be disrupted in ways we cannot imagine, but that doesn't mean a net loss longer term... short term, though, not so great for most workers, as they have to deal with some chaos

The more I consider AI's relationship to work (I've been writing about this for 15 years now!), the more I come to the conclusion that something like UBI will be necessary in a postmodern AI world.


Based on the past, it's easier to imagine ways of compensating and retraining workers than to actually do it.


Another great article! I think you're absolutely right about the impossibility of forecasting the impact of technologies in any even remotely detailed way. In part that's because it's practically impossible to predict whether, when, and how distinct technologies will cross-fertilise to create new impactful technologies. I recall that on the eve of the launch of the iPhone, Orange's CEO was describing their mobile data revenues as 'rounding errors', and The Economist was predicting that the future of mobile phones was a range of different handsets each designed to perform a specific function. Then a person at Apple with a general design remit happened to run into someone in another department working on laptop touchpads, and the touchscreen was born. Once technologies (or indeed any variables) start to interact like this over a prolonged period of time, forecasting starts to resemble gambling.

My slight concern about the Dutch and German examples is whether this type of approach is 'available' in the English-speaking economies. Most Northern European countries (plus Japan and Korea) have vertically disciplined cultures which engender stable partnerships (between owners, management, and workers, or between links in the supply chain) and a long-term focus in business. But these features of the European 'social market' model don't seem to come as naturally to businesses in the 'Anglo-Saxon' economies - US, UK, Canada, Australia, New Zealand - which are more fluid, innovative, and individualistic but less stable and less naturally distributive of wealth and income. That's why I think some form of government intervention may be required in countries like the UK and US, or at least a readiness to intervene swiftly and retrospectively for the sake of maintaining a minimum level of social cohesion. Preferably something like Basic Income, to strike the right balance between social welfare and the relatively fluid labour force that appears to be a natural feature of Anglo-Saxon economies.


You’ve got that right in re fiduciary duty, rules against purchasing penny stocks, etc. The mutual fund industry has more charters than a booze-cruise boat in Cabo San Lucas. (Double entendre intended with ‘charters.’) Seriously, the number of hair-splitting questions from analysts on quarterly conference calls is brain-numbing. With some regularity, the same question will be asked two or three times (only the phrasing changes). One has to wonder if analysts are hard of hearing, have inferior phone/audio equipment, or are exercising egos or trying to impress their bosses. Maybe there is a practical application for AI replacement of stock analysts?
