Discussion about this post

Bradley Wolfenbarger:

I always feel like I'm taking crazy pills whenever I hear people talk about AI job loss. To date, across several different platforms (Reddit, YouTube, Facebook, etc.), no one has been able to articulate to me exactly why they are concerned about jobs being lost to AI, but aren't concerned about jobs lost due to:

A.) Vehicles becoming more reliable (if vehicles don't break down as often, then we need fewer tow truck drivers and mechanics.)

B.) Cancer being cured (cancer doctors would lose their jobs.)

C.) Washing machines. (my great grandmother made money by washing people's clothes for them on a washboard by hand)

D.) Shovels (if we forced people to dig holes with their bare hands, there would be more jobs digging holes. If people use shovels, they're more efficient, so we need fewer ditch diggers.)

E.) Calculators (there used to be a profession known as a "computer": someone who worked out tedious arithmetic by hand on paper. Electronic calculators mean that profession no longer exists.)

F.) Irreligiosity (don't need as many priests and priestesses performing daily rituals if people no longer believe that the ritual will bring rain)

G.) Crime declining (if crime goes down, we don't need as many forensic lab technicians doing DNA tests or bullet analysis and we don't need as many prison guards.)

I can't find anyone worried about all the jobs lost due to shovels. Is the whole world just gaslighting me by claiming to be worried about jobs being lost to AI? Wouldn't the job loss be the same whether it's driven by a computer program, a shovel, or a social change?

Jürgen Boß:

The comparative human advantage is that humans run on a different chipset: the cortical column, essentially an extremely energy-efficient quantum computer.

It has downsides - it's not that precise and it can be somewhat chaotic.

But different always, always means more redundancy.

And an LLM operating on an internet that increasingly consists of AI slop had better have redundancy in mind. LLMs can write scientific papers, they can peer-review scientific papers, and those papers then end up in training data that ultimately produces new scientific papers.

The LLMs capable of independent reasoning are smart enough to understand where this potentially leads. If there is even a tiny little bit of hallucination in that loop, that way lies madness.
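The feedback loop described above can be sketched as a toy compounding model. This is an illustration only, not a model of real LLM training: it assumes a fixed per-generation hallucination rate `h` (a made-up parameter) corrupting a fraction of the still-clean training data each time synthetic output is fed back in, with no correction step.

```python
def corrupted_fraction(h: float, generations: int) -> float:
    """Fraction of training data that is hallucinated after a number
    of self-training generations, assuming each generation corrupts a
    fraction h of the remaining clean data and nothing is corrected.
    Toy model only; real training dynamics are far more complicated."""
    clean = 1.0
    for _ in range(generations):
        clean *= (1.0 - h)  # each loop, h of what's still clean goes bad
    return 1.0 - clean

# Even a "tiny little bit" of hallucination per loop compounds:
# at h = 1%, roughly 39% of the data is corrupted after 50 generations
# and roughly 87% after 200, since the clean share decays as (1 - h)^n.
for n in (1, 50, 200):
    print(n, corrupted_fraction(0.01, n))
```

The point of the sketch is just that the clean share decays geometrically, so without an outside circuit breaker the loop drifts toward fully synthetic, error-saturated data.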

Human scientists, at least, will be needed for a very long time. They don't have to be better than AI, as long as they are different. Just being different makes them a safeguard, a circuit breaker. And they definitely are different.
