I think you're right that it's very hard to tell right now what AI does or will do to the job market.
But the effect in Brynjolfsson et al. is actually *exactly* what I'd expect to be happening if AI were to start having a major impact in these industries.
Think about it: To use current AI tools and capabilities well - especially in areas such as coding, but also law, supply chain management, whatever - you need a certain level of expertise and domain knowledge.
This is exactly what middle-aged developers and managers have. They are the ones that can prompt and set up AI systems in a way that makes sense and increases productivity in their respective industry and company. They have the expertise and a stake in making things better and faster (especially if they're switching jobs, as this often implies they're driven by a hunger for change that couldn't be fulfilled when staying put).
Entry-level workers on the other hand - while possibly adept at using AI tools for personal needs and wishes - will almost uniformly lack the deep domain knowledge and company- or industry-specific expertise to prompt and set up AI correctly in their new job. That takes time.
I'm also quite skeptical whether we really are starting to see the impact of AI on the labor market. But as to what we'd be seeing if it were to happen, this is exactly what I would expect.
I was going to write the same response. I’m a partner at a law firm and we use many tools regularly. What the tools do is cut almost entirely into younger associate work. Diligence, research and basic drafting are most easily replaced. What clients still need is practical advice, strategic planning and higher-level work (along with double-checking all of the diligence, research and basic drafting).
Now I will say we’re cognizant of this and are working on training our associates in more practical ways for an AI future, so we are continuing to hire as normal. But I could see other firms (and industries) taking a different route.
Exactly. A new type of coding test - at least I’ve heard of more than one example - is to ask people to use ChatGPT (or whatever) within their IDE based on a given requirement, which is often quite demanding. The interviewee is expected to review the code and fix it if there are errors, or to reject code if it’s inefficient for whatever reason. Architectural design matters too - the candidate has to explain the intended code layout, not just ask the AI for code.
Then the interviewee is expected to explain what was done and why, and correct bad or redundant code. When the AI fails, it’s often on an obscure compiler issue which only a senior could fix.
This is a hard test for juniors.
This is the model we are beginning to use in our interviews.
How many years of experience do you think it takes to be able to master this sort of "obscure compiler issue" type thing? I'd guess this is something most competent engineers would pick up in the first year or so of working on a team.
Do you see this failure mode just in interviews with recent grads, or also with engineers with 1-2 years of work experience?
(also, I agree that skill with architecture/systems-level stuff does take several years to develop and there's really no substitute for time there)
Well, I don’t know exactly, since this is all anecdotal and there’s a lack of empirical data so far. However, I’ve myself seen ChatGPT fall over on languages that I’m not very familiar with but have some 2-3 years of experience with, admittedly in the past. When that happens I google the problem, because asking the A.I. often leads you down a rabbit hole. In a test environment that would put you behind the guy who could fix it immediately. And worse - the interviewers have to worry about the A.I. generating code that it can’t fix, that can’t be easily googled, and that needs a senior engineer to fix.
It’s the same idea as having extremely well trained pilots and auto piloted planes. Just in case.
Edit: edit to say that this applies to senior engineers who could even be in their twenties so you are right on that.
IMO, this could cut either way. Younger workers are much more willing and likely to adopt new technology than older workers. They also tend to be more flexible and adept at adjusting to a radical shift in their work flow.
When I was a young embedded systems coder, a handful of us upstarts tried to get the team to use the newfangled "C" programming language instead of coding everything in assembly language. To assuage performance concerns, we made sure it was easy to switch between C and assembly language when needed (90% of the code wasn't performance sensitive). It was quite an effort to persuade the older coders.
The only way we could get everyone onboard was to assure folks that it was and would stay completely optional, so coders could write their stuff in whatever language they preferred. Even a year or two after the switchover, barely any of the over-30 crowd had written any C code at all, while all the under-30 folks were writing everything they could in C.
@MagellanNH You're right that the willingness to adopt new technologies is crucial - and usually it's more widespread among younger people.
However, in the specific case of AI it's not entirely about adopting a completely new technology. Nor about replacing a perhaps outdated and now redundant way of doing things with a completely new way. That's often how new technology works, and it's quite natural that incumbents with an advantage in existing tech would be reluctant whereas new entrants would be eager to learn and adopt it.
Current AI, however, is often more of a tool to leverage existing knowledge and expertise. The more you already know about your specific processes, industry, or company, the better you can leverage AI tools to become more productive, efficient, automate stuff, be faster and still accurate etc.
And that favors a very peculiar kind of person: Those with lots of domain expertise and knowledge yet *still* willing and eager to change.
And that's almost precisely the kind of person who is a mid-level manager or developer yet would still opt for a change of job or career.
They have both expertise and a clear eagerness to try new things and shake things up.
You make good points, but imo you're over-indexing the average expertise of mid level employee. There's a very wide distribution of capability at the mid-level.
You've got like 10-20% of engineers that become superstars when they get to the mid level. They have a mastery of both the coding and systems architecture world, along with solid industry domain expertise. I totally agree that 20% is going to kill it with AI and leave everyone else in the dust.
But you've got another 80% of mid-level employees that are just logging years (and pay increases) and dedicated to business as usual and clocking out at 5pm on the nose so they can get home to their families. They tend to be barely more productive than most new hires are after their first year. That large cohort of mid-level employees is going to get killed by AI adoption because they'll have the double negative of being older and more lazy plus being more stubborn and resistant to change.
I suppose it just depends on what the ratio and productivity levels are of these two cohorts of mid-level employees. I think it's something like 80/20, but maybe I'm wrong about that.
But that's precisely why I'm saying it's very consistent with that story that companies would continue to *hire* mid-career managers or coders. Because those are often the ones that are more risk-hungry, more ready to try new things, have more experience from different projects (often built from scratch), and are therefore more likely to push AI adoption in established companies.
Those mid-level managers or coders that are not so eager and willing to change will in the meantime tend to stay employed as they almost always do during economically stable times. Most companies only start to hire on that level in response to a big shock, be it for their industry, company, or a recession.
Ah, got it. You're saying firms that are hiring are trying to skim off as many of that top 20% of the pool as they can and the other 80% tends to stay put and isn't on the job market?
That sort of assumes there aren't huge layoffs among that 80% cohort of lower performing mid-career employees, or at least that companies will be good at screening them out in the hiring process. That's a reasonable enough assumption I suppose.
I guess it'll come down to how much churn (especially layoffs) there will be among those just barely good enough mid-career workers.
My view is probably skewed because during most of my career managing engineers, we were always understaffed and we tended to hang on to lower-productivity older workers. If a worker was competent and produced reliable results, but say at 70% of the rate of other workers, we'd probably hang on to them indefinitely just because 70% was better than 0%. So that does support your argument that the lower 80% mid-career workers won't be on the job market, at least until times get really tough.
OTOH, if older workers get a rep for not being able to adapt to new AI-based workflows, employers might be more aggressive with firing the lower productivity part of that mid-career cohort since the gulf between their productivity vs a new hire with a year of experience could increase dramatically vs. today.
Yes, and I think that's still the case in most companies. There are quite good studies showing that most companies tend to hang on to most of their staff (and especially mid-level management) in economically reasonably stable times.
It's just too much of a hassle, risk, and internal upheaval to change too much there, as long as the company / industry is reasonably profitable and cruising along.
That's one of the reasons why mergers and acquisitions are done (to restructure staff & mid-management), or why consultants are brought in (to get “outside” backing for restructuring mid-level management), and so on. Most businesses avoid this most of the time.
But you always have some churn. Some of it because people themselves decide to leave - and often (though not always) they are the ones that want to do things differently. That doesn't always work out necessarily, but it is a different approach.
And some churn of course also due to underperformance, no cultural fit, restructuring etc. But even those people are then often forced to try something new. That's why they are an interesting demographic for hiring during times of technological upheaval.
They have skills and expertise, but are not yet fully enmeshed in the usual way of doing things within a specific company.
On a side note: There have been quite solid econ papers on how innovation more generally diffuses in an economy. Pretty much the strongest lever is employees switching companies (or setting up their own company) and bringing the knowledge with them, but applying it in a new context.
I was about to edit my comment to express this same point more directly, but you've enunciated your argument very well so there's no need for me to do so.
Like many people, I also came to say this exact same thing to Noah.
I work in marketing and this IS the trend we’re seeing and it *absolutely* makes sense.
Entry-level employees have no experience. They don’t typically understand real-world business scenarios. They haven’t found their confidence yet, so they have trouble speaking to clients effectively. They don’t know how to manage their time well, yet.
All of these things take, in my experience, around 10-15 years (at least) to really develop… So by the time you’re 35-40 you’re at peak levels: enough experience to really command a room, but young enough that you’re still nimble with technology.
So yeah… these results are exactly what I’m seeing in the real world.
This. Someone has to explain to the AI what needs to be done. Imagine if you saw a huge supply of free college grads - you would also see a spike in mid-level/management hires because you would need those trainers to exploit the new labor pool. Same story here.
One more thing - the prospect of AI "about" to displace a bunch of entry-level jobs changes expectations and behaviors NOW. Nobody wants to look like a chump by hiring fresh grads now and they become useless just as you finish training them because of AI advances. Better to tread carefully and see how things play out.
Dunno whether I’m hallucinating, but there seems an obvious answer to this. AI at the moment is v. powerful but prone to mistakes / making stuff up. Therefore, deployment at the moment is replacing young kids but needs older, wiser heads to check the output. An analogy might be Tesla’s FSD - clearly getting good but needs someone to keep an eye on it.
Therefore, Brynjolfsson’s finding is exactly what you’d expect where companies are replacing new young recruits with cheaper AI but still need managers to make sure the AI doesn’t make bad mistakes.
Obviously, when the AI error rate drops significantly (a path already seen with GPT5), then it comes for the managers too.
Though I'm not so sure when and how even a more accurate AI would be coming for the managers.
A major point of being a manager is making decision and giving directions - and taking responsibility.
This is likely to remain a major factor in any job or industry. Even if you kinda know that the AI *can* do and even decide things really well, there still needs to be a human in the loop at some point confirming that this is the direction to go into (or not).
It's like a really well-functioning machine that you still need to steer or at least give a destination and certain key instructions (and “increase the bottom line” is hardly enough).
"Think about it. Suppose you’re a manager at a software company, and you realize that the coming of AI coding tools means that you don’t need as many software engineers. Yes, you would probably decide to hire fewer 22-year-old engineers. But would you run out and hire a ton of new 40-year-old engineers? Probably not, no!"
I work at a software firm. We froze our intern program this year but are hiring a lot of very senior devs precisely to help us adopt AI before the competition. This is just my anecdotal data, but matches with what I've seen.
As a general statement, this is incorrect. Many younger engineers are brilliant and as productive as, or more productive than, senior ones. Of course, between two highly capable engineers, the one with more experience will usually have an edge in ability to design good software or figure out complex problems.
You might hire more experienced engineers if you need leaders, or specific knowledge about systems and environments. Typically not just because you expect more productivity.
I am a CS professor at a US university and generally speaking have been an AI skeptic (as in, I didn't believe that these models currently posed a major threat to my research field.) I've spent the summer testing ChatGPT 5 Thinking and o3 on various questions related to a textbook I'm writing, and the good/bad news is that... it's astonishingly good. It has a deep grasp of the material at a level that's probably above that of a typical advanced graduate student. The quality increase has been extremely rapid. (And I'm not working with the Research grade models, just the $20/mo Pro ones.)
They still make some cartoonish mistakes, things like swapping multiplication and addition in polynomial calculation algorithms, or misunderstanding the purpose of some algorithm elements. I feel great because I can still outthink the models and catch them doing dumb stuff occasionally, but it's also the same order of mistake that quick-thinking grad students run into as well (and I could find myself doing as well.) In one or more generations of this, I'm pretty worried.
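To make the "swapping multiplication and addition" failure concrete, here is a hypothetical illustration (mine, not from the thread): in Horner's method for evaluating a polynomial, exchanging the multiply and add steps yields code that looks almost identical but is quietly wrong - exactly the kind of slip a quick reviewer might miss.

```python
def horner(coeffs, x):
    """Evaluate p(x) = c[0]*x^n + ... + c[n] via Horner's method."""
    acc = 0
    for c in coeffs:
        acc = acc * x + c  # multiply first, then add: correct
    return acc

def horner_swapped(coeffs, x):
    """The plausible-looking 'swapped' variant: add first, then multiply."""
    acc = 0
    for c in coeffs:
        acc = (acc + c) * x  # wrong: every coefficient gets an extra factor of x
    return acc

# p(x) = 2x^2 + 3x + 5 at x = 10
print(horner([2, 3, 5], 10))          # 235
print(horner_swapped([2, 3, 5], 10))  # 2350
```

Both versions run without error and produce numbers of a similar magnitude, which is what makes this class of mistake hard to catch without actually checking the arithmetic.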
The biggest struggle I'm having is that it genuinely does not make sense for me (or my students) to be doing manual work without these tools. I'm concerned that if we're not training the next generation of students to integrate these systems right into their workflow, we're making a huge mistake. But I'm also worried that if these tools are overused, students won't learn anything. And obviously if progress continues like this for one or two more generations I have no idea what we're even doing here anymore.
It's entirely possible that the reliability and creativity problems won't be solved, but it's really hard for me to see the world looking the same in a few years *unless* there's a hard wall and we hit it very soon.
"I'm concerned that if we're not training the next generation of students to integrate these systems right into their workflow, we're making a huge mistake."
IMO, this aspect of AI's impact on software engineering as a discipline is getting way underplayed. I'm old and lived through a couple of tectonic shifts in software development, including the shift from using machine code to high level language coding (assembly -> C), then from high level to structured/OO programming (C -> C++ then Java).
Each of these shifts changed both the work flows and also what skills were most important for success. In a way, each shift redefined the role and requirements of being software engineer.
I have no guess where this ends up. Will product managers become the new coders? - again, no guess. I do know that the skills and even personality traits needed to be good at turning ideas and requirements into software systems will be so radically disrupted that today's best software engineers may be unrecognizable from those of the future. It will be a completely different job.
The purpose of much of modern education is to get students to the point where they'll be able to tell when the computer is giving them the wrong answer. Current AI just makes this more dramatic.
I am the CTO of a steadily-growing 50-person business software company. We haven’t hired fewer people because of AI yet. But for the next year I expect our AI tool budget for each software developer to grow from around $100/month to $1000/month. This money would otherwise go toward hiring new developers, so it means we will effectively hire 1–2 fewer people.
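As a rough sketch of that budget arithmetic (the developer headcount and junior salary below are my assumptions, not figures from the comment):

```python
# Back-of-the-envelope: growth in AI tooling spend vs. junior hires.
developers = 15                   # assumed developer headcount at a 50-person company
monthly_increase = 1000 - 100     # per-developer AI budget growth, $/month
annual_extra = developers * monthly_increase * 12
junior_cost = 90_000              # assumed fully-loaded junior developer cost, $/year

print(annual_extra)               # 162000
print(annual_extra / junior_cost) # 1.8 -> roughly 1-2 fewer hires
```

Under these assumptions the extra tooling spend lands at about $162k/year, which is indeed on the order of one to two junior salaries.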
As for which workers are affected, experienced developers are safe if they have deep expertise in specific technologies that helps them make better decisions than AI would. For these folks, AI doesn’t really change things much.
Versatile generalists are doing even better. AI gives them a huge advantage because they can now do many people’s jobs and avoid the coordination overhead that comes from running a team.
Those who are neither deep nor versatile have a much harder time finding a place to contribute. This includes most junior developers but isn’t limited to them.
I agree with Ben's point and second the sentiment. With one addition: the qualitative experience of prompting / managing AI agents is similar to the experience of managing junior software engineers, except with much faster feedback loops. Senior engineers and managers notice this, and realize that it's almost always faster to directly prompt the agent than to delegate to a junior engineer doing the same thing. So in effect, each hire of a mid-level to senior engineer is now the equivalent of a hire of that same engineer plus 1-2 junior engineers through the AIs they can prompt. Then since each senior worker only has so much capacity to manage [junior or AI] engineers, the AIs win by sheer speed of iteration.
I second Benjamin’s point. But although I find it quite plausible that AI caused the phenomenon in question, I do not think “Canary in the Coal Mine” is an appropriate analogy. I see no reason why this job loss should continue to spread. In fact, I would expect it to attenuate after a few years as companies realize that not hiring entry-level people now means not having experienced people later.
> In fact, I would expect it to attenuate after a few years as companies realize that not hiring entry-level people now means not having experienced people later.
Since when do corporations care about problems that are going to occur five or more years from now? This is a problem for the guy that takes over after you, therefore you don't have to worry about it today. 😆
While I would question the assumed value of "mentor or recommend for jobs", at the same time, from my own hiring experience (of some 20-odd years now) I have to say the Covid-period cohort has been really disappointing. So I would cast a jaundiced eye not just on faculty help with the job search, but on the overall package of education this cohort got coming out of the Covid period.
This of course is anecdote, and anecdote is not data - and I personally distrust reliance on personal anecdote - but it does align with wider real data. And I have to say I initially took my hiring team's complaints about the poor quality of recent-grad cohorts as just "older millennials complaining about younger ones". But then I inserted myself into the process and compared against our hiring notes and profiles from the prior ten years... yes.
Maybe they didn't have a dean to say stand up straight and teach them how to shake hands? Nearly every student I know in that cohort has struggled, assuming a piece of paper with a list of "accomplishments" suffices for self-presentation.
Certainly showing up to an interview in a hoodie... I dunno... I was aghast. And this isn't a working-class profile.
While I wouldn't put all the weight on faculty members alone, the whole end-mix of what that cohort got as education and socialisation was really bad.
I really thought my staff, only slightly older than this cohort, were exaggerating in complaining about "this current generation", and then I started participating to see for myself.
Forehead smacking. And not a handful of interviews.
And the wider data seems to say it's not mere anecdote: remote Covid learning and remote-only socialisation in 2020-22 were a complete disaster.
I've heard different things about what to wear to job interviews in tech - that it can actually be a bad idea to wear a suit because it doesn't make you look like a programmer.
Great piece. As a manager in tech I’m extremely skeptical of any claims about AI impacting hiring. The big tech layoffs that created the 2022 inflection point shown in those charts resulted in a massive pool of candidates competing over relatively few jobs. Subsequently the more experienced candidates were hired into new jobs ahead of the junior ones, who were also competing with fresh graduates.
The reason that “AI-exposed” jobs are the ones most affected is that the tech industry workforce happens to be composed of these types of jobs, and that’s where the layoffs happened as a result of the post-COVID interest rate surge.
This research matches our hiring philosophy right now.
The productivity gains mid/senior devs get from AI tools means we get higher output, while their experience makes them more capable of reining in the tendencies of AI tools to generate lots of technical debt.
I’ll take 1 mid-level dev + AI at $125k/year over 2 AI-equipped junior devs at $75k/year each.
My experience in a non-tech, but AI-affected and volatile, field (advertising) is that people did a whole bunch of hiring in 2021-2023 and then a whole bunch of laying off in '24 or so. So there are laid-off but experienced people who are un- or underemployed.
So why wouldn't you hire an experienced person rather than a junior when work does start ramping up again? Cost is the obvious answer — senior people are more expensive. But seniors can also get more done, especially without supervision, and may not be quite as expensive if they're recently laid off.
From what I've heard a similar thing happened in tech.
Not to be glib but… why are we surprised that occupations that are most suited to remote work and saw a big run-up during Covid are now seeing reduced hiring? It certainly seems like a waste of math to try to associate exogenous variables like AI exposure with these patterns.
We shouldn’t be surprised that a feature of our low hiring economy makes us look more like Europe with higher young unemployment as much of the economy loses its dynamism.
Can’t it just be as simple as the economy is slowing so hiring of college graduate slows. This has been happening for as long as I can remember. First we just hire to replace, then we don’t hire at all then we layoff.
I'm a little surprised you're not updating your core beliefs with a few recent observations about the technology itself, which at least a few of us called a couple of years ago. The release of GPT-5 in particular seems to have underscored a bunch of points:
- The only reason we discuss AI by using the word "intelligence" is because somebody in marketing gave it a good name that stuck
- AI clearly doesn't function anything like the human brain despite having (marketing again) "neural networks".
- It doesn't have the capacity to reason, and actual reasoning is more than word pattern matching.
- It's pretty clearly asymptoting. The models have ingested the whole Internet for several years now, so what's left is post-training - which doesn't scale, and seems to have trade-offs that make the models better at some tasks and worse at others.
The technology is useful, as an improved search engine and for several other specific tasks (editing, light coding, translating, generating shitty content). But this is not a technology that's going to divide the world into a before and after era. It's not going to be "generalized". And it's not going to kill us all. It's just a nicer search engine. Please put down the econo-techno-phile Kool-Aid and join the rest of us over here in reality.
Guess it’s a good thing I’m acclimated to perpetual anxiety. 🥳
I will note, however, that my current employer (the largest cloud company on Earth) has recently forced us engineering managers to only hire entry-level coders. Many teams are being constrained to college hires even to backfill attrition from senior roles.
I had been thinking for most of the past year, since AI is threatening to replace the kind of work that entry-level hires can do, that there’s going to be an upcoming calamity in a few years if the entry-level pipeline dries up, when companies run out of places to hire people from for mid-level positions.
Making a commitment to hire entry-level people and stick with them up until they become mid-level, even though they start out no more productive than an AI, seems like a solution to the problem. The entry level jobs were always more like apprenticeships anyway - now it needs to be more explicit.
Just thought of an interesting dynamic that evolves where firms that don’t do apprenticeships try to poach trained up workers from those that do. Could be very good for workers if the latter companies have to work hard to keep them.
Internal speculation is that it's both cost-cutting and rebalancing the pyramid. Internal marketing is that it's good for the long term, fresh ideas, training the next generation, and all that. We've always struggled to keep the middle tier of mid-career devs: the much discussed (on Blind) two-year cliff that the mid-career lemmings leap off of (right when the signing bonus stops paying cash and the stock portion of TC really kicks in). It's easy to hire a legion of college hires. Seniors that get hired from industry or survive the middle tier to get promoted seem to be job-hugging right now (or are true believers or are over-employed and know it). I smell the trough of disappointment looming over the event horizon. It's a race between the tariffs that stole Xmas and the spending bubble bursting methinks.
I think you're right that it's very hard to tell right now what AI does or will do to the job market.
But the effect in Brynjolfsson et al. is actually *exactly* what I'd expect to be happening if AI were to start having a major impact in these industries.
Think about it: To use current AI tools and capabilities well - especially in areas such as coding, but also law, supply chain management, whatever - you need a certain level of expertise and domain knowledge.
This is exactly what middle-aged developers and managers have. They are the ones that can prompt and set up AI systems in a way that makes sense and increases productivity in their respective industry and company. They have the expertise and a stake in making things better and faster (especially if they're switching jobs, as this often implies they're driven by a hunger for change that couldn't be fulfilled when staying put).
Entry-level workers on the other hand - while possibly adept at using AI tools for personal needs and wishes - will almost uniformly lack the deep domain knowledge and company- or industry-specific expertise to prompt and set up AI correctly in their new job. That takes time.
I'm also quite skeptical whether we really are starting to see the impact of AI on the labor market. But as to what we'd be seeing if it were to happen, this is exactly what I would expect.
I was also going to write the same response as well. I’m a partner at a law firm and we use many tools regularly. What the tools do is cut almost entirely into younger associate work. Diligence, research and basic drafting are most easily replaced. What clients still need is practical advice, strategic planning and higher level work (along with double checking all of the diligence, research and basic drafting).
Now I will say we’re cognizant of this and working in training our associates in more practical ways for an AI future, so we are continuing to hire as normal, but I could see other firms (and industries) taking a different route
Exactly. A new type of coding test - at least I’ve heard of more than one examples - is to ask people to use ChatGPT (or whatever) within their IDE based on a given requirement, which is often quite demanding. The interviewee is expected to review the code and fix it if there are errors. Or to reject code if it’s not efficient for whatever reason. Architectural design matters too - not just asking for code by explaining the code layout.
Then the interview is expected to explain what was done and why, and correct for bad or redundant code. When the AI fails it’s often an obscure compiler issue which only a senior could fix.
This is a hard test for juniors.
This is the model we are beginning to use in our interviews.
How many years of experience do you think it takes to be able to master this sort of "obscure compiler issue" type thing? I'd guess this is something most competent engineers would pick up in the first year or so of working on a team.
Do you see this failure mode just in interviews with recent grads, or also with engineers with 1-2 years of work experience?
(also, I agree that skill with architecture/systems-level stuff does take several years to develop and there's really no substitute for time there)
Well I don’t know exactly since this is all anecdotal and there’s a lack of empirical data, so far. However I’ve seen myself when using ChatGPT fall over on languages that I’m not very familiar with, but have some 2-3 years experience with, admittedly in the past. When that happens I google the problem, because asking the A.I. often leads you down a rabbit hole. In a test environment that would put you behind the guy who could fix it immediately, and worse - the interviewers gave to worry about the A.I. generating code that it can’t fix, can’t be easily googled and needs a senior engineer to fix.
It’s the same idea as having extremely well trained pilots and auto piloted planes. Just in case.
Edit: edit to say that this applies to senior engineers who could even be in their twenties so you are right on that.
IMO, this could cut either way. Younger workers are much more willing and likely to adopt new technology than older workers. They also tend to be more flexible and adept at adjusting to a radical shift in their work flow.
When I was a young embedded systems coder, a handful of us upstarts tried to get the team to use the newfangled "C" programming language instead of coding everything in assembly language. To assuage performance concerns, we made sure it was easy to switch between C and assembly language when needed (90% of the code wasn't performance sensitive). It was quite an effort to persuade the older coders.
The only way we could get everyone on board was to assure folks that it was and would stay completely optional, so coders could write their stuff in whatever language they preferred. Even a year or two after the switchover, barely any of the over-30 crowd had written any C code at all, while all the under-30 folks were writing everything they could in C.
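For readers who haven't seen this pattern: the "easy to switch between C and assembly" approach usually means keeping one portable C implementation and an assembly alternative behind the same interface. Here's a minimal sketch using GCC's inline-assembly extension on x86-64; the function names and the checksum task are illustrative, not from the original team's codebase:

```c
/* Sketch of the C-with-an-assembly-escape-hatch pattern: the
 * non-performance-sensitive 90% stays in portable C, while a hot
 * routine can be swapped for hand-tuned assembly behind the same
 * signature, so callers never notice which one they get. */
#include <stddef.h>
#include <stdint.h>

/* Portable C version: byte-sum checksum of a buffer. */
static uint32_t checksum_c(const uint8_t *buf, size_t len) {
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += buf[i];
    return sum;
}

#if defined(__GNUC__) && defined(__x86_64__)
/* Same routine with the inner add done via GCC inline assembly,
 * standing in for a fully hand-written version. */
static uint32_t checksum_asm(const uint8_t *buf, size_t len) {
    uint32_t sum = 0;
    for (size_t i = 0; i < len; i++) {
        uint32_t b = buf[i];
        __asm__("addl %1, %0" : "+r"(sum) : "r"(b));
    }
    return sum;
}
#define checksum checksum_asm
#else
/* On other compilers/targets, fall back to the portable C path. */
#define checksum checksum_c
#endif
```

The key design choice is that `checksum` is a macro resolved at compile time, so the rest of the codebase is written entirely against the C interface and the assembly stays optional.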
@MagellanNH You're right that the willingness to adopt new technologies is crucial - and usually it's more widespread among younger people.
However, in the specific case of AI it's not entirely about adopting a completely new technology, nor about replacing a perhaps outdated and now redundant way of doing things with a completely new one. That's often how new technology works, and it's quite natural that incumbents with an advantage in existing tech would be reluctant whereas new entrants would be eager to learn and adopt it.
Current AI, however, is often more of a tool to leverage existing knowledge and expertise. The more you already know about your specific processes, industry, or company, the better you can leverage AI tools to become more productive, efficient, automate stuff, be faster and still accurate etc.
And that favors a very peculiar kind of person: Those with lots of domain expertise and knowledge yet *still* willing and eager to change.
And that's almost precisely the kind of person who is a mid-level manager or developer yet would still opt for a change of job or career.
They have both expertise and a clear eagerness to try new things and shake things up.
You make good points, but imo you're over-indexing on the average expertise of mid-level employees. There's a very wide distribution of capability at the mid level.
You've got like 10-20% of engineers that become superstars when they get to the mid level. They have a mastery of both the coding and systems architecture world, along with solid industry domain expertise. I totally agree that 20% is going to kill it with AI and leave everyone else in the dust.
But you've got another 80% of mid-level employees that are just logging years (and pay increases), dedicated to business as usual and clocking out at 5pm on the nose so they can get home to their families. They tend to be barely more productive than most new hires are after their first year. That large cohort of mid-level employees is going to get killed by AI adoption because they'll have the double negative of being older and lazier, plus more stubborn and resistant to change.
I suppose it just depends on what the ratio and productivity levels are of these two cohorts of mid-level employees. I think it's something like 80/20, but maybe I'm wrong about that.
I actually agree with you.
But that's precisely why I'm saying it's very consistent with that story that companies would continue to *hire* mid-career managers or coders. Because those are often the ones that are more risk-hungry, more ready to try new things, have more experience from different projects (often from scratch), and are therefore more likely to push AI adoption in established companies.
Those mid-level managers or coders that are not so eager and willing to change will in the meantime tend to stay employed, as they almost always do during economically stable times. Most companies only start to fire at that level in response to a big shock, be it for their industry, their company, or a recession.
Ah, got it. You're saying firms that are hiring are trying to skim off as many of that top 20% of the pool as they can and the other 80% tends to stay put and isn't on the job market?
That sort of assumes there aren't huge layoffs among that 80% cohort of lower performing mid-career employees, or at least that companies will be good at screening them out in the hiring process. That's a reasonable enough assumption I suppose.
I guess it'll come down to how much churn (especially layoffs) there will be among those just barely good enough mid-career workers.
My view is probably skewed because during most of my career managing engineers, we were always understaffed and we tended to hang on to lower-productivity older workers. If a worker was competent and produced reliable results, but say at 70% of the rate of other workers, we'd probably hang on to them indefinitely just because 70% was better than 0%. So that does support your argument that the lower 80% mid-career workers won't be on the job market, at least until times get really tough.
OTOH, if older workers get a rep for not being able to adapt to new AI-based workflows, employers might be more aggressive with firing the lower productivity part of that mid-career cohort since the gulf between their productivity vs a new hire with a year of experience could increase dramatically vs. today.
Yes, and I think that's still the case in most companies. There are quite good studies showing that most companies tend to hang on to most of their staff (and especially mid-level management) in economically reasonably stable times.
It's just too much of a hassle, risk, and internal upheaval to change too much there, as long as the company / industry is reasonably profitable and cruising along.
That's one of the reasons why mergers and acquisitions are done (to restructure staff & mid-management), or why consultants are brought in (to get “outside” backing for restructuring mid-level management), and so on. Most businesses avoid this most of the time.
But you always have some churn. Some of it because people themselves decide to leave - and often (though not always) they are the ones that want to do things differently. That doesn't always work out necessarily, but it is a different approach.
And some churn of course also due to underperformance, no cultural fit, restructuring etc. But even those people are then often forced to try something new. That's why they are an interesting demographic for hiring during times of technological upheaval.
They have skills and expertise, but are not yet fully enmeshed in the usual way of doing things within a specific company.
On a side note: There have been quite solid econ papers on how innovation more generally diffuses in an economy. Pretty much the strongest lever is employees switching companies (or setting up their own company) and bringing the knowledge with them, but applying it in a new context.
I was about to edit my comment to express this same point more directly, but you've enunciated your argument very well so there's no need for me to do so.
That's kind of you to say, thank you!
Like many people, I also came to say this exact same thing to Noah.
I work in marketing and this IS the trend we’re seeing and it *absolutely* makes sense.
Entry-level employees have no experience. They don’t typically understand real-world business scenarios. They haven’t found their confidence yet, so they have trouble speaking to clients effectively. They don’t know how to manage their time well yet.
All of these things take, in my experience, around 10-15 years (at least) to really develop… So by the time you’re 35-40 you’re at peak levels: enough experience to really command a room, but young enough that you’re still nimble with technology.
So yeah… these results are exactly what I’m seeing in the real world.
This. Someone has to explain to the AI what needs to be done. Imagine if you saw a huge supply of free college grads - you would also see a spike in mid-level/management hires because you would need those trainers to exploit the new labor pool. Same story here.
One more thing - the prospect of AI "about" to displace a bunch of entry-level jobs changes expectations and behaviors NOW. Nobody wants to look like a chump by hiring fresh grads now, only to have them become useless just as you finish training them because of AI advances. Better to tread carefully and see how things play out.
Dunno whether I’m hallucinating, but there seems an obvious answer to this. AI at the moment is v. powerful but prone to mistakes / making stuff up. Therefore, deployment at the moment is replacing young kids but needs older, wiser heads to check the output. An analogy might be Tesla’s FSD - clearly getting good but needs someone to keep an eye on it.
Therefore, Brynjolfsson’s finding is exactly what you’d expect where companies are replacing new young recruits with cheaper AI but still need managers to make sure the AI doesn’t make bad mistakes.
Obviously, when the AI error rate drops significantly (a path already seen with GPT5), then it comes for the managers too.
Agreed.
Though I'm not so sure when and how even a more accurate AI would be coming for the managers.
A major point of being a manager is making decisions and giving direction - and taking responsibility.
This is likely to remain a major factor in any job or industry. Even if you kinda know that the AI *can* do and even decide things really well, there still needs to be a human in the loop at some point confirming that this is the direction to go into (or not).
It's like a really well-functioning machine that you still need to steer or at least give a destination and certain key instructions (and “increase the bottom line” is hardly enough).
"Think about it. Suppose you’re a manager at a software company, and you realize that the coming of AI coding tools means that you don’t need as many software engineers. Yes, you would probably decide to hire fewer 22-year-old engineers. But would you run out and hire a ton of new 40-year-old engineers? Probably not, no!"
I work at a software firm. We froze our intern program this year but are hiring a lot of very senior devs precisely to help us adopt AI before the competition. This is just my anecdotal data, but matches with what I've seen.
>yes, you would probably decide to hire fewer 22-year-old engineers. But would you run out and hire a ton of new 40-year-old engineers?
Aren't more experienced software engineers more productive than fresh out of university graduate engineers?
As a general statement, this is incorrect. Many younger engineers are brilliant and as productive or more than senior ones. Of course, between two highly capable engineers, the one with more experience will usually have an edge in ability to design good software or figure out complex problems.
You might hire more experienced engineers if you need leaders, or specific knowledge about systems and environments. Typically not just because you expect more productivity.
I am a CS professor at a US university and generally speaking have been an AI skeptic (as in, I didn't believe that these models currently posed a major threat to my research field.) I've spent the summer testing ChatGPT 5 Thinking and o3 on various questions related to a textbook I'm writing, and the good/bad news is that... it's astonishingly good. It has a deep grasp of the material at a level that's probably above that of a typical advanced graduate student. The quality increase has been extremely rapid. (And I'm not working with the Research grade models, just the $20/mo Pro ones.)
They still make some cartoonish mistakes, things like swapping multiplication and addition in polynomial calculation algorithms, or misunderstanding the purpose of some algorithm elements. I feel great because I can still outthink the models and catch them doing dumb stuff occasionally, but it's also the same order of mistake that quick-thinking grad students run into as well (and I could find myself doing as well.) In one or more generations of this, I'm pretty worried.
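To make the "swapping multiplication and addition" slip concrete, here is a minimal sketch (my own illustration, not the professor's actual textbook material) of a polynomial evaluated with Horner's rule, alongside the operation-swapped version that looks almost identical but gives a quietly wrong answer:

```c
/* Horner's rule evaluates c[0] + c[1]*x + ... + c[n-1]*x^(n-1)
 * with one multiply and one add per coefficient. */
#include <stddef.h>

/* Correct version: multiply the accumulator by x, THEN add c[i]. */
double horner(const double *c, size_t n, double x) {
    double acc = 0.0;
    for (size_t i = n; i-- > 0; )
        acc = acc * x + c[i];
    return acc;
}

/* The "cartoonish mistake": the same loop with add and multiply
 * swapped. For c = {1, 2, 3} (i.e. 1 + 2x + 3x^2) at x = 2, this
 * returns 18 instead of the correct 17 - close enough to slip past
 * a careless reviewer, which is exactly why such bugs are nasty. */
double horner_swapped(const double *c, size_t n, double x) {
    double acc = 0.0;
    for (size_t i = n; i-- > 0; )
        acc = (acc + x) * c[i];
    return acc;
}
```

The point of the example is that the wrong variant is still syntactically plausible and numerically close on small inputs, which matches the comment's observation that these are the same order of mistake a quick-thinking grad student makes.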
The biggest struggle I'm having is that it genuinely does not make sense for me (or my students) to be doing manual work without these tools. I'm concerned that if we're not training the next generation of students to integrate these systems right into their workflow, we're making a huge mistake. But I'm also worried that if these tools are overused, students won't learn anything. And obviously if progress continues like this for one or two more generations I have no idea what we're even doing here anymore.
It's entirely possible that the reliability and creativity problems won't be solved, but it's really hard for me to see the world looking the same in a few years *unless* there's a hard wall and we hit it very soon.
"I'm concerned that if we're not training the next generation of students to integrate these systems right into their workflow, we're making a huge mistake."
IMO, this aspect of AI's impact on software engineering as a discipline is getting way underplayed. I'm old and lived through a couple of tectonic shifts in software development, including the shift from using machine code to high level language coding (assembly -> C), then from high level to structured/OO programming (C -> C++ then Java).
Each of these shifts changed both the work flows and also what skills were most important for success. In a way, each shift redefined the role and requirements of being software engineer.
I have no guess where this ends up. Will product managers become the new coders? Again, no guess. I do know that the skills and even personality traits needed to be good at turning ideas and requirements into software systems will be so radically disrupted that today's best software engineers may be unrecognizable compared to those of the future. It will be a completely different job.
The purpose of much of modern education is to get students to the point where they'll be able to tell when the computer is giving them the wrong answer. Current AI just makes this more dramatic.
"...once you had us think for you, then it became *our* civilization"
--Agent Smith
I am the CTO of a steadily-growing 50-person business software company. We haven’t hired fewer people because of AI yet. But for the next year I expect our AI tool budget for each software developer to grow from around $100/month to $1000/month. This money would otherwise go toward hiring new developers, so it means we will effectively hire 1–2 fewer people.
As for which workers are affected, experienced developers are safe if they have deep expertise in specific technologies that helps them make better decisions than AI would. For these folks, AI doesn’t really change things much.
Versatile generalists are doing even better. AI gives them a huge advantage because they can now do many people’s jobs and avoid the coordination overhead that comes from running a team.
Those who are neither deep nor versatile have a much harder time finding a place to contribute. This includes most junior developers but isn’t limited to them.
I agree with Ben's point and second the sentiment. With one addition: the qualitative experience of prompting / managing AI agents is similar to the experience of managing junior software engineers, except with much faster feedback loops. Senior engineers and managers notice this, and realize that it's almost always faster to directly prompt the agent than to delegate to a junior engineer doing the same thing. So in effect, each hire of a mid-level to senior engineer is now the equivalent of a hire of that same engineer plus 1-2 junior engineers through the AIs they can prompt. Then since each senior worker only has so much capacity to manage [junior or AI] engineers, the AIs win by sheer speed of iteration.
I second Benjamin’s point. But although I find it quite plausible that AI caused the phenomenon in question, I do not think “Canary in the Coal Mine” is an appropriate analogy. I see no reason why this job loss should continue to spread. In fact, I would expect it to attenuate after a few years as companies realize that not hiring entry-level people now means not having experienced people later.
> In fact, I would expect it to attenuate after a few years as companies realize that not hiring entry-level people now means not having experienced people later.
Since when do corporations care about problems that are going to occur five or more years from now? This is a problem for the guy that takes over after you, therefore you don't have to worry about it today. 😆
Agree. I see a failure of online education beyond Covid -- fewer faculty members actually physically met these 22-25 year olds. How can they mentor or recommend for jobs? https://hollisrobbinsanecdotal.substack.com/p/the-canary-in-the-classroom
While I would question the assumed value of "mentor or recommend for jobs" - at the same time, from my own hiring experience (of some 20-odd years now), I have to say the Covid-period cohort has been really negative. So while I'd cast a jaundiced eye on faculty help with the job search, the overall package of education this cohort got coming out of the Covid period was poor.
This of course is anecdote, and anecdote is not data - and I personally distrust reliance on personal anecdote - but it does align with wider real data. And I have to say I initially took the feedback of my hiring team complaining about the poor quality of recent grad cohorts as just "older millennials complaining about younger ones", but then once I inserted myself into the process and compared with our hiring notes and candidate profiles from the prior ten years... yes.
(this in finance not coding)
Maybe they didn't have a dean to say "stand up straight" and teach them how to shake hands? Nearly every student I know in that cohort has struggled, assuming a piece of paper with a list of "accomplishments" suffices for self-presentation.
Certainly showing up to an interview in a hoodie... I dunno... I was aghast. And this isn't a working-class profile.
While I wouldn't put all the weight on faculty members alone, the whole end-mix of what that cohort got as education and socialisation was really bad.
I really thought my staff, only slightly older than this cohort, were exaggerating in complaining about "this current generation", and then I started participating to see.
Forehead-smacking. And not just in a handful of interviews.
And the wider data seems to say it's not mere anecdote: remote Covid learning and remote-only socialisation from 20-22 was a complete disaster.
I've heard different things about what to wear to job interviews in tech - that it can actually be a bad idea to wear a suit because it doesn't make you look like a programmer.
No idea about tech, as it's not my sector. I run an investment facility, and there is zero doubt that the dress code does not include hoodies.
Great piece. As a manager in tech I’m extremely skeptical of any claims about AI impacting hiring. The big tech layoffs that created the 2022 inflection point shown in those charts resulted in a massive pool of candidates competing over relatively few jobs. Subsequently the more experienced candidates were hired into new jobs ahead of the junior ones, who were also competing with fresh graduates.
The reason that “AI-exposed” jobs are the ones most affected is that the tech industry workforce happens to be composed of these types of jobs, and that’s where the layoffs happened as a result of the post-COVID interest rate surge.
This research matches our hiring philosophy right now.
The productivity gains mid/senior devs get from AI tools means we get higher output, while their experience makes them more capable of reining in the tendencies of AI tools to generate lots of technical debt.
I’ll take 1 mid-level dev + AI at $125k/year over 2 AI-equipped junior devs at $75k/year each.
How cheap would a junior dev have to be to be worth it? "Unpaid intern"?
My experience in a non-tech, but AI-affected and volatile, field (advertising) is that people did a whole bunch of hiring in 2021-2023 and then a whole bunch of laying off in '24 or so. So there are laid-off but experienced people who are un- or underemployed.
So why wouldn't you hire an experienced person rather than a junior when work does start ramping up again? Cost is the obvious answer — senior people are more expensive. But seniors can also get more done, especially without supervision, and may not be quite as expensive if they're recently laid off.
From what I've heard a similar thing happened in tech.
Not to be glib but… why are we surprised that occupations that are most suited to remote work and saw a big run-up during Covid are now seeing reduced hiring? It certainly seems like a waste of math to try to associate exogenous variables like AI exposure with these patterns.
We shouldn’t be surprised that a feature of our low hiring economy makes us look more like Europe with higher young unemployment as much of the economy loses its dynamism.
Can’t it just be as simple as: the economy is slowing, so hiring of college graduates slows? This has been happening for as long as I can remember. First we just hire to replace, then we don’t hire at all, then we lay off.
I'm a little surprised you're not updating your core beliefs with a few recent observations about the technology itself, which at least a few of us called a couple of years ago. The release of GPT-5 in particular seems to have underscored a bunch of points:
- The only reason we discuss AI by using the word "intelligence" is because somebody in marketing gave it a good name that stuck
- AI clearly doesn't function anything like the human brain despite having (marketing again) "neural networks".
- It doesn't have the capacity to reason, and actual reasoning is more than word pattern matching.
- It's pretty clearly asymptoting. The models have ingested the whole Internet for several years now, so what's left is post-training - which doesn't scale, and seems to have trade-offs that make the models better at some tasks and worse at others.
The technology is useful as an improved search engine and for several other specific tasks (editing, light coding, translating, generating shitty content). But this is not a technology that's going to divide the world into a before and after era. It's not going to be "generalized". And it's not going to kill us all. It's just a nicer search engine. Please put down the econo-techno-phile Kool-Aid and join the rest of us over here in reality.
Guess it’s a good thing I’m acclimated to perpetual anxiety. 🥳
I will note, however, that my current employer (the largest cloud company on Earth) has recently forced us engineering managers to hire only entry-level coders. Many teams are being constrained to college hires even to backfill attrition from senior roles.
Any thoughts about the rationale for this?
I had been thinking for most of the past year, since AI is threatening to replace the kind of work that entry-level hires can do, that there’s going to be an upcoming calamity in a few years if the entry-level pipeline dries up, when companies run out of places to hire people from for mid-level positions.
Making a commitment to hire entry-level people and stick with them up until they become mid-level, even though they start out no more productive than an AI, seems like a solution to the problem. The entry level jobs were always more like apprenticeships anyway - now it needs to be more explicit.
Just thought of an interesting dynamic that evolves where firms that don’t do apprenticeships try to poach trained up workers from those that do. Could be very good for workers if the latter companies have to work hard to keep them.
Internal speculation is that it's both cost-cutting and rebalancing the pyramid. Internal marketing is that it's good for the long term, fresh ideas, training the next generation, and all that. We've always struggled to keep the middle tier of mid-career devs: the much discussed (on Blind) two-year cliff that the mid-career lemmings leap off of (right when the signing bonus stops paying cash and the stock portion of TC really kicks in). It's easy to hire a legion of college hires. Seniors that get hired from industry or survive the middle tier to get promoted seem to be job-hugging right now (or are true believers or are over-employed and know it). I smell the trough of disappointment looming over the event horizon. It's a race between the tariffs that stole Xmas and the spending bubble bursting methinks.