Tuesday, 21 April 2026

Your Job Is Not Your Task

Jensen Huang told a story at Stanford last week that should be required listening for anyone planning their career or running a company through the AI transition. It's about radiologists, and it's the cleanest mental model I've heard for thinking about which jobs AI eliminates and which it doesn't.

Ten years ago, one of the most influential computer scientists of his generation - one of the actual founders of modern AI - told the world that radiology was the worst career a young doctor could pick, because AI would read scans better than humans within a decade. About the technology, he was completely right. AI now permeates every aspect of radiology. Almost every scan is assisted by AI. The volume of scans being read has gone through the roof.

The number of radiologists also went up. Not down. Up.

Huang's framing for why is the line worth tattooing on the inside of every founder's eyelid: the purpose of your job and the tasks that you do in your job are related, but they are not the same thing.

The radiologist's task is to read scans. That got automated. The radiologist's purpose is to diagnose disease, work with patients, and partner with other doctors. That demand only grew - more patients can be admitted, more conditions caught, more revenue earned per department, so hospitals hire more radiologists. The flywheel only collapses if the people who confuse the task with the purpose start steering young doctors away from the field. Which is exactly what happened: there is now a shortage of radiologists in the United States, caused largely by the warning that the field would die.

This same trap is being set right now in software, design, marketing, sales, and law.

Huang volunteered the second example on himself. "What I do for a living is typing and talking. Both have been automated to superhuman level by AI. And I'm busier than ever." His engineers tell the same story. NVIDIA's coders all use agentic AI. The good ones - the ones being promoted and poached - are the ones who are best at working with the agents. The bottleneck used to be writing the code. Now the bottleneck is having the next idea, because the agents have already finished what you asked them to do and they're "perpetually harassing you in text" asking what's next.

Then Huang says something that explains why the productivity gain doesn't compress headcount the way most pundits assume. Pundits assume NVIDIA needs to ship a fixed amount of code per year - say a billion lines - and if AI lets a thousand engineers do what ten thousand used to, then nine thousand are out. But that's not how it works. A billion lines of code was the most they could do with that many people in that much time. The cap was always human bandwidth, not ambition. Huang wants to write a trillion lines of code. He'd hire more people to write a trillion lines, not fewer to write a billion.

This is the practical version of the same point: task automation doesn't shrink the org if the org's purpose is bottlenecked by ambition rather than by hours. The companies that contract are the ones whose purpose really was just to do the task at fixed throughput. The companies that grow are the ones whose ambition was always being constrained by the throughput, and now isn't.

The single quote founders should put above their desk:
"It is unlikely that most people will lose a job to AI. It is most likely that most people will lose their job to somebody who uses AI."

Two practical reads of that:

1. If you're hiring, the test isn't "have they used AI?" It's "are they faster than the humans who don't?" Treat AI fluency the way you treated Excel fluency in 2002 or English fluency in 1995 - non-negotiable for anyone in a role where the agents are now reachable.

2. If you're working, separate your task from your purpose. Then ruthlessly delegate the task to the agents and reinvest the saved hours into the purpose. The radiologist who learned to use AI now reads more scans, catches more disease, and is the most valuable hire in the department. The radiologist who refused is the one whose job is being restructured.

Congressman Ro Khanna's contribution at the same panel sat alongside this and is worth taking seriously: the productivity gains will not be evenly distributed unless someone makes them so. Past industrial revolutions ended with more jobs but spent twenty miserable years getting there. Workers' bargaining position during the adoption phase determines whether the gains end up only with capital. That's a policy question, but it's also a culture-of-the-company question for any founder reading this.

The radiologist parable explains where the jobs go. On its own, it doesn't close that gap.

Source: U.S. Leadership in AI with Jensen Huang and Congressman Ro Khanna - Stanford GSB
