A leaderboard went up inside Meta a few months ago. It measured, per engineer, the number of AI tokens consumed. It was supposedly "one data point among many" for performance reviews. Within weeks, Meta engineers were asking Claude to summarize docs they could read faster themselves. Salesforce set a minimum $175 monthly AI spend per engineer; people started racing to hit the floor. Microsoft engineers ran autonomous agents overnight to build junk so the number would climb. Meta eventually pulled the leaderboard — but people kept token maxing anyway, because in big tech, you don't forget that the metric ever existed.
Gergely Orosz (The Pragmatic Engineer) described all of this on the AI Engineer Summit stage with Swyx, and the historical rhyme is perfect. Ten years ago, early developer productivity tools measured lines of code and PR count. That was stupid, and everyone knew it was stupid, and people optimized for it anyway. The same thing is happening now — just dressed up as "AI adoption."
Why the push exists, though
The uncomfortable truth is that leadership didn't invent token maxing out of stupidity. Six months ago, CTOs were genuinely worried their engineers weren't using AI tools at all. One CTO at a Netherlands e-commerce company told a room of peers: "My engineers are skeptical. They're not using it." On existing codebases with older models, they had a point — the tool didn't find the bug, didn't refactor well, didn't earn its keep.
At the same time, Anthropic kept publicly saying a huge share of their own code was written with Claude Code — and their revenue line went vertical. So leadership, unable to tell correlation from causation, decided the safer bet was: force adoption. Coinbase's CEO Brian Armstrong literally emailed the company that anyone not using AI tools within a week would "have a conversation," then fired an engineer that Saturday. On $300–400K base salaries, the message lands.
Token maxing is the downstream result. It's leetcode reborn — an absurd ritual that selects for people willing to perform it to keep the job. The people who actually get value out of AI coding are the ones who ignore the metric entirely and just use the tools to ship.
The mech suit, not the manager
The other framing big tech is getting wrong: "you're not an engineer anymore, you're an engineering manager." Gergely and DHH both think this is bullshit. The whole reason people resist becoming engineering managers is what they'd have to give up: the product, the feedback loop, the hands-on craft. Working with agents doesn't cost you any of that, and it doesn't come with the manager's pain either. You don't mediate conflict between agents. You don't do 1:1s with them.
DHH's metaphor: it's a mech suit. You're still the pilot. You're just doing seven things at once. The feedback loop is days, not quarters. The decisions you make compound in weeks instead of in six months. If anything, the role looks more like "tech lead" than "manager" — you're orchestrating without being removed from the work.
The role is compressing
Pre-AI, venture-funded startups were already running a compressed version of engineering. Dedicated QA teams disappeared into "every engineer writes tests." Dedicated devops teams disappeared into "every engineer owns their deploys." Product engineering emerged as a real title. AI is pushing the same compression one more notch. Junior engineers are now expected to reason about the business, plan at the architecture level, ship end-to-end.
One concrete signal: a VP at John Deere — a 200-year-old tractor company — told Gergely their "two-pizza teams" are now one-pizza teams. The smallest units in a company that has no reason to move fast are getting smaller. Everywhere else, it's already happened.
The real action is internal infra
The most underrated observation in the whole conversation: the biggest AI investment at large tech companies is not the product you see. From the outside, Uber looks like it isn't shipping AI features. Inside, they are rebuilding the entire engineering stack: custom background coding agents integrated into their monorepo, an MCP gateway wired into service discovery, on-call workflows rebuilt around AI, code review systems that auto-categorize changes by risk. Airbnb, Intercom, Meta, Microsoft — every one of them is doing a version of this.
Three reasons it's rational:
1. It's the low-risk way to get hands-on with AI. You don't want your first AI feature to be customer-facing slop. Internal tooling is a safe training ground.
2. Their codebases will never fit in any context window. Off-the-shelf vendors (Cursor, Claude Code, Copilot) are built for typical codebases. The hyperscalers have code that is an order of magnitude larger and messier. Custom tooling + basic agents will beat the vendor stack on their own codebase.
3. Anything with "AI" in the name gets funded. Ask for two engineers for the platform team and get nowhere. Ask for two engineers for "agent experience" and it's done.
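To make the "MCP gateway wired into service discovery" idea concrete, here is a minimal hypothetical sketch of the pattern: one choke point where every agent tool call gets audited before it reaches an internal service. All names here (`MCPGateway`, `ToolCall`, the `oncall.page_owner` tool) are invented for illustration — this is not Uber's or anyone else's actual implementation, and a real gateway would populate its tool table from service discovery rather than by hand.

```python
# Hypothetical sketch of an internal "MCP gateway": a single routing
# layer between coding agents and internal services. Not a real API.
from dataclasses import dataclass
from typing import Callable


@dataclass
class ToolCall:
    tool: str   # e.g. "oncall.page_owner" — an invented tool name
    args: dict


class MCPGateway:
    """Routes agent tool calls to internal services, giving the company
    one place to hang auth, audit logging, and rate limits."""

    def __init__(self) -> None:
        self._tools: dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, handler: Callable[[dict], dict]) -> None:
        # In a real deployment this table would be filled from the
        # company's service-discovery registry, not hand-registered.
        self._tools[name] = handler

    def dispatch(self, call: ToolCall) -> dict:
        if call.tool not in self._tools:
            return {"error": f"unknown tool {call.tool!r}"}
        # Audit-log every agent action before it touches a service.
        print(f"audit: agent -> {call.tool}({call.args})")
        return self._tools[call.tool](call.args)


gateway = MCPGateway()
gateway.register("oncall.page_owner",
                 lambda a: {"paged": a["service"], "ok": True})

result = gateway.dispatch(ToolCall("oncall.page_owner", {"service": "checkout"}))
```

The point of the pattern is the single `dispatch` choke point: whatever the agents are, every action they take against internal systems passes through one auditable, rate-limitable layer.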
If you're at a large tech company and you're not building an internal MCP gateway, Gergely's line was: "what are you even doing?"
Why Shopify was right
The Shopify story is the one to remember. In 2021 — before Copilot was a product — Shopify's head of engineering heard it was being developed internally at GitHub. He DM'd Thomas Dohmke, the GitHub CEO, and said: "I'd like to get this rolled out to all of Shopify. In exchange you get feedback from 3,000 engineers, honestly, forever." It wasn't for sale. Shopify got it anyway, a full year before anyone else.
The tool wasn't great initially. Shopify burned real money and ate real engineering churn. They kept iterating. They became the first company onboarded to every major AI tool that followed, with unlimited budget.
Gergely's read: Shopify is trading churn + expense for being six months ahead. That trade is not rational for most companies — if your business is a physical product or a legacy vertical, wait it out, the tools will catch up. But if your company competes on technology, paying for the churn is worth it. Plus, at that point, AI adoption is a recruiting signal: "Come work here, you'll have every tool before your friends do."
The weird thing is that every tech company is doing this at the same time. So it looks performative. It isn't. It's rational individually. It just happens to be universal.
The takeaway
Three things to remember, regardless of whether you're writing code or running a company:
- Don't measure token count. Measure output. Every time a company makes a metric the goal, smart people game it.
- Treat AI tooling as a mech suit, not a manager. The value is in what you can now ship alone, not in "managing" anything.
- If you're in tech, eat the churn to be six months ahead. Shopify's trade is available to most of us. The cost is real. The alternative is worse.
Source: How AI Is Changing Software Engineering — Gergely Orosz, The Pragmatic Engineer