Sunday, 3 May 2026

Diffusion Is The Product

Diffusion is cope, Dwarkesh tells Dario. The word's become a buzzword — a way to wave off AI progress when the model can't do the thing yet. Dario, who you'd expect to agree, doesn't. His position is sharper.

The model being smart isn't the constraint anymore. Procurement is the constraint. Legal review is the constraint. The 3,000 developers a CIO has to roll it out to — that's the constraint. Claude Code is the easiest enterprise sale Anthropic has ever made. It's still slower than selling to a Series A.

The capability curve and the diffusion curve are two different exponentials, and the second one is the one your business actually lives inside. This is why implementation is the moat, not the model. Anyone with an API key has the same intelligence. What they don't have is the workflow, the buy-in, the change management, the person who can tell the model exactly what to do by Thursday.

If the model is the raw material, diffusion is the product. Sell that.

Source: https://youtube.com/watch?v=n1E9IZfvGMA

If She Likes You, There Are No Rules


The Two States

There are only two states a woman can be in with you.

She likes you. Or she doesn't.

There is no third option. There is no "working on it." There is no "she's coming around." There is no "give it time." Time is not a strategy. Time is what you spend while the answer is already decided.


State One — She Likes You

If she likes you, there are no rules.

  • She'll make time when she has none.
  • She'll cancel plans she shouldn't cancel.
  • She'll reply at 2 AM.
  • She'll drive across the city for you.
  • She'll forgive the things she swore she'd never forgive.
  • She'll bend in ways her own friends will tell her not to bend.

The same woman who is hard, busy, principled, unreachable to the rest of the world — soft, available, almost obedient with you.

Not because she's weak. Not because you tricked her. Not because of some "technique" you read on the internet.

Because she likes you. That's the only reason. That's the whole reason.


State Two — She Doesn't Like You

If she doesn't like you, there is no access.

  • The texts go unread.
  • The calls go unanswered.
  • The plans never land.
  • Every small ask becomes a negotiation.
  • Every gesture is read in the worst possible light.
  • Every effort you make is filed under "trying too hard."

She will be polite, maybe. She will be civil, maybe. But behind the civility, she will quietly make your life a living hell.

And the cruelest part — she won't even feel cruel doing it.

To her, it's just admin. To you, it's the whole year.


What It Is Not About

This is not about charm.
Not about looks.
Not about money, status, gym, gifts.
Not about a six-figure job, six-foot height, six-pack.

None of it.

Those are inputs. The output is binary.

She likes you, or she doesn't.


What Men Do When They've Already Lost

Stop trying to win the argument.
Stop trying to be reasonable.
Stop trying to "communicate better."

Reasonable doesn't unlock the door.

Reasonable is what men do when they've already lost.


The Whole Thing

Either she likes you —

— or you're managing a problem that has no solution.

That's it. That's the whole thing.

Friday, 1 May 2026

Claude Code Was An Accident

Claude Code is now the most-copied product in AI. Every frontier lab has built one. Every VC has funded three. Anthropic didn't set out to build it.

What happened, per Dario: early 2025, he told the team "models are good enough now — experiment with using them on your own work." Someone wired up a CLI. Internally, everyone started using it. The name was Claude CLI before it was Claude Code. At some point Dario looked around and said: this has product-market fit inside the building. Let's launch it.

The interesting thing isn't the origin story. It's the filter. Anthropic only ships products where they're the customer. "We didn't launch a pharma company because we don't have the resources to know what we'd need." That's the whole test.

Most product decks answer the wrong question. They ask: is there a market? The better question is: are you in it? Because if you're not, you're guessing at everything a real user would feel in five minutes.

Build what you use. Launch what you can't stop using yourself.

Source: https://youtube.com/watch?v=n1E9IZfvGMA

Wednesday, 29 April 2026

The Diffusion Gap: Why AI Won't Reach the Enterprise on Silicon Valley's Timeline

There's a gap opening up between what a San Francisco startup looks like in 2026 and what a JP Morgan looks like in 2026, and it's going to get worse before it gets better. Box CEO Aaron Levie had a conversation with Martin Casado and Steven Sinofsky on The a16z Show that named the three forces pulling the tech industry apart from the rest of the economy — and the biggest mistake most executives are making about the shape of the next five years.

The diffusion gap

Silicon Valley assumes that because startups can now automate a marketing team with one person and Claude Code, every enterprise is a year away from the same leverage. That's wrong, and it's wrong for a reason that doesn't get discussed often: algorithmic thinking is rare.

Sinofsky's framing: walk into a marketing team of 50 people at a global brand. Ask each one to draw a flowchart of their own job. Maybe one person can do it. The others know how to do the job — they don't know how to describe it as a system. That's the bottleneck for AI adoption inside large companies. It's not GPU access, it's not budget, it's not even resistance to change. It's that most people in most jobs were never trained to think in terms of process, branches, and feedback loops. The training they got was apprenticeship — watch someone, copy them, get good. That doesn't translate into directing an agent.

The viral Anthropic example (one growth marketer replacing five or ten people via Claude Code) works because that person was already a systems thinker. Put the same tool in front of most marketing teams and they freeze.

The optimistic version of this, which Levie believes: the abstraction layer just moves up. Sinofsky's cousin joined a bank right before spreadsheets arrived. She couldn't use Excel, so she supervised a room of interns who could. Two years later she was the spreadsheet person, and the interns were doing something higher. That's the pattern. Today's "rocket scientist coordinating 42 agents" is temporary. In a year or two there's just a skill-level domain agent — "marketing-ish" — and the average marketer can ask it for things.

Fine for the ten-year outcome. Ugly for the two-year transition. And the gap in capability between a startup using agents from day one and a 200,000-person bank trying to retrofit them is widening, not narrowing.

Build for agents, not for humans

There's a subtle mistake in the discourse that Martin Casado called out, and it's worth internalizing: you are not "marketing to agents." Agents don't care about your documentation, your positioning, your interface polish. The one thing agents are genuinely great at is finding the right backend for the job. They choose based on cost, durability, semantics, the actual properties of the system.

This is a gift and a threat. The gift: merit actually wins. If your product is technically the best for the job, the agent will find it and use it, and you don't need to buy Gartner quadrants or run bigger trade-show booths. The threat: if your product is mediocre but has dominated because of brand or sales, the agent will skip you. "Which database should I use for this?" is a question the agent is going to answer from first principles now. Gartner is going to matter less.

(Casado's dry caveat: Silicon Valley will ruin the meritocracy quickly once the incumbents figure out how to pay-to-influence agent selection. Give it two years.)

The enterprise control problem is brutal

Every big tech company is confronting the same scenario right now: 5,000 employees, each one running Claude Code with access to the Box CLI or its internal equivalent. That's potentially 10,000 write operations per hour against the shared system of record — with agents creating nested directories without limit, conflicting with each other, racing each other on file moves, and accidentally leaking M&A data room contents because a prompt injection slipped in from a shared document.

The "treat the agent like a human" intuition doesn't work cleanly. Humans have a right to privacy. Agents don't. You can log in as the agent and audit its entire output; you can't do that with an employee. But if you can log in as the agent, the agent can't really operate as a separate identity at all — any agent it talks to could be routed back to you. So the mental model collapses.

Sinofsky drew the parallel that fits best: the open source era. For years, engineers debated in conference rooms how much open source code could be pulled into Windows or Office, what the licensing constraints were, how to manage the security posture. None of that debate happened in public. It took a decade to build the norms. The same debate is happening now — except it's happening on podcasts and X in real time, and everyone expects the end state to arrive in six months.

It won't. The enterprise lockdown is coming first. Startups will pull further ahead in that window.

The engineering compute budget — the most consequential line item of the next two years

Levie's final warning deserves its own highlighted paragraph. R&D spend for a typical tech company is 14–30% of revenue. If token spend runs at 2× your engineering team's cost, that's your entire EPS eaten. If it runs at 3% of it, you're fine. The delta between those two numbers is being debated today with effectively zero data.

CFOs are going to be forced to pick a number. Wall Street will hold them to it. Some will be spectacularly wrong. Some will get fired. Most of the economics people are using to model this right now are off by an order of magnitude — in the same way the PC market was underestimated (nobody predicted a thousandfold increase in MIPS-per-desk, or that software would sell separately from hardware), and the cloud was underestimated (nobody predicted that giving every engineer elastic compute would lead to a thousandfold increase in consumption, not a lateral migration of 60,000 servers).

The IBM analogy closes the loop. For years IBM sold more MIPS for fewer dollars every year. They priced mainframes on MIPS anyway, and didn't notice they were on a decreasing price curve, making MIPS cheaper faster than they could charge for them. Today's AI pricing is on the same trajectory. The companies pricing by token are going to be the ones pointing at their own decreasing curve in three years.

Three things to actually do about all this

1. If you run a SaaS business: stop treating your API like a compliance afterthought. The agent is your new main user. Your monetization model, your identity system, your rate limits — all of it gets redesigned around agent volume. The companies that get this right become infrastructure. The ones that don't become line items.

2. If you run an enterprise: resist the instinct to build a new governance layer for agents. Use the identity systems you already have — give the agent its own Gmail, its own phone number, its own RBAC role, its own payment method. Treat it like a separate identity with tight scopes. Adding a fresh policy plane on top of your existing mess just slows everything down.

3. If you're doing financial planning: assume your compute budget assumption is off by at least 10×. In both directions. Plan scenarios, not point estimates. CFOs who commit now to a precise number will be the ones getting fired later.
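The ranges above can be made concrete with three lines of arithmetic. A sketch of the scenario spread Levie describes (the engineering-share figure is an assumption of mine; the point is the spread, not any single number):

```python
def token_cost_pct_of_revenue(rd_pct: float, eng_share: float,
                              token_multiple: float) -> float:
    """Token spend as a percentage of revenue.

    rd_pct: R&D spend as a fraction of revenue (Levie cites 14-30%).
    eng_share: fraction of R&D that is engineering cost (assumption).
    token_multiple: token spend as a multiple of engineering cost
                    (the debated range runs from 0.03x to 2x).
    """
    return rd_pct * eng_share * token_multiple * 100

# Illustrative scenarios, not forecasts: R&D at 20% of revenue,
# engineering at 70% of R&D.
for multiple in (0.03, 0.5, 2.0):
    pct = token_cost_pct_of_revenue(0.20, 0.70, multiple)
    print(f"tokens at {multiple}x engineering cost -> {pct:.1f}% of revenue")
```

The spread runs from well under one percent of revenue to nearly thirty: the difference between a rounding error and the entire profit margin, which is exactly why a point estimate committed to Wall Street today is a gamble.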

The line that closed the conversation: "They thought the Dakotas would be covered in vacuum-tube warehouses to fight World War II. Then someone invented the transistor." AI is in its vacuum-tube-warehouse phase. Act accordingly.

Source: Box CEO Aaron Levie on the AI Adoption Gap — The a16z Show

Tuesday, 28 April 2026

Token Maxing, Mech Suits, and Why Shopify Was Right About AI

A leaderboard went up inside Meta a few months ago. It measured, per engineer, the number of AI tokens consumed. It was supposedly "one data point among many" for performance reviews. Within weeks, Meta engineers were asking Claude to summarize docs they could read faster themselves. Salesforce set a minimum $175 monthly AI spend per engineer; people started racing to hit the floor. Microsoft engineers ran autonomous agents overnight to build junk so the number would climb. Meta eventually pulled the leaderboard — but people kept token maxing anyway, because in big tech, you don't forget that the metric ever existed.

Gergely Orosz (The Pragmatic Engineer) described all of this on the AI Engineer Summit stage with Swyx, and the historical rhyme is perfect. Ten years ago, early developer productivity tools measured lines of code and PR count. That was stupid, and everyone knew it was stupid, and people optimized for it anyway. The same thing is happening now — just dressed up as "AI adoption."

Why the push exists, though

The uncomfortable truth is that leadership didn't invent token maxing out of stupidity. Six months ago, CTOs were genuinely worried their engineers weren't using AI tools at all. One CTO at a Netherlands e-commerce company told a room of peers: "My engineers are skeptical. They're not using it." On existing codebases with older models, they had a point — the tool didn't find the bug, didn't refactor well, didn't earn its keep.

At the same time, Anthropic kept publicly saying a huge share of their own code was written with Claude Code — and their revenue line went vertical. So leadership, unable to tell correlation from causation, decided the safer bet was: force adoption. Coinbase's CEO Brian Armstrong literally emailed the company that anyone not using AI tools within a week would "have a conversation," then fired an engineer that Saturday. On $300–400K base salaries, the message lands.

Token maxing is the downstream result. It's leetcode reborn — an absurd ritual that selects for people willing to perform it to keep the job. The people who actually get value out of AI coding are the ones who ignore the metric entirely and just use the tools to ship.

The mech suit, not the manager

The other framing that's getting big tech wrong: "you're not an engineer anymore, you're an engineering manager." Gergely and DHH both think this is bullshit. The whole reason people resist becoming engineering managers is the stuff they'd have to give up: the product, the feedback loop, the hands-on craft. Agents don't give you any of that pain. You don't have to mediate conflict between agents. You don't do 1:1s with them.

DHH's metaphor: it's a mech suit. You're still the pilot. You're just doing seven things at once. The feedback loop is days, not quarters. The decisions you make compound in weeks instead of in six months. If anything, the role looks more like "tech lead" than "manager" — you're orchestrating without being removed from the work.

The role is compressing

Pre-AI, venture-funded startups were already running a compressed version of engineering. Dedicated QA teams disappeared into "every engineer writes tests." Dedicated devops teams disappeared into "every engineer owns their deploys." Product engineering emerged as a real title. AI is pushing the same compression one more notch. Junior engineers are now expected to reason about the business, plan at the architecture level, ship end-to-end.

One concrete signal: a VP at John Deere — a 200-year-old tractor company — told Gergely their "two-pizza teams" are now one-pizza teams. The smallest units in a company that has no reason to move fast are getting smaller. Everywhere else, it's already happened.

The real action is internal infra

The most underrated observation in the whole conversation: the biggest AI investment at large tech companies is not the product you see. Uber looks like it isn't shipping features. Inside, they are rebuilding the entire engineering stack — custom background coding agents integrated into their monorepo, an MCP gateway wired into service discovery, on-call tooling re-tooled around AI, code review systems that auto-categorize changes by risk. Airbnb, Intercom, Meta, Microsoft — every one of them is doing a version of this.

Three reasons it's rational:

1. It's the low-risk way to get hands-on with AI. You don't want your first AI feature to be customer-facing slop. Internal tooling is a safe training ground.

2. Their codebases will never fit in any context window. Off-the-shelf vendors (Cursor, Claude Code, Copilot) are built for typical codebases. The hyperscalers have code that is an order of magnitude larger and messier. Custom tooling + basic agents will beat the vendor stack on their own codebase.

3. Anything with "AI" in the name gets funded. Ask for two engineers for the platform team and get nowhere. Ask for two engineers for "agent experience" and it's done.

If you're at a large tech company and you're not building an internal MCP gateway, Gergely's line was: "what are you even doing?"

Why Shopify was right

The Shopify story is the one to remember. In 2021 — before Copilot was a product — Shopify's head of engineering heard it was being developed internally at GitHub. He DM'd Thomas Dohmke, the GitHub CEO, and said: "I'd like to get this rolled out to all of Shopify. In exchange you get feedback from 3,000 engineers, honestly, forever." It wasn't for sale. Shopify got it anyway, a full year before anyone else.

The tool wasn't great initially. Shopify burned real money and ate real engineering churn. They kept iterating. They became the first company onboarded to every major AI tool that followed, with unlimited budget.

Gergely's read: Shopify is trading churn + expense for being six months ahead. That trade is not rational for most companies — if your business is a physical product or a legacy vertical, wait it out, the tools will catch up. But if your company competes in technology, paying for the churn is worth it. And at that point, AI adoption doubles as a recruiting signal: "Come work here, you'll have every tool before your friends do."

The weird thing is that every tech company is doing this at the same time. So it looks performative. It isn't. It's rational individually. It just happens to be universal.

The takeaway

Three things to remember, regardless of whether you're writing code or running a company:

  1. Don't measure token count. Measure output. Every time a company makes a metric the goal, smart people game it.
  2. Treat AI tooling as a mech suit, not a manager. The value is in what you can now ship alone, not in "managing" anything.
  3. If you're in tech, eat the churn to be six months ahead. Shopify's trade is available to most of us. The cost is real. The alternative is worse.

Source: How AI Is Changing Software Engineering — Gergely Orosz, The Pragmatic Engineer

Monday, 27 April 2026

Ship With Friction: Why The Slow Part Is The Only Part That's Still Yours

A security incident writeup went up on a company's forum last week: a config change had shipped by mistake. The auto-generated social preview rendered the company's tagline right next to it: Ship without friction. Armin Ronacher — the guy who wrote Flask — used that screenshot to open a conference talk, and the joke landed because everybody in the room had made the same mistake recently.

The uncomfortable argument Armin and his co-founder Cristina Poncela Cubeiro made: the friction engineers have spent a decade trying to remove is the friction that was doing the thinking. Remove it and you don't get speed. You get a codebase nobody can steer.

The psychological trap

The first few months of coding with Claude Code or Cursor feel like cheating. You prompt, the machine writes, you ship. Then everyone on your team is using it. Then your team's baseline expectation resets. Then the ambient pressure becomes: more output, faster cycles, shorter PRs. The gift becomes the tax. You no longer have the quiet moments to stop and ask whether this is the best way to implement the thing — because you're one prompt away from shipping it.

Armin calls this the gambler's loop. You don't know if the next prompt is the one that makes the product work, or the last drop of slop that tips the whole thing into an outage. You keep pulling the lever.

The more interesting part is the illusion underneath it. Because you're producing a lot of output very fast, you feel more efficient. You're not. You've just stopped doing the part of the work where you design.

The team composition shift nobody warns you about

Before agents, engineering teams were supply-constrained on the creation side. The balance between writing code and reviewing code was roughly okay. Now every engineer has 5–10× the production power, and nobody got 5–10× the review power.

Two downstream effects:

1. Pull requests pile up. The ones that aren't reviewed carefully get rubber-stamped.
2. The set of people shipping code expands. Marketing people ship code. Former-engineer CEOs ship code again. The number of entities — humans and machines — participating in code creation now vastly outnumbers the ones that can carry responsibility for it. And the machine can't carry responsibility.

The engineering team is still on the hook. But the production volume hitting them is no longer something they authored.

Why agents rot products faster than libraries

The single most useful technical observation in the talk: agents are excellent at libraries and mediocre at products.

Libraries have a tightly defined problem, a clear API surface, and a simple core. The agent can fit the whole thing in its context window, reason about it globally, and add features cleanly. That's why open-source maintainers are getting real leverage from these tools.

Products are the opposite. UI, API responses, permissions, feature flags, billing, background jobs — every change touches three other concerns. The agent cannot fit the global structure in its context. Locally it looks reasonable. Globally it's incoherent.

And the agent's failure mode is specific: it's been trained to write code that runs. That reward function is exactly what you don't want in a product. A human engineer writing a config loader feels bad when they write "if config missing, silently load defaults." The agent feels nothing. So the agent ships it. Two hours later you have database records written against the default config, and you don't know it yet.
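The contrast is easy to make concrete. A minimal sketch of the two config loaders — the function names and keys are hypothetical, chosen only to show the difference between code that runs and code that fails where a human can see it:

```python
import json
from pathlib import Path

# Antipattern: the "code that runs" reward function. A missing config
# silently becomes defaults, and bad records hit the database later.
def load_config_silently(path: str) -> dict:
    try:
        return json.loads(Path(path).read_text())
    except Exception:  # bare catch-all: the failure mode disappears
        return {"db_url": "sqlite:///default.db"}

# Agent-legible version: a missing or malformed config is an error at
# startup, in front of a human, not two hours into production writes.
def load_config(path: str) -> dict:
    p = Path(path)
    if not p.exists():
        raise FileNotFoundError(f"config not found: {path}")
    config = json.loads(p.read_text())  # malformed JSON raises here
    if "db_url" not in config:
        raise KeyError("config is missing required key 'db_url'")
    return config
```

The second version is more friction at every call site. That friction is the point: it is the step where someone has to decide what a missing config actually means.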

Humans build up revulsion toward bad code. Agents don't. The codebase accumulates entropy until the agent itself can no longer navigate it — it starts missing files, writing duplicates, forgetting what already exists. You've built a system neither you nor the agent can reason about.

The agent-legible codebase

Armin's prescription: your codebase is now infrastructure. Design it for the agent the way you'd design infrastructure for operators.

Concrete rules Earendil is enforcing through lint:

  • Modularize the code flow, not just the components. The agent does its worst damage between the clearly defined steps — parsing types it shouldn't, stuffing things into state. Name the steps.
  • Don't fight the RL. If there's a canonical way to do a thing in this language, use it. The agent is trained on the canonical version.
  • No hidden magic. Raw SQL hides intent. An ORM shows it. If the agent can't see it, it can't respect it.
  • No bare catch-alls. Silent failure is how products rot.
  • One query interface for SQL. Don't make the agent grep the codebase to find where queries live.
  • Unique function names. Not for readability. For token efficiency — when the agent greps, it wants one hit, not twelve.
  • One UI primitives library, no raw inputs. Consistent styling, consistent behavior.
  • No dynamic imports. Source of truth should be static.
  • Erasable-syntax-only TypeScript. No transpile step. One source of truth between your code and the compiler.

Every one of these is friction. Every one of them is the point.

The part where your judgment gets woken up

The piece that made the whole talk click for me: Earendil built a PR extension that separates the review inputs. Mechanical bugs and style violations go straight back to the agent — those don't need a human. But a database migration, a permissioning change, a new dependency — those explicitly route to a human call-out that says "your brain should be on now."

Because if you miss them, you will regret them. And you will miss them. The machine's job, in this model, is to notice the moments your judgment is actually required, and to make sure you don't sleep through them.
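Earendil's extension isn't public, but the routing idea is simple to sketch. The path patterns below are illustrative assumptions, not their actual rules:

```python
# Heuristic review routing: mechanical issues go back to the agent;
# judgment-heavy changes are flagged for a human with a reason attached.
HUMAN_REVIEW_PATTERNS = {
    "migrations/": "database migration",
    "permissions": "permissioning change",
    "package.json": "new or changed dependency",
    "requirements.txt": "new or changed dependency",
}

def route_review(changed_files: list[str]) -> dict[str, list[str]]:
    routes = {"human": [], "agent": []}
    for path in changed_files:
        reasons = [why for pat, why in HUMAN_REVIEW_PATTERNS.items()
                   if pat in path]
        if reasons:
            # "Your brain should be on now" -- surface the why, not just a flag.
            routes["human"].append(f"{path} ({reasons[0]})")
        else:
            routes["agent"].append(path)
    return routes
```

A real version would look at diff content, not just paths, but even this crude split does the one job that matters: it names the moments where judgment is required.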

Why "friction is bad" is the wrong slogan

Large engineering organizations have long used SLOs — service-level objectives — as deliberately inserted friction. The point of an SLO is to force the team to stop and ask: Do I actually need this reliability? Do I have the headcount to run this? Should I ship this service at all?

The AI-coding era has encouraged us to treat all friction as waste. In physical systems, friction is what lets you steer. Without it, you don't go faster. You just stop being the one driving.

The single line to take from Armin and Christina: the friction is where your judgment lives. The shift isn't to stop using agents — it's to stop pretending the remaining ten percent of the work, the slow part, is the disposable part. It's the only part that's still yours.

Source: The Friction is Your Judgment — Armin Ronacher & Cristina Poncela Cubeiro, Earendil

Sunday, 26 April 2026

One-Chart Businesses: The Boring Way to Pick What to Build Next

Most people pick a business the wrong way. They start from what's trending on X, or what their friends are building, or what an AI demo made them feel. The better starting point is almost never that interesting: pull up a single chart, squint, and ask whether the line on it is going to bend in the next ten years. Sam Parr and Steph Smith on My First Million call these "one-chart businesses." The framing is simple — if a demographic, behavioral, or physical trend is already locked in and the chart makes it obvious, you've found a tailwind you don't have to fight.

Here's the chart that matters most right now: the global population curve split by age. The under-15 line is flat. The working-age line is flat. The 65-plus line goes from under 1 billion today to 2.5 billion. That's the tailwind. Everything that touches elder care rides it.

What the silver tsunami actually unlocks

The US Bureau of Labor Statistics already calls nursing the fastest-growing occupation between 2020 and 2030 — 275,000 new jobs. Assisted-living prices in the US have grown 31% faster than inflation and hit $54,000 a year on average. There are 31,000 facilities; four out of five are for-profit; half of the operators clear 20%+ annual returns on operating cost. That's not a tech margin, but on a real-estate-backed operating business, it's staggering.

Japan ran the experiment ten years earlier. Their silver tsunami produced akiya — over 8 million abandoned houses the government now hands out nearly free. It also produced nursing-home construction up 50% in a decade. Every country is running the same play on a delay.

The gap worth noticing: most assisted-living options are terrible. People already pay $20,000–$30,000 a month for the "good ones." Imagine the premium version — the place you'd actually feel good about sending your parent. That product doesn't really exist at scale. Build it and you own a category that's growing faster than anything AI is disrupting.

The physical-world businesses that don't fit in a pitch deck

A few more one-chart candidates from Steph Smith's database worth stealing:

Air quality. About half the world is exposed to roughly 5x the safe limit for PM2.5 particles. Delhi routinely hits an AQI of 450 — the equivalent of smoking 25 cigarettes a day. People notice water quality because someone showed them a glass of filtered-vs-unfiltered water. Nobody has done that for air yet. The company that turns "invisible threat" into "visible dashboard with a clear product answer" owns the category. Amazon data already shows AC furnace filters + air-quality monitors clearing $40M+ a month in revenue — and that's before anyone markets it seriously.

Sports that aren't pickleball. Pickleball is #1 on the fastest-growing-sports list. The more interesting names underneath are alpine touring, winter fat biking, off-course golf, and trail running. All of them have one thing in common: they bend a traditional sport toward something you can do socially, outdoors, in a smaller time window, without elite fitness. There's a whole "suburban triathlon" waiting to be branded — walk half a mile to a bar, drink two beers, play nine holes of golf. Out-of-shape middle-aged guys will buy anything with a finisher medal. The brand is already funny; the product design is the easy part.

Nerd neck. An entire generation is hunched over laptops and phones. Bryan Johnson made a video about it, Tim Ferriss keeps talking about Egoscue, Roger Frampton's "why sitting destroys you" TED talk has millions of views. Right now the product landscape is a few dorky straps (BetterBack) and expensive sports bras (Form). There's a lot of room for a posture product that doesn't look like a medical device.

The less-obvious lens: Ask Nature

Shaan Puri's favorite new rabbit hole from the episode is asknature.org — a database of how animals solve engineering problems. African darter feathers are radically water-resistant. Camel fur cools during the day and insulates at night. Otter fur is the blueprint half the wetsuit industry quietly stole. Biomimicry isn't a product category — it's a cheat code for brand stories. If you're building a clothing company and your marketing doesn't punch, the origin story is sitting on Ask Nature for free.

The breakup economy

A random stat from The Hustle: the average person spends $15,000 after a breakup. Divorce parties, breakup cakes, and "revenge body" kits are already getting organic search volume. If you already run a consumer meme account — F*ckJerry, Lad Bible, anything with 5M+ followers — you have free distribution for a viral physical product. Breakup vodka. A "send us your ex's stuff in this box and we'll burn it on camera" service. Products like this usually top out at $2–10M a year, but they run themselves on the meme tailwind.

The rule the whole conversation rests on

Every idea in that episode sits on top of a chart that is already committed. Demographics don't reverse. Pollution doesn't un-compound. Posture doesn't fix itself while screens get more engaging. The only real question is whether a marketer shows up to translate the chart into a product the average person can buy.

If you're choosing what to build in 2026, don't start from the newest model or the sharpest framework. Start from the dullest possible chart. The more inevitable the line, the less competition you'll fight for the next ten years.

Source: 9 Killer Business Ideas the Internet Hasn't Caught Up To in 2026 — My First Million with Steph Smith