
The AI Flippening Is Here


I work in AI-driven advertising. Specifically, I'm a Generative AI Engineer at Liftoff, where we build systems that generate and optimize ad creatives using AI. I mention this because the thing I'm about to describe isn't quite a prediction: it already happened in my industry years ago, and now it's happening in yours.

I'm calling it the AI Flippening: the point where AI stops being a tool you direct and starts being the system that directs you.

This already happened in some industries

In the early days of stock trading, a human decided what to buy, when to sell, and at what price. Today, 60 to 75% of all trading volume in US, European, and Asian equity markets is generated algorithmically, with zero direct human intervention. The machines are making the decisions, and humans are basically supervisors at this point.

In my world, advertising, the trajectory is the same. In 2013, about 24% of digital display ads were bought programmatically. By 2025, that number is approaching 90%. Now, nearly 97% of all new display ad dollars are programmatic. These are AI systems deciding, in real time, which ad to show to which person, at what price, billions of times per day. Soon, AI will not only be placing creatives, but creating them in real time, personalized for every single user.

People talk about "AI agents talking to AI agents" like it's a future thing. It's not. Ad exchanges have been doing this for over a decade. Algorithmic trading has been doing it for even longer. The flippening already happened in these domains. We just didn't give it a name because it was buried in infrastructure nobody sees.

The "Who's the Manager" test

Here's a simple framework you can use to figure out whether the flippening has happened in your workflow. Ask yourself three questions:

  1. Who sets the agenda? Do you decide what to work on, or does a system suggest/assign it?
  2. Who reviews whose output? Are you creating things and having them checked, or are you reviewing what AI created?
  3. Who has veto power? Can you override the system, or does the system's recommendation effectively become the default?

Let's look at software engineering, since I think it's the most immediate example for a lot of people.

A 2025 survey by Sonar found that 42% of committed code is now AI-assisted, and 72% of developers use AI coding tools daily. Developers now spend more time reviewing AI-generated code than writing code themselves.

For me personally, it's approaching 100%.

So the AI is writing the code, and the human is reviewing it. The engineer went from being the author to being the person who checks the author's work. Two years ago, you wrote code and occasionally asked an AI for help. Now, for a growing number of teams, the AI writes the first draft and your job is to approve, reject, or tweak. That's the flip.

Two numbers that tell the story

If the "who's the manager" test is the qualitative signal, there are two quantitative signals that I think are even more telling.

Signal 1: Decision volume

In any given domain, you can count how many decisions are made by humans versus how many are made by AI. In advertising, this crossed over years ago. Billions of ad placement decisions per day, made by algorithms. And on the human side, perhaps a few hundred decisions per media buyer, per day.

In software engineering, it's getting close. If 42% of committed code is AI-assisted, and developers are reviewing rather than writing, then the AI is originating more "micro-decisions" (what function to write, what variable to name, what pattern to follow) than the human is.

The pattern is the same everywhere: human-directed, then human-supervised, then human-out-of-the-loop. The transition between step two and step three is what I'm calling the flippening.
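
If you wanted to instrument this, the math is trivial; the hard part is deciding what counts as a "decision." Here's a toy sketch in Python of how you might classify where a workflow sits on that curve. The thresholds and the example counts are illustrative assumptions, not measurements:

```python
# Toy sketch: classify where a workflow sits on the
# human-directed -> human-supervised -> human-out-of-the-loop curve,
# based on the share of decisions the AI originates.
# The 0.5 / 0.95 thresholds are illustrative assumptions, not measurements.

def flippening_stage(ai_decisions: int, human_decisions: int) -> str:
    share = ai_decisions / (ai_decisions + human_decisions)
    if share < 0.5:
        return "human-directed"
    elif share < 0.95:
        return "human-supervised"  # AI originates, humans review and approve
    else:
        return "human-out-of-the-loop"

# Ad placement: billions of algorithmic decisions per day
# vs. a few hundred per human media buyer.
print(flippening_stage(ai_decisions=2_000_000_000, human_decisions=500))
# -> human-out-of-the-loop
```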

Signal 2: Dollar volume

Axios reported recently that some companies are now spending more on AI compute than on the employees using those tools.

Bryan Catanzaro, Nvidia's VP of Applied Deep Learning, told Axios: "For my team, the cost of compute is far beyond the costs of the employees."

Uber's CTO reportedly blew through the company's entire 2026 AI budget on token costs before the year was even halfway done. Jensen Huang has proposed giving engineers AI tokens equal to roughly half their base salary as a recruiting perk. One software engineer in Stockholm told the New York Times that he "probably spends more than his salary on Claude."

I think of this ratio as the flippening index: token spend divided by headcount spend, per team or function. When that number crosses 1.0, something fundamental has shifted. At that point, the team isn't really using a tool anymore. They're the human layer that signs off on what the AI produces. And the budget tells you that story before anyone in the org admits it out loud.
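
As a rough sketch of what computing that index might look like, assuming you can pull quarterly token spend and fully loaded headcount cost per team from your billing and finance exports (every team name and figure below is a made-up placeholder):

```python
# Flippening index: AI token/inference spend divided by headcount spend,
# per team. All names and numbers are hypothetical placeholders.

teams = {
    # team: (quarterly token spend in USD, quarterly headcount spend in USD)
    "ml-platform": (1_800_000, 1_500_000),
    "backend":     (  400_000, 2_200_000),
    "design":      (   30_000,   900_000),
}

for team, (token_spend, headcount_spend) in teams.items():
    index = token_spend / headcount_spend
    status = "flipped" if index >= 1.0 else "not yet"
    print(f"{team:12s} index={index:.2f} ({status})")
```

The absolute figures matter less than the trajectory: a team whose index climbs quarter over quarter is heading for the flip even if it hasn't crossed 1.0 yet.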

At a macro level, global AI cloud infrastructure spending is projected to hit $37.5 billion in 2026, with 55% going to inference (running models in production) rather than training. Inference surpassed training spend for the first time, which tells you that the money is flowing to using AI in production, not building it in the lab. The experimentation phase is over for a lot of these companies.

Corporate first, personal second

Corporations will cross the flippening threshold before individuals do, because they optimize for throughput and have the infrastructure to integrate AI deeply into workflows. IDC projects that 85% of executives expect employees to rely on AI agent recommendations for real-time decisions by 2026. Agentic AI systems are moving from handling individual tasks to running entire workflows.

But I actually think that the personal flippening is more interesting. Because corporations don't care about your sense of agency. You do (I hope).

Your phone is already a soft manager. It tells you what to look at (notifications), what to think about (algorithmic feeds), where to go (maps), and what to buy (recommendations). You technically have veto power over all of these. But in practice, how often do you override the suggestion? How often do you choose a restaurant without checking the algorithm's rating first?

The hard version of the personal flippening is when your AI agent books your calendar, triages your inbox, drafts your responses, and plans your day. And you just... show up where it tells you. At that point, the "who's the manager" test has a pretty clear answer.

The thing worth paying attention to

The defining feature of the flippening is that most people won't notice it happening. It's not going to be a dramatic moment or a headline. It's gradual. One day you realize that you haven't actually originated a decision in a while. You've just been approving or rejecting options that an AI puts in front of you.

And that's what makes it different from the sci-fi version of this conversation. The inversion is boring. It's already underway. And the people in the middle of it mostly don't see it.

I'm not going to tell you whether this is good or bad. I think it's just what's happening, and I think it's worth seeing clearly.

So, run the "who's the manager" test on your own workflow this week. Look at how you spent your day. Who set your agenda? Who created the first draft? Who decided what was worth your attention? Was it you, or was it your tools?

If you're in a leadership position, go check what your org spent on inference last quarter versus what it spent on the teams using those tools. That ratio is your flippening index. If it's approaching 1.0, the flip might already be underway, whether anyone's named it yet or not.

You might not like what you find.

P.S. I'm planning on moving to Substack, so subscribe for future posts!
