“Management has been vibe coding since forever.”
The line is funny because it lands. And it lands because most of us have seen the pattern for years without naming it.
For decades, a surprising amount of management work operated on the same basic mechanic:
- vague intent
- polished language
- diagrams full of confidence
- handoff to someone else for contact with reality
That was not always irrational. When implementation was expensive, the cost of building acted as a filter. Bad ideas died in budget rounds, planning loops, or delivery friction before they ever became visible artifacts.
AI changes that math.
This Is Not a New Problem
People now talk about vibe coding as if AI had invented a dangerous new class of behavior: fuzzy thinking, weak specifications, lots of output, not enough understanding.
But look at how large organizations built software for the last thirty years.
When they wanted more output, they usually did not get radically clearer in their thinking. They hired more people.
- more developers
- more layers
- more handoffs
- more tickets
- more translation
And yes, that produced more software.
It also produced more nonsense.
This is the part the current discourse often skips. We already had a system that could amplify unclear thinking into large volumes of expensive output. We called it scaling. We built entire delivery organizations around it.
The “someone else” who handled contact with reality used to be analysts, product managers, architects, engineers, agencies, or some unlucky combination of all five. A bit of market instinct at the top. A bit of PowerPoint. A bit of organizational theater. Then the real work started somewhere lower in the stack.
The expensive translation layer hid a lot of weakness.
Why So Much Enterprise Software Feels Dead
One reason this matters: many of the people building software in large organizations do not really know why they are building what they are building.
They move from company A to company B to company C. Different domain, different product, different customers, same basic routine: take requirements, ship tickets, move on.
That is not a moral failure. It is an industrial model.
But it has a cost. If the people doing the implementation have little connection to the user, the product, or the strategic goal, then scaling headcount mostly scales execution capacity. It does not automatically scale understanding. Which means it also scales waste.
This is why so much enterprise software feels like it was built by people who were never invited into the actual problem.
Why AI Feels Different
The current debate keeps getting stuck on a trivial frame: “Typing speed was never the bottleneck.”
True. But irrelevant.
AI is not a better keyboard. It is a reality accelerator.
You can turn fuzzy intent into artifacts almost immediately now:
- a prototype
- a product spec
- a workflow
- a UI
- code that either survives contact with reality or doesn’t
That speed does two things at once.
First, it exposes what was already weak. A vague product idea no longer hides for months behind planning. A hand-wavy requirement reaches a prototype before the meeting series is even over. The time between idea and embarrassment collapses.
Second, it creates absurd leverage for the people who already know how to think clearly.
That is the real multiplication effect. Not everyone gets transformed equally. Multipliers do not work like that.
5 x 1 = 5.
5 x 10 = 50.
5 x 100 = 500.
That is simple math, but it explains a lot.
Teams that were mediocre before do not become brilliant because a model can write code faster. They often just become mediocre at higher speed.
But the people who could already cut through noise, find the right problem, make pragmatic tradeoffs, and move things to a real outcome now get a brutal force multiplier.
This is why the shift feels larger than “productivity.” It is not only compressing execution time. It is compressing the time between idea and truth.
Vibe Coding Can Work
I do not even think the right reaction is to sneer at vibe coding itself.
Direction-first building has always existed. Plenty of successful products started with instinct, roughness, and strong people figuring things out in motion. The problem was never that ambiguity exists. The problem was always who could turn ambiguity into something real and whether the system helped or hindered them.
That is why “you only make mistakes ten times faster” is such an incomplete criticism.
Yes, of course you can make mistakes faster with AI.
You could also make mistakes faster by hiring twice as many people. Enterprises have been doing exactly that for years.
The relevant question is not whether AI can amplify mistakes.
The relevant question is: what else does it amplify, and for whom?
Stripe Shows the Better Answer
This is why Stripe’s Minions post matters so much.
Not because “wow, AI writes code now.” That part is already table stakes.
What is interesting is what they are actually optimizing for.
They are not optimizing for more code. They are optimizing for less wasted developer attention. And Part 2 makes the operating model even clearer.
- less routing
- less manual triage
- less human energy spent on deterministic work
- more attention reserved for judgment, approval, product sense, and tradeoffs
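The list above is an organizational principle, not an API, but it can be sketched as a routing rule. This is a purely illustrative toy, assuming nothing about Stripe's actual systems: the `Task` fields and the `route` function are my own hypothetical names, chosen only to show the attention-first split between deterministic work and judgment calls.

```python
from dataclasses import dataclass

@dataclass
class Task:
    title: str
    deterministic: bool   # mechanical work with a checkable outcome
    needs_judgment: bool  # tradeoffs, product sense, approvals

def route(tasks: list[Task]) -> tuple[list[Task], list[Task]]:
    """Send deterministic work to an agent queue; reserve human
    attention for anything that requires judgment."""
    agent_queue: list[Task] = []
    human_queue: list[Task] = []
    for t in tasks:
        if t.deterministic and not t.needs_judgment:
            agent_queue.append(t)
        else:
            human_queue.append(t)
    return agent_queue, human_queue

tasks = [
    Task("bump dependency versions", deterministic=True, needs_judgment=False),
    Task("choose pricing model", deterministic=False, needs_judgment=True),
    Task("regenerate client from API schema", deterministic=True, needs_judgment=False),
]
agents, humans = route(tasks)
print(len(agents), len(humans))  # 2 1
```

The point of the sketch is the asymmetry: the agent queue can grow without consuming attention, while the human queue is deliberately kept small and high-judgment.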
That is a fundamentally different organizational model.
Old scaling logic said: if you want more output, add people.
The new logic says: if attention is the scarce resource, remove everything that burns it unnecessarily and give the highest-leverage people systems that multiply their impact.
That is not a small tooling change. That is a different view of how value gets created.
The Part That Still Stays Human
This does not make human judgment less important. It makes it more concentrated.
The bottleneck is still:
- finding the right problem
- choosing a pragmatic path
- making tradeoffs that hold up under pressure
- fitting a solution into the larger system
That has always been the real work.
What changes now is the distribution of leverage. The few people who can do that well gain an absurd amount of force. The people who mainly existed to route, soften, translate, and repackage lose relative importance fast.
That is why the resistance is so emotional. Not because nothing is changing. Because a lot is changing, and some of it cuts straight through established status.
The New Test
The old move was to sound convincing long enough for organizational complexity to cover the gap.
The new test is simpler:
Can your idea survive contact with reality when the cost of testing it collapses?
Can you give direction that is clear enough to produce something usable?
Can you tell the difference between substance and polished intuition before a team burns attention on it?
That is the real shift.
Not faster typing. Faster truth.
And if you want to understand what that looks like when a serious company actually embraces it, read Stripe’s Minions post and Part 2 carefully.
That is not hype. That is a blueprint for where the leverage moved.
Stripe Resources
If you want the practical version, start here:
- Minions: Stripe’s one-shot, end-to-end coding agents — the core post on how Stripe runs more than 1,000 minion-produced pull requests per week: stripe.dev/blog/minions-stripes-one-shot-end-to-end-coding-agents
- Minions: Part 2 — the follow-up with more of the operating model, constraints, and engineering details: stripe.dev/blog/minions-stripes-one-shot-end-to-end-coding-agents-part-2
- Configuring Stripe using Terraform and AI agents — useful because it shows the same broader pattern: use agents inside auditable, reviewable systems instead of treating them like magic: stripe.dev/blog/ai-agents-terraform-stripe-infrastructure
Related: Denkleistung: Die knappste Ressource (Thinking capacity: the scarcest resource).