LEWIS C. LIN AMAZON.COM BESTSELLING AUTHOR

The Hidden Cost of AI Speed


Siddhant Khare, who builds AI agent infrastructure at scale, wrote something that stopped me cold:

“AI reduces the cost of production but increases the cost of coordination, review, and decision-making. And those costs fall entirely on the human.”

He’s mostly right. And understanding why matters if you want to use AI without burning out.

The Asymmetry Problem

AI generates fast. Humans evaluate slow.

When an AI writes 200 lines of code in 30 seconds, you haven’t saved 2 hours of work. You’ve shifted it. Now instead of writing code while building a mental model, you’re reverse-engineering someone else’s mental model—except there isn’t one. Just statistical patterns.

Creating code and reviewing code use different mental resources. Research shows evaluative tasks cause decision fatigue while generative tasks enable flow states. Six hours of evaluation will drain you more than six hours of creation, even if you “produce” more.

Where the Hidden Costs Land

Context switching at scale: Before AI, you spent a day on one problem. Deep focus. With AI, you touch six problems because each “only takes an hour.” But context switching costs 15-20 minutes of cognitive overhead per switch. The AI doesn’t pay this tax. You do.
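The back-of-envelope math here is worth making explicit. A minimal sketch, using the article's 15-20 minute figure (midpoint assumed) and six one-hour problems:

```python
# Illustrative arithmetic for the context-switching tax described above.
# The 15-20 minute per-switch figure comes from the text; the midpoint
# and the six-problem day are assumptions for illustration.

SWITCH_COST_MIN = 17           # midpoint of the 15-20 minute range
problems_touched = 6           # each "only takes an hour"
work_min = problems_touched * 60

# Five switches between six problems, each paid by the human, not the AI.
overhead_min = (problems_touched - 1) * SWITCH_COST_MIN
total_min = work_min + overhead_min

print(f"Nominal work: {work_min} min")
print(f"Switching overhead: {overhead_min} min "
      f"({overhead_min / total_min:.0%} of the total)")
```

Under these assumptions, nearly a fifth of the day goes to overhead that never shows up in any task estimate.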

Review asymmetry: You can skim a colleague’s code because you know their patterns. AI code requires line-by-line scrutiny. Every pattern could be hallucinated. Every confident solution could fail in production. This isn’t paranoia—it’s rational risk management.

Micro-decision accumulation: Is this variable name acceptable? Is this error handling good enough? Should I regenerate or fix manually? Each decision is small. Make 200 in a day and you’re mentally fried despite “not doing anything hard.”

But It’s Not One-Sided

The costs don’t fall entirely on humans:

AI-generated tests reduce review burden significantly. You evaluate through executable specs, not code inspection.

Rapid prototyping reduces coordination overhead. Show stakeholders a working prototype in an hour instead of debating architecture for three meetings.

AI reviewing AI catches obvious issues. Have Claude review Copilot’s output for security holes and style issues before you look at it.
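"Evaluate through executable specs" can be made concrete. A minimal sketch, where `slugify` is a hypothetical stand-in for AI-generated code: rather than reading it line by line, you pin down the behavior you actually need as assertions.

```python
# Sketch of reviewing via executable specs instead of line-by-line reading.
# `slugify` is a hypothetical stand-in for an AI-written function.

import re

def slugify(title: str) -> str:
    # Pretend this body came from an AI assistant.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# The spec you actually care about, written (or AI-drafted, then skimmed) by you.
# If these pass, you've verified behavior without reverse-engineering the code.
assert slugify("The Hidden Cost of AI Speed") == "the-hidden-cost-of-ai-speed"
assert slugify("  Margins!!  ") == "margins"
assert slugify("") == ""
```

The tests are short enough to skim with confidence, which is exactly the property the generated code itself lacks.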

The coordination costs already existed. Code review, architectural alignment, quality assurance—these aren’t new. The question is whether AI increases them or redistributes them.

Consider:

Yes, review burden doubled. But total time halved. The cost shifted, not increased.
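The shifted-versus-increased distinction is easier to see with numbers. A minimal sketch with illustrative figures (the specific hours are assumptions, not from the article):

```python
# Illustrative numbers only: 4 hours to write by hand plus 1 hour of review,
# versus near-instant AI generation with the review burden doubled.

manual_write, manual_review = 4.0, 1.0   # hours, assumed
ai_write, ai_review = 0.1, 2.0           # review burden doubled, assumed

manual_total = manual_write + manual_review   # 5.0 h
ai_total = ai_write + ai_review               # 2.1 h

assert ai_review == 2 * manual_review    # the cost shifted toward review...
assert ai_total < manual_total           # ...but total time still fell
```

The review line item looks worse in isolation; the total is what matters.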

Your skill level matters. Experts find AI output frustrating because it’s below their standards. Juniors often learn from AI output. The coordination cost varies by expertise.

What Actually Works

Risk-adjusted review. Match scrutiny to stakes: skim low-risk glue code, but give anything touching production, data, or security the line-by-line treatment.

Separate generation from evaluation. Don’t prompt and review simultaneously. Generate AI output in one session, review in another. The mode-switching between “creative” and “critical” is cognitively expensive.

Accept 70% and ship. If you can’t fix it in 20% of the time it would take to write from scratch, rewrite it manually.
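The 20% threshold can be written down as a decision rule. A minimal sketch; the estimates are inputs you supply, and only the 20% cutoff comes from the text:

```python
# The "accept 70% and ship" heuristic as a decision rule.
# Estimates are yours; the 20% threshold is from the text.

def keep_ai_output(est_fix_hours: float, est_scratch_hours: float) -> bool:
    """Keep and fix the AI draft only if fixing costs <= 20% of a rewrite."""
    return est_fix_hours <= 0.2 * est_scratch_hours

print(keep_ai_output(est_fix_hours=0.5, est_scratch_hours=4.0))  # True: fix it
print(keep_ai_output(est_fix_hours=2.0, est_scratch_hours=4.0))  # False: rewrite
```

The point of making it explicit is to stop relitigating the choice per task; that micro-decision tax is exactly what the article warns about.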

Build prompt libraries. Stop prompting from scratch. Version control your best prompts and reuse them.
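One way to sketch this: store prompts as versioned templates with placeholders and fill them per task. In practice these would be text files under git; a dict keeps the example self-contained, and all names and templates here are illustrative assumptions.

```python
# Minimal prompt-library sketch: versioned templates with {placeholders}.
# In practice, these would live as text files under version control.

PROMPTS = {
    ("code_review", "v2"): (
        "Review the following {language} diff. Flag security issues, "
        "hallucinated APIs, and style problems:\n\n{diff}"
    ),
}

def load_prompt(name: str, version: str, **slots: str) -> str:
    return PROMPTS[(name, version)].format(**slots)

msg = load_prompt("code_review", "v2", language="Python", diff="+ print(x)")
print(msg)
```

Versioning matters because a prompt that works is an asset: you want to know which revision produced good output, and to roll back when an "improvement" doesn't.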

Track how you feel, not just output. Log tasks, AI usage, time spent, and energy level (1-10). After two weeks, you’ll see patterns. Use AI where it genuinely helps, skip it where it drains you.
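The log itself can be trivial. A minimal sketch, with field names and sample rows that are purely illustrative: one row per task, then an average energy score split by AI usage to surface the pattern.

```python
# Sketch of the two-week energy log: rows of (task, used_ai, minutes,
# energy_after on a 1-10 scale). All sample data is made up.

from collections import defaultdict
from statistics import mean

log = [
    ("boilerplate CRUD endpoint",     True,   25, 8),
    ("review AI-written migration",   True,   90, 3),
    ("design session, no AI",         False, 120, 7),
]

by_mode = defaultdict(list)
for task, used_ai, minutes, energy in log:
    by_mode["with AI" if used_ai else "without AI"].append(energy)

for mode, energies in by_mode.items():
    print(f"{mode}: avg energy {mean(energies):.1f}")
```

Even this crude split tends to show what the article predicts: generation-heavy AI tasks feel fine, review-heavy ones drain you, and the averages make the pattern visible.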

