LEWIS C. LIN AMAZON BESTSELLING AUTHOR

4 Levels of AI Fluency in Product

Most product teams have a problem they don’t talk about: everyone uses AI, and almost no one can show you what changed because of it.

Ask a PM how AI has shaped their work and you’ll hear a version of the same answer: it saves me time. Ask them to be specific and things get vague fast. No before and after. No measurable outcome. Just a hazy sense that writing feels easier.

That’s not AI fluency. That’s AI theater.

The rubric most teams are missing

Zapier built a framework that maps AI usage across four levels — from “AI as spell-check” to “AI re-engineers how work happens.” Applied to product, it’s the sharpest leveling tool I’ve come across.

The difference between each level isn’t intelligence or effort. It’s whether someone treated AI as a faster version of the old workflow — or as an invitation to redesign it entirely.

Here’s what each level actually looks like, with real examples and the interview question that surfaces each one.

Level 01

Unacceptable

The work looks the same before and after AI entered the picture.

The Vanishing Edit

Writes a PRD, pastes it into Claude, asks "does this look good?" — accepts whatever comes back. The PRD isn't actually better. AI was a mirror, not a thought partner.

Summary as Analysis

Uses AI to summarize user research transcripts. Doesn't notice when the summary misses the key insight. Presents it as their own analysis in the strategy review.

One Tool, Every Job

Can't tell you which model is better for which task. Uses the same one for everything, the same way, every time. No iteration, no refinement, no awareness that the tool has limits.

The interview tell

"Walk me through the last time AI changed a decision you made."

Long pause. A vague answer about saving time on writing. No specific outcome. No before and after. AI is a faster keyboard.

Level 02

Capable

Real, repeatable systems. They can teach another PM their workflow.

The Prompt Library

Has structured prompts for recurring work: one for spec drafts from a problem statement, one for stress-testing assumptions, one for release notes from a changelog. Reused and refined over time.

Insight at Scale

Feeds 200 NPS responses into a model and asks it to surface tension between power users and new users — something that would've taken days manually. Finds what the spreadsheet couldn't.
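The mechanical part of that workflow is simple: group responses by segment and hand the model one prompt that forces a comparison. A minimal sketch, assuming responses are dicts with hypothetical `segment` and `text` fields (the segment labels and prompt wording are my assumptions, not a prescribed format):

```python
# Sketch of the "Insight at Scale" pattern: batch survey responses by
# segment and build a single prompt that asks the model to contrast them.
# The actual model call is left to whichever client you use.

def build_contrast_prompt(responses):
    """responses: list of dicts like {"segment": "power", "text": "..."}"""
    by_segment = {}
    for r in responses:
        by_segment.setdefault(r["segment"], []).append(r["text"])

    sections = []
    for segment, texts in sorted(by_segment.items()):
        joined = "\n".join(f"- {t}" for t in texts)
        sections.append(f"## {segment} users (n={len(texts)})\n{joined}")

    return (
        "Compare the groups below. Where do power users and new users "
        "disagree about the product? Cite specific responses.\n\n"
        + "\n\n".join(sections)
    )

prompt = build_contrast_prompt([
    {"segment": "power", "text": "The new editor slows me down."},
    {"segment": "new", "text": "The editor finally makes sense."},
])
# `prompt` is what you'd send to the model of your choice.
```

The point isn’t the code; it’s that the question is baked into the structure. The model is asked for tension between segments, not a summary.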

5 Directions by Friday

Generates five working solution directions from a single brief and gives stakeholders something to react to in the same week — not the same sprint. Speed of divergence is a real skill.

The interview tell

"Can you walk me through your AI workflow — specifically enough that I could copy it?"

They can. They explain tool choice, why they structure prompts a certain way, and where it breaks down. They've thought about this. It's a system, not a habit.

Level 03

Adoptive

They're not just using AI — they're building infrastructure other people rely on.

The Ticket-to-Roadmap Pipeline

Built a pipeline where support tickets flow into a model, get tagged by feature area and churn risk, and feed directly into roadmap prioritization. The team ships in response to real-time signal, not quarterly surveys.
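The aggregation step of a pipeline like that can be sketched in a few lines. Assume each ticket has already been tagged by a model with a feature area and a churn-risk score (the tagging call itself is stubbed out here; the field names and weighting are my assumptions):

```python
# Roll model-tagged support tickets up into a ranked list of feature
# areas that a roadmap review could start from.
from collections import defaultdict

def rank_feature_areas(tagged_tickets):
    """tagged_tickets: list of {"feature": str, "churn_risk": float in 0-1}"""
    totals = defaultdict(lambda: {"tickets": 0, "risk": 0.0})
    for t in tagged_tickets:
        totals[t["feature"]]["tickets"] += 1
        totals[t["feature"]]["risk"] += t["churn_risk"]

    # Weight ticket volume by average churn risk, so ten at-risk
    # customers outrank a hundred mild complaints.
    ranked = sorted(
        totals.items(),
        key=lambda kv: kv[1]["tickets"] * (kv[1]["risk"] / kv[1]["tickets"]),
        reverse=True,
    )
    return [(feature, stats["tickets"]) for feature, stats in ranked]
```

How you weight volume against risk is a product judgment, not a modeling one, which is exactly why this level is about infrastructure rather than prompting.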

New Skills Unlocked

Runs their own SQL to validate hypotheses. Builds functional dashboards to demo concepts before engineering touches them. Six months ago, both required someone else.

Always-On Intelligence

An automated digest monitors feature adoption metrics daily and surfaces anomalies before the weekly review meeting. Their team stops being surprised by the data.
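The anomaly check behind a digest like that can be as plain as a z-score against a trailing window. A sketch, with the threshold and window length as assumptions you’d tune per metric:

```python
# Flag any metric whose latest value sits more than `threshold`
# standard deviations from its trailing daily history.
from statistics import mean, stdev

def find_anomalies(history, latest, threshold=3.0):
    """history: dict of metric -> list of past daily values;
    latest: dict of metric -> today's value."""
    flagged = []
    for metric, values in history.items():
        if len(values) < 2:
            continue  # not enough history to estimate spread
        mu, sigma = mean(values), stdev(values)
        if sigma == 0:
            continue  # flat series: z-score undefined
        z = abs(latest[metric] - mu) / sigma
        if z > threshold:
            flagged.append((metric, round(z, 1)))
    return flagged
```

Pipe the flagged list into whatever posts the digest. The value isn’t the statistics; it’s that the check runs every day without anyone remembering to look.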

The interview tell

"What can your team do today that it couldn't do six months ago — specifically because of how you use AI?"

They give you a clear before/after. They name the timeline. And someone else on their team uses something they built. That last part is the signal.

Level 04

Transformative

The org works differently because of them. You could swap out every tool and the thinking would remain.

PMs Shipping to Production

Not just prototypes — real features in the product. They've made the judgment that certain low-risk, high-frequency features don't need the full engineering queue. They can articulate the tradeoffs of that decision.

Discovery Replaced, Not Sped Up

Quarterly research cycles are gone. A continuous AI-driven feedback loop — behavior, support tickets, in-app surveys — updates a living opportunity map the team reviews weekly. Decisions happen faster and with more signal.

Judgment, Not Information

They're no longer the person who writes specs. They're the person who sets the judgment framework the AI drafts against. Their role looks materially different than it did 12 months ago — and they can show you the diff.

The interview tell

"How has your team's structure or process changed because of how you work — not just gotten faster?"

They name specific structural changes: resourcing, sprint rhythm, what "done" means, who owns what. And they name the tradeoffs they accepted to get there. They're not claiming transformation — they're describing it.

Why this matters for hiring

The traditional PM interview probes judgment, communication, influence. Those still matter. But they don’t tell you whether someone will compound over time in an AI-native environment — or plateau.

A candidate who sits at Level 01 today won’t move to Level 03 just because the tools get better. The gap is mindset, not access. They’d need to fundamentally change how they relate to their own workflow — and that rarely happens without deliberate pressure.

The candidate at Level 03 or 04 has already done that work. They broke their own habits, rebuilt their process, and made something other people use. That instinct doesn’t stop when they join your team.

The question to add to every PM loop: “What can your team do today that it couldn’t do six months ago — specifically because of how you use AI?”

The quality of that answer places people faster than almost anything else. No before and after, no examples, no clear mechanism? You’re looking at a Level 01. A clean timeline, a named artifact, and someone else who uses it? You’re looking at someone who’s going to keep compounding.

Hire for that. Evaluate for that. The gap only widens from here.

