If you've been following my content, you've probably noticed I've barely mentioned AI until now. There's a good reason for that: people still matter more than any tool. Great product work comes from clear thinking, strong teams, and solid processes—not the latest tech hype.
But AI is the elephant in the room. Everyone's talking about it, clients keep asking about it, and ignoring it isn't realistic. So let's be honest: when AI works, it's genuinely helpful. When it doesn't—and that's more often than the evangelists admit—it's a frustrating time sink that creates more work than it saves.
I'm not anti-AI at all. In fact, I make heavy use of it in my day-to-day work—yes, even in drafting this very article you're reading now.
I'm tired of watching good teams get pulled into the vortex: hours spent prompting, debugging outputs, or cleaning up hallucinations, all while the real work stalls.
The Biggest Danger: Garbage In, Garbage Out—On Steroids
AI makes it ridiculously easy to generate volume. But if your inputs are fuzzy or your thinking isn't sharp, you just end up with polished garbage faster. No tool fixes bad judgment, and AI is no exception: it's only as good as the person wielding it.
Dan Maccarone, a fellow fractional CPO in my network at Go Fractional, just wrote a spot-on piece that aligns perfectly with how I think about this: Dan Maccarone's Seven Rules of AI Hygiene
(Definitely worth a read—Dan's rules are a great reminder to treat AI with discipline, not blind enthusiasm. Shout out to Dan for saying what needs to be said.)
My Pragmatic 5-Step Filter for AI in Product Work
Here's my take, building on that spirit: a straightforward 5-step filter I use with clients to figure out where AI actually adds value—and where it's better to skip it entirely.
1. Audit your current workflow first
Map out your core processes (discovery, prioritization, user research, etc.). Ask: Where are the real bottlenecks? If it's unclear goals or misaligned teams, AI won't help—it'll just amplify the mess.
2. Define clear success criteria
Before touching AI, decide what "better" looks like. Faster? More accurate? Be specific. Vague prompts lead to vague outputs.
3. Start small and contained
Test in low-risk areas, like summarizing interview notes or brainstorming variants. Set boundaries: for example, a human reviews every output, and AI handles no more than 20% of any given task.
4. Measure the ROI ruthlessly
Track time saved vs. time spent fixing (see the sketch after this list). If it's not a net positive after two weeks, kill it. No ego.
5. Default to human judgment
AI for acceleration, never replacement. People spot nuances, ethics, and context that tools miss.
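To make step 4 concrete, here's a minimal sketch of the tally I have in mind, in Python. Everything in it is hypothetical: the task names, the minute counts, and the `Task` helper are illustrations, not client data. The point is just that the bookkeeping is simple enough to fit in a spreadsheet or a few lines of code.

```python
# Hypothetical two-week AI pilot log (all numbers invented for illustration).
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    minutes_saved: int   # time the AI draft actually saved you
    minutes_fixing: int  # time spent prompting, reviewing, and cleaning up

# Log every AI-assisted task during the pilot window.
pilot_log = [
    Task("summarize interview notes", minutes_saved=45, minutes_fixing=10),
    Task("draft PRD section", minutes_saved=20, minutes_fixing=35),
    Task("brainstorm naming variants", minutes_saved=30, minutes_fixing=5),
]

net_minutes = sum(t.minutes_saved - t.minutes_fixing for t in pilot_log)

# The rule from step 4: not net positive after two weeks -> kill it.
verdict = "keep" if net_minutes > 0 else "kill"
print(f"Net: {net_minutes:+d} minutes over the pilot, verdict: {verdict}")
```

A spreadsheet works just as well; the discipline that matters is logging the fix-up time honestly, because that's the cost everyone underreports.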
In my client work, this filter has delivered real wins, like speeding up research analysis without losing insight, and quick cuts, like dropping AI for spec writing: the output was overly verbose, and by the time I'd crafted a prompt precise enough to be useful, 80% of the spec was already written.
Who knows what the future holds? AI will get better, and I might eat some of these words later. But right now, my focus is on getting things done with the resources we have today—people first, tools second.

