Six months ago, I wrote a post called The Elephant in the Room where I took a deliberately cautious stance on AI. People first, tools second. Default to human judgment. Fair enough.
I wasn't entirely correct.
Since then, AI has gone from a side tool to the center of how I operate. I recently onboarded a client and used Claude to ingest their internal documents, meeting transcripts, and operational workflows. A ramp-up that would have taken weeks took about 40 hours — and I had working POCs of core concepts before I even started the formal design phase.
So yes: I'm all in. More than I expected to be a few months ago, if I'm being honest.
And that's part of what makes this so important to talk about — because the temptation to let AI take the wheel is real. It's seductive. It's the poisoned apple. The outputs look so polished, the speed is so intoxicating, that even when you know you need to stay in the driver's seat, it's genuinely hard to resist handing over the wheel. I'm aware of this, and I've still fallen for it.
But here's the thing. The deeper I go — the more I use AI, the more I rely on it, the more I push it — the more convinced I've become that my original argument wasn't wrong. It was incomplete. AI doesn't replace the need for strategic fundamentals. It makes them non-negotiable.
The Rabbit Hole You Don't See Coming
Here's something I'd bet most of you have experienced but haven't said out loud: You opened a Claude or ChatGPT thread with a vague sense of "I need to figure out my strategy for X." Then you spent the next four hours in that thread. You explored competitors. You brainstormed features. You pressure-tested positioning. You asked follow-ups, went deeper, branched out. It felt incredible. It felt like progress.
And then you came up for air, looked at what you actually had, and realized: it was noise. A sprawling, impressive-looking pile of ideas that didn't connect to any central thesis, didn't advance any specific decision, and didn't get you any closer to shipping.
You didn't strategize. You explored. And exploring everything is functionally the same as exploring nothing.
I've done this myself — caught myself going down rabbit holes I didn't need to go down, spending cycles on exploration that felt productive but wasn't. A recent client described the same experience almost verbatim. They'd spent a full day in an AI session convinced they were making strategic progress, only to realize they'd been brainstorming on top of an assumption they hadn't validated. The foundation wasn't there, so the house was built on sand.
When Friction Was a Feature
Before AI, exploration was expensive. Research took days. Prototyping cost weeks of engineering time. Even writing a feature spec was a slog. That friction was annoying — but it was also an essential filter. When exploring is hard, you're forced to prioritize before you go deep. You can't afford to chase every idea, so you instinctively ask, "Is this direction worth the effort?"
AI removed that cost. Exploration is now essentially free — you can research a market, draft a spec, brainstorm 40 feature variants, and generate a competitive landscape, all before lunch.
That sounds like progress. It's actually dangerous.
Because the thing that used to force you to prioritize — the friction, the effort, the natural constraint of limited hours — was the only brake pedal most teams had. Remove it, and you accelerate into every direction simultaneously. You're moving fast. You're producing output. And nothing is compounding.
The 15-Page PRD That Nobody Read
I've seen this play out in a very specific, recognizable way. A junior product manager generates a PRD in 20 minutes using AI. It's 15 pages long. It looks professional. Detailed requirements, acceptance criteria, edge cases, the works.
But it came from a vague prompt — the PM hadn't done the hard thinking about why this feature matters, what assumption it tests, or whether it should exist at all. The AI happily filled in the gaps with plausible-sounding details.
Here's where it gets worse. Those 15 pages could probably be reduced to a page of bullets and a couple of charts. But no human is going to read 15 pages of AI-generated requirements. So what do the developers do? They feed it into their AI to summarize it. Now you've got AI digesting AI, and nobody in the chain has used the most valuable asset they have — their own brain. The PM didn't think critically about the feature. The devs didn't think critically about the spec. And halfway through the sprint, someone realizes the requirements don't connect to any actual user problem — because nobody ever stopped to ask whether they should.
The PM didn't save time. They moved the work — from upfront thinking (which is hard) to downstream cleanup (which feels productive, but isn't). And worse, they dragged everyone else into the rabbit hole with them.
This isn't the AI's fault. The AI did exactly what it was asked: generate a comprehensive PRD. The problem is what it wasn't asked: "Challenge my assumptions. Tell me where this is weak. Tell me if this feature should exist in the first place."
Your AI Is a Yes-Man (And It Never Gets Tired)
There's something the "AI for product" conversation isn't acknowledging enough: AI is a natural yes-man. Ask it "Is this a good product strategy?" and it will tell you all the ways it's a good product strategy, maybe with some gentle "considerations" at the end that feel like they're there for balance rather than honesty. Getting genuinely critical feedback requires deliberate prompting — telling it to poke holes, to argue the other side, to be harsh. Most people don't do this. So the tool that's supposed to help you think better ends up confirming whatever you already believed.
And unlike a human colleague, AI doesn't get tired. It doesn't look at the clock and say, "Hey, we've been at this for four hours — maybe we should step back and ask if we're even working on the right problem." It will keep going 24/7 (or at least until you run out of credits). If a colleague were pulling you down tangent after tangent, at some point you'd say, "Whoa — we've gone off into the weeds." You'd push back. You'd redirect.
But with AI, that instinct shuts off. The experience is closer to being absorbed in a video game at 2am — you know you should stop, but the next level is right there, the feedback loop is perfectly calibrated, and pulling away feels like giving up momentum. Just because AI keeps going doesn't mean it's right, and just because it's confident doesn't mean you should be. The judgment call to stop is still entirely on you.
The Fundamentals Are the Brakes
All the strategic work you used to do before AI existed — the thesis, the validation, the kill criteria, the risk analysis (the same work I wrote about across the 0→1 series) — is not optional now that you have AI. It's the only thing keeping you on the road.
Your thesis is the filter that tells AI what to explore and what to ignore. Your kill criteria force you to stop when the AI would happily keep going. Your risk analysis tells you which rabbit holes are worth going down — and which ones are just interesting distractions.
When those foundations are locked in, AI becomes devastating in the best possible way. I've used it to comb through hundreds of Reddit threads to pressure-test a thesis, craft sharper survey questions and interview scripts, and check every downstream decision against that thesis — keeping me honest, especially when I deliberately prompt it to fight back against its own sycophantic tendencies.
The hard thinking upstream makes everything downstream 10x faster and 10x more precise. Without it, the AI is just a very articulate hallucination machine with no off switch.
What Actually Shipped?
If you're honest with yourself: how many hours have you spent in an AI thread this month that didn't start with a clear thesis, a specific question, or a defined decision to make?
And how much of what came out of those sessions actually shipped?
The AI hype machine is full of "Look what I built in 30 seconds!" content. You don't see many posts titled "Look at the full day I wasted in Claude because I didn't know what I was looking for." But I'd bet that second experience is far more common — and I'm certain it's far more expensive.
AI didn't change the game. It changed the speed of the game. And if you don't know where you're going, speed just means you get lost faster.
The hardest part? You're going to open Claude tomorrow morning. So am I. And the pull to skip the hard thinking and just start prompting will be exactly as strong as it was today. Knowing this doesn't make you immune to it — it just means you can't pretend you didn't see it coming.
If this hit close to home — join Closing the Loop for a weekly, unvarnished look at the strategic decisions that actually move the needle. Subscribe here.