In the first post of this series, I made a blunt claim: if you can’t articulate a thesis statement, you don’t have a strategy — you have a guess.
- Who it’s for.
- What urgent pain it solves.
- Why that pain matters now.
- Why your approach is meaningfully different.
When founders realize they can’t fill in those blanks confidently, they almost always make the same next move:
“Fine. We’ll talk to customers.”
And then, two weeks later, they say:
“We did. People liked it.”
This is where teams feel licensed to move fast. So what’s wrong with that approach?
Later, when things don’t land, those failures get labeled as execution or alignment problems.
The problem started earlier.
Anecdotes ≠ Evidence
A few conversations can do something useful. They can tell a team whether an idea is coherent, whether the language lands, whether the problem sounds familiar.
That’s an important preliminary step — emphasis on preliminary. It doesn’t get a team to a defensible product thesis.
What anecdotes can’t do (even though founders and stakeholders routinely ask them to) is carry the weight of high-stakes decisions.
Anecdotes are inherently slippery:
- They’re easy to over-interpret
- They’re easy to select for (without realizing you’re doing it)
- And they’re unusually good at confirming prior beliefs
That last part is the real problem.
Early on, the founder’s instincts are too present in the room. The framing, assumptions, and enthusiasm all quietly shape what comes back.
So teams leave conversations thinking:
“They validated it.”
What often happened is simpler:
“They were willing to agree with the interpretation they were given.”
That’s not the same thing.
What “validation” actually has to do at this stage
Validation isn’t a feeling. It’s a standard.
If a team is going to build — hire, invest runway, commit architecture, set expectations — it needs signal that is:
- Representative (from the right audience, not the nearest people)
- Consistent (collected in a way that allows patterns to emerge)
- Disconfirming-capable (able to prove assumptions wrong, not just right)
- Specific (about a concrete pain, not generic enthusiasm)
- Defensible (something that can be explained to a skeptical stakeholder without hand-waving)
If the research doesn’t meet that bar, it isn’t reducing risk; it’s producing confidence. And confidence is not the goal.
Decision quality is.
Your Opinion Doesn’t Matter
This is the point where the lens shifts.
At this stage, your opinion about what the product should be is not the signal.
What matters is whether what you’re proposing actually resonates with the people you’re building for — in their language, in their context, under their constraints.
When you proceed on low-grade signal, the product that gets built reflects your assumptions about the solution — not the market’s reality.
It shows up later in very familiar ways:
- MVP scope bloats because every feature “might matter”
- Positioning stays vague because no one ever pinned down who feels the pain most
- Differentiation becomes story-based instead of constraint-based
- Early traction is hard to interpret because success was never defined
You don’t notice the mistake when you’re still in slide decks and prototypes.
You notice it when the first build doesn’t land, the second build feels like a rebuild, and your team starts shipping faster in circles.
At that point the narrative becomes:
“We have an execution problem.”
That’s how it usually gets labeled.
But what actually broke first was the foundation those decisions were sitting on.
Bounded value: a decision test for evidence quality
Here’s the forcing question I use when a founder tells me they’ve “validated” an idea:
What did you learn that you could confidently defend in a skeptical room?
And the companion test:
What can you now confidently rule out in your product definition?
If your customer work didn’t materially constrain your product's:
- audience & pain points
- scope
- differentiation
…it didn’t reduce risk.
It just made it feel safer to proceed.
Why good research feels expensive
Rigorous research has a property casual conversations don’t:
It forces teams to set their own assumptions aside long enough for the problem space to push back.
That’s uncomfortable — especially for founders, whose job is to have conviction.
But early conviction without defensible signal is ultimately just a house of cards.
And yes — doing this well takes more effort than talking to a few friendly people.
That’s the point.
If “validation” is cheap, it probably isn’t doing its job.
Where this gets harder next
Once teams start demanding decision-grade evidence, the next constraint shows up quickly:
You can’t evaluate signal without some notion of who would pay, and how the product could realistically reach them.
Without that, even rigorous research can validate interest while quietly avoiding viability.
Next: Why early product bets need a rough economic shape, even before the details are clear.
This post is part of a longer series on early-stage product bets: how teams decide what to build, what to stop, and when the cost of being wrong starts to matter.
If that’s useful, you can subscribe below.
