Large Language Models (LLMs) don’t just hallucinate — they reflect us. They mirror the same shortcuts, errors, and distortions that plague human cognition. When a model confidently gives a wrong answer, it’s easy to scoff. But if you’ve ever made a product decision based on gut feel, a half-remembered data point, or the loudest voice in the room… you’ve hallucinated too.

This article explores the behavioural overlap between product managers and the systems we increasingly rely on. Understanding this uncanny symmetry doesn’t just make us better prompt writers — it makes us better decision-makers.

Incomplete Priors, Overconfidence, and the Bias We Share

At their core, LLMs are statistical guessers trained on incomplete priors. They predict what’s likely, not what’s true. But here’s the twist — so do we.
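
To make the "likely, not true" point concrete, here is a minimal sketch with toy numbers (not drawn from any real model) of how next-token prediction picks the most statistically plausible continuation, with no check for factual accuracy along the way.

```python
import math

# Toy logits for continuations of "The capital of Australia is ..."
# (illustrative numbers only, not taken from a real model).
def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

candidates = ["Sydney", "Canberra", "Melbourne"]
logits = [3.1, 2.8, 1.5]  # assume "Sydney" simply appears more often in training text

probs = softmax(logits)
best = max(zip(candidates, probs), key=lambda pair: pair[1])
print(best)  # ('Sydney', ~0.51): frequent, plausible, and wrong
```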

Every product manager is a walking bundle of heuristics: anchoring on the first user quote, overfitting to a single stakeholder’s opinion, converging too early on a feature solution because the sprint clock is ticking. These aren’t just bugs in our brain’s software — they’re evolutionary features that prioritise speed over certainty.

LLMs exhibit the same tendencies. They anchor on prompt phrasing. They hallucinate plausible-sounding facts. They converge prematurely when fine-tuned on narrow datasets. What looks like “AI failure” is often just our own fallibility, scaled and reflected back at us. It’s not just that the model is wrong — it’s that it’s wrong like us.

Prompting from Inside the Bias

When product managers use GenAI, we tend to treat it like a neutral tool. Ask a question, get an answer. But prompting is not neutral — it’s shaped by the same blind spots that lead us astray in stakeholder meetings and roadmap sessions.

We embed assumptions in our prompts. We fall for “availability bias” by over-prioritising recent issues. We load up context with unnecessary constraints or ask leading questions without realising. Then we interpret the model’s output through the same lens — cherry-picking the parts that validate our view and discarding the rest.
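
As an illustration, here is a minimal sketch (the churn scenario and both prompts are hypothetical) contrasting a leading prompt that smuggles in its own conclusion with a reframed one that asks the model to surface counter-evidence before we interpret anything.

```python
# A leading prompt: the conclusion is baked into the framing,
# so the model is rewarded for agreeing with us.
leading_prompt = (
    "Our users are clearly churning because onboarding is too long. "
    "Summarise the support tickets that prove this."
)

# A more neutral reframing: same data, but the model is asked to
# surface competing themes, counter-examples, and missing evidence.
neutral_prompt = (
    "Here are last quarter's support tickets. List the three most common "
    "churn-related themes, give one counter-example for each, and note "
    "what evidence is missing before we could act on any of them."
)
```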

There’s a dangerous loop here: human bias feeds model bias, which in turn reinforces human overconfidence. The more articulate the model sounds, the more likely we are to mistake fluency for truth. We’re not just debugging code anymore — we’re debugging cognition.

Practising Behavioural Hygiene with GenAI

So what’s the fix? Not a technical one — a behavioural one.

Before delegating any decision to a model, we need to pause and check the mirror. What’s the emotional context of the prompt? What assumptions am I baking in? Am I framing this problem out of fear, fatigue, or desire for speed?

Think of it as “pre-prompting hygiene”, a short ritual before engaging with AI:

- Name the emotional state you are prompting from: urgency, fear, fatigue, or genuine curiosity.
- Write down the assumptions you are baking into the framing before you type anything.
- Ask what answer you are hoping for, and whether you would accept its opposite.
- Strip out leading language and unnecessary constraints, then prompt.
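
As a rough illustration, here is a minimal sketch of what that ritual might look like written down (the checklist wording and the review_prompt helper are illustrative, not a prescribed method).

```python
# Hypothetical pre-prompting checklist; adapt the questions to your own context.
PRE_PROMPT_CHECKLIST = [
    "What emotional state am I prompting from: urgency, fear, fatigue, curiosity?",
    "What assumptions am I baking into the framing?",
    "What answer am I hoping for, and would I accept its opposite?",
    "What constraints am I including that the model does not actually need?",
]

def review_prompt(prompt: str) -> str:
    """Print the checklist alongside the draft prompt, then return it unchanged."""
    print("Before sending this prompt, answer honestly:")
    for question in PRE_PROMPT_CHECKLIST:
        print(f"  - {question}")
    print(f"\nDraft prompt:\n  {prompt}")
    return prompt

review_prompt("Summarise why our onboarding flow is failing.")
```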

This doesn’t just produce better prompts. It builds a muscle for meta-cognition — thinking about our thinking. In an age of intelligent tools, that’s the only real leverage we have left.

Conclusion

Working with LLMs is not just a technical task — it’s a behavioural mirror. The same patterns that cause us to overcommit to a flawed MVP or ignore a counterintuitive insight are the ones that lead models to fabricate, overstate, or mislead.

The future isn’t about building models that think better than us. It’s about becoming humans who think better because of them.

If we want to raise the ceiling on what GenAI can offer, we need to raise the floor of our own awareness first.
