“Garbage in, garbage out” isn’t just a warning about poor inputs. It’s a behavioural truth. When you write with AI, you’re not just prompting a tool — you’re prompting from your mind.
You feed it a prompt, expecting clarity. It gives you noise. You add more words. Still noise. You tweak the tone, drop some jargon, rephrase the ask — but somehow, the result is still a little… off.
It’s tempting to blame the model. But what if the real issue isn’t the output but the thinking behind the input?
We like to imagine prompting as a technical skill — a matter of syntax, specificity, and creative phrasing. But that’s only half the story. Prompting is a behavioural act. Every artefact we feed into the machine — every half-formed story, every padded epic, every vague theme — carries traces of how we think under pressure. How we write when we’re rushing. How we plan when we’re unsure. How we communicate when we’re tired.
And GenAI? It doesn’t correct these habits. It scales them.
It’s a mirror, not a mentor. And that makes it one of the most revealing — and risky — tools we’ve ever introduced into our workflow.
This article isn’t a how-to on prompt engineering. It’s a behavioural diagnosis.
A look at what GenAI is reflecting back to us — and what we can do about it.
The Invisible Echo in Every Prompt
Our tools reflect us. And GenAI, for all its cleverness, is just a polished mirror.
When a prompt fails to deliver — producing soupy summaries, awkward stories, or design suggestions that miss the mark — our first instinct is often to blame the model. But most of the time, it’s not hallucination. It’s exposure.
Consider Nisha. A mid-level PM under pressure to “speed things up,” she pastes an old Jira ticket into ChatGPT:
“As a user, I want login functionality so I can access my account.”
No context. No clarity. No constraints. What comes back is predictably bland — a boilerplate to-do list with no thinking behind it.
She adds it to the sprint. Two weeks later, the engineer is confused.
“What kind of login? Social? MFA? Magic link?”
Nisha can’t remember. The AI didn’t hallucinate. It simply reflected her midnight shortcut — a behavioural pattern of rushing, assuming, and outsourcing clarity.
When you write prompts from default behaviours — procrastination, vagueness, template overuse — the AI doesn’t fix them. It scales them. And teams pay the price later.
The Feedback Loop You Didn’t Know You Built
The real danger isn’t just that AI reflects your habits. It’s that it learns from them.
Oli, a delivery lead, recently set up a slick automation pipeline. His GenAI tool generates user stories directly from themes tagged in Confluence. It’s fast. Efficient. “Smart.”
But there’s a catch: the source artefacts are trash.
Initiatives written in pub-stained PDFs. Bloated epics from old marketing decks. Stories with zero acceptance criteria.
Now, every AI-suggested ticket is bloated, vague, and misaligned — a perfect copy of its chaotic parents.
The team stops trusting the tool. Designers rewrite everything in Figma. Engineers shrug and guess.
Nobody complains, but everyone works around it.
This isn’t a tooling problem. It’s a behavioural one. Bad thinking baked into inputs, now ritualised by AI.
You’re not just prompting — you’re training a loop. And if that loop is built from cognitive clutter, it’ll feed itself forever.
Writing as Design: Reframing the Prompt Ritual
But here’s the flip side: if GenAI reflects patterns, it can also reinforce better ones.
Sam, a lead designer, runs a quiet experiment.
During backlog refinement, his team co-writes three prompt variants for a messy story:
- One copy-pasted from Jira
- One rewritten using structured UX prompts
- One framed as a conversation with a real user
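For illustration, here is a hedged sketch of what those three variants might look like. The wording is hypothetical, not Sam's actual prompts, and the feature is invented for the example:

```text
Variant 1: raw Jira copy-paste
  "As a user, I want to manage notifications so I can control
  notifications."

Variant 2: structured UX prompt
  "Rewrite this story for a notification-settings screen.
  Persona: a busy project manager. Constraint: must work on mobile.
  Output: a user story plus three acceptance criteria."

Variant 3: framed as a conversation with a real user
  "You are Priya, a project manager drowning in alerts. In your own
  words, what do you wish the notification settings let you do?
  We'll turn your answer into a story together."
```

The third variant works not because it is cleverer, but because it forces the team to imagine a real person before writing the artefact.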
Only the third yields a response that feels useful. But that’s not the point.
The real value is in the conversation that follows.
They realise they’ve been using artefact templates as a crutch — hiding decisions behind boilerplate. They spot phrases no one understands (“optimise flow,” anyone?).
And they start rewriting more stories together — not just for the sprint, but to teach the AI how they think.
Prompting becomes a shared ritual. Not just a task, but a team mirror.
GenAI isn’t a shortcut. It’s a signal amplifier. If your signals are fuzzy, the results will be, too. But with behavioural intention — with clarity, constraints, and conversation — it becomes more than a tool. It becomes a teacher.
Conclusion: It’s Not the Model. It’s You.
GenAI isn’t here to save you from unclear thinking. It’s here to show you where it starts.
If your stories are vague, your prompts will be, too. If your artefacts are rushed, the output will be as well.
But if your prompts are curious, grounded, and designed to provoke insight — the system responds in kind.
Writing with AI is not a shortcut to clarity. It’s an invitation to find it.
Behavioural Principles
Writing With AI Won’t Fix Broken Thinking
- Prompts are mirrors, not mentors. They reflect your current mental state — not your best intentions.
- AI scales patterns — even the broken ones. If you feed in vague or misaligned artefacts, the model will propagate them at speed.
- Templates hide thinking as often as they scaffold it. Overused structures can create the illusion of clarity while embedding confusion.
- Behaviour leaks into the backlog. Artefacts written in haste, stress, or doubt carry those traits into the team's workflow — and the AI's output.
- Prompting is a behavioural act, not just a technical one. How we write with AI reveals how we process ambiguity, pressure, and responsibility.