Generative AI is often described as a rival author, a machine waiting to replace creativity with imitation. But that frame is misguided: what’s really at stake is not authorship but legibility. The most valuable role AI plays isn’t in writing for us, but in helping us write to each other more clearly.
Think of prompts the way you’d think of Agile artefacts. A story or a ticket isn’t written to indulge the author; it exists so someone else can pick it up and act on it. In the same way, a prompt isn’t a one-way instruction. It’s a message, and like all messages, its impact depends on how it’s received.
The transition from prompt to partnership begins when we stop seeing AI as a servant and start seeing it as a co-creator that thrives on clarity.
Prompts as Artefacts, Artefacts as Messages
Every written artefact is more than a record of intent — it is a message designed for someone else to interpret. Agile stories, epics, and tickets all exist in this space: they are not private notes but social contracts, written with the expectation that others will act upon them.
The same principle applies to prompts in generative AI. A prompt is not simply an input; it is an artefact that sits between author and interpreter, shaping the quality of collaboration that follows.
When prompts are treated as personal shorthand, they often inherit the same flaws that plague poorly written tickets: vague language, self-referential framing, and ego-driven embellishment that clouds the real intent. What should be a shared foundation of understanding instead becomes a puzzle for others to solve. The relational cost of this ambiguity is wasted time, misinterpretation, and friction.
Now consider the effect when AI is introduced as the reader of that artefact. The prompt becomes not just an instruction but a bridge. Its clarity determines whether the AI generates noise or produces something the whole team can rally around. And just as a clear story empowers engineers, a clear prompt empowers AI to return results that enhance rather than hinder collaboration.
Scenario
A PM pours half an hour into a Jira ticket that reads like a stream of consciousness, sprawling across several paragraphs:
- Nested clauses
- Vague intentions
- Caveats stacked inside parentheses
Handed to a team as-is, it would almost guarantee confusion. Instead, the PM pastes it into an AI assistant and asks:
“Can you restructure this into a concise user story with acceptance criteria — and ask me questions where the intent is unclear?”
The AI reformulates the story into three crisp lines: a role, a need, and an outcome. It also flags ambiguities — “What kind of user are you describing here?” and “What should happen if this condition fails?”
The PM’s intent is preserved, the ego-driven clutter is gone, and the missing context is surfaced. The result isn’t just a cleaner story — it’s a collaborative artefact, one that reflects a dialogue rather than a monologue.
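To make the pattern concrete, here is a minimal sketch of that workflow, assuming the OpenAI Python SDK with an API key in the environment; the model name, ticket text, and system instruction are illustrative placeholders, not a prescribed formula.

```python
# A minimal sketch of the PM's workflow, assuming the OpenAI Python SDK
# (pip install openai). The ticket text and instructions are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

raw_ticket = """Users keep getting stuck somewhere around checkout (maybe the
address step?) and we should probably fix that before the next release, ideally
without touching the payment flow (unless we have to)..."""

system_prompt = (
    "You are a collaborator, not an order-taker. "
    "Restructure the ticket below into a concise user story "
    "(role, need, outcome) with acceptance criteria. "
    "Where the intent is unclear, do not guess: "
    "list explicit clarifying questions for the author instead."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any capable chat model will do
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": raw_ticket},
    ],
)

# The reply contains the reframed story plus the questions the PM still needs to answer.
print(response.choices[0].message.content)
```

The detail that matters is the last line of the system instruction: the model is given explicit permission to ask rather than invent.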
Styles of Prompting = Styles of Relationship
The way we write prompts doesn’t just affect outputs — it shapes the relationship we form with AI. A blunt, directive prompt establishes a command-and-control dynamic: the AI becomes a passive order-taker. A collaborative prompt, on the other hand, frames the exchange as an ongoing dialogue.
This distinction mirrors the dynamics of any cross-functional team: one style closes space for interpretation, the other opens it.
Collaboration thrives when we create artefacts that allow multiple interpretations to coexist, then invite refinement. With people, that means leaving room for engineers, designers, or testers to bring their own expertise to the table. With AI, it means structuring prompts so the model can ask for clarification rather than fabricating answers. A command says “do it my way.” A collaborative prompt says “help me make this legible.”
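To make the contrast tangible, here are the two registers side by side as plain prompt text; the wording is a sketch, not a formula.

```python
# Two framings of the same request. Illustrative wording only.

directive_prompt = (
    "Write the user story for the checkout fix. "
    "Three lines. No questions."
)

collaborative_prompt = (
    "Here are my rough notes on the checkout fix: <notes>. "
    "Draft a user story that captures the intent, and before you finalise it, "
    "ask me about anything that is ambiguous or missing rather than inventing details."
)
```

The first closes the exchange; the second keeps it open.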
Scenario
A designer and an engineer are preparing for a sprint. The designer talks in loose metaphors: “make it flow, make it feel alive.” The engineer counters with specifics: “three states, error handling, edge cases.”
Each is right, but they’re speaking past one another. They feed their notes into an AI assistant with the instruction:
“Reframe this into a shared story. Keep both perspectives, and ask where context is missing.”
The AI stitches the two voices into a single artefact. It preserves the designer’s sense of flow while embedding the engineer’s structure. Crucially, it also asks: “Which platform is this intended for?” and “What happens if the flow breaks?”
Instead of flattening nuance, the AI exposes gaps and invites clarity. What emerges is not just a compromise but a bridge — an artefact both sides can build on without ego or friction.
Designing for the Reader, Not the Writer
Most artefacts fail not because the idea is weak, but because they are written for the comfort of the author rather than the needs of the reader. In Agile teams, this is the difference between a story that reflects the writer’s shorthand and one that guides a cross-functional team.
In the world of prompting, the same rule applies: clarity comes when we stop thinking about what we want to say and start designing for what the other side needs to understand.
The trap of ego is writing to sound smart, inspiring, or comprehensive. But legibility is the higher goal. For a human reader, that means actionable clarity. For AI, it means structured context and explicit signals that reduce ambiguity. A strong prompt doesn’t just state an intent — it makes space for questions that expose blind spots and force the writer to confront their own assumptions.
Scenario
A founder drops two pages of poetic vision into Slack at 2 a.m. The language is rich, emotional, and sweeping, but no one on the team can turn it into something shippable.
A colleague pastes it into an AI assistant with the instruction:
“Reframe this as a sprint-ready story with context, constraints, and testable outcomes — and ask me questions where details are missing.”
The AI distils the founder’s vision into a concise set of stories, each with clear outcomes and acceptance criteria. It also pushes back: “What’s the priority order? Who is the target user? Which constraints are non-negotiable?”
The founder’s fingerprints are still visible in the reframed artefact, but the indulgent prose has been pared back to what the team actually needs. What remains is not just a reflection of the founder’s voice, but a shared asset that strips away ego and surfaces the essentials for collaboration.
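Wrapped up as a reusable helper, that instruction might look something like the sketch below, again assuming the OpenAI Python SDK; the function name, section headings, and model are hypothetical, not a fixed schema.

```python
# A reader-first prompt template, sketched with the OpenAI Python SDK.
# The helper name and section headings are hypothetical.
from openai import OpenAI

client = OpenAI()

def reframe_for_readers(draft: str) -> str:
    """Turn an author-centric draft into a reader-centric, sprint-ready artefact."""
    prompt = (
        "Reframe the text below as sprint-ready user stories.\n"
        "For each story, include: context, constraints, and testable outcomes.\n"
        "Finish with an 'Open questions' section listing anything you would "
        "need to ask the author before a team could start work.\n\n"
        f"Text:\n{draft}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The “Open questions” section is the point: it is where the missing priorities, users, and constraints surface before the sprint starts, not during it.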
Conclusion: Stripping Ego, Scaling Legibility
Prompts, like Agile stories, live in ecosystems of interpretation. They aren’t fixed instructions; they are evolving artefacts that change hands, contexts, and meaning. Their value lies not in how clever they look when written, but in how clearly they enable others — whether humans or AI — to act.
Treat AI as an order-taker, and you risk reproducing the same dysfunctions that haunt poorly written tickets: vagueness, bias, and silence where questions should be. Treat AI as a collaborator, and the relationship changes. A well-framed prompt not only produces cleaner output but also invites the AI to surface what’s missing — the gaps, the contradictions, the unspoken assumptions. Clarity emerges not from ego, but from dialogue.
The future of work doesn’t hinge on AI replacing creativity. It hinges on AI stripping away ambiguity so that creativity can travel faster, cleaner, and further across a team. From prompt to partnership, what scales is not the ego of the author but the legibility of the artefact. And legibility is what builds trust, accelerates momentum, and turns fragmented voices into collective progress.
The choice is ours: keep writing for ourselves, or start writing for the relationships our words must serve.
Relational Observations
Stripping Ego and Scaling Legibility
- Prompts act as artefacts in shared ecosystems: not private notes but social contracts, shaping interpretation across humans and AI alike.
- Ego-driven writing undermines collective clarity: when prompts or tickets reflect the author’s shorthand, others inherit confusion and friction.
- Styles of prompting establish relational dynamics: directive prompts reduce AI to an order-taker; collaborative prompts invite dialogue and context.
- Reader-first framing strengthens shared legibility: anticipate questions to surface missing context and preserve the essentials for collective use.