There’s a new kind of stage fright in the job market. It’s not the nerves before an interview, or the anxiety of waiting for feedback — it’s the quiet humiliation of knowing your words might never reach a human being. The first audience for your CV is no longer a hiring manager; it’s a machine, hungry for keywords, indifferent to nuance, and programmed to separate “relevance” from “waste.” To be seen, you have to sound like code.
And so begins the modern paradox of employment: we use AI to outsmart AI. Candidates rewrite their résumés through ChatGPT, optimising phrasing for Applicant Tracking Systems they barely understand. The goal isn’t deception — it’s access. Somewhere on the other side of the filter, a recruiter is equally weary, staring at a dashboard of algorithmically sorted profiles, knowing full well that some of the best people may never make it through. No one wants to read 500 CVs to fill one role. But no one builds a career by gaming pattern-matching software either.
This is not a morality play; it’s an empathy exercise. The machines are only doing what we trained them to do — prioritise efficiency over curiosity. The real challenge is ours: to find a way of using these tools without losing our voice in the process. Because in an age where clarity itself has become performative, staying human might just be the rarest skill of all.
Scenario: The Algorithmic Waiting Room
Situation
He’s been at it for weeks — tailoring cover letters, rewriting CV lines, and tracking applications in a spreadsheet that’s starting to feel like a confessional.
The advice is everywhere: “Optimise for the ATS,” “Use action verbs,” “Mirror the job description.”
So he feeds his words into ChatGPT, watching the machine polish and repackage his experience into the kind of corporate tone he’s always tried to avoid. It reads well. Too well.
Impact
When the first few applications vanish into silence, he tells himself it’s normal — everyone gets ghosted.
But then comes the one role that fits perfectly: the product job that seems written for him. He sends the AI-tuned version and waits. Days turn to weeks. Nothing.
Somewhere, an algorithm decided that his optimised self wasn’t optimised enough.
Tension
He starts to wonder if he’s invisible, or worse — indistinguishable. Every iteration of “tailored” language makes him sound more like everyone else.
He’s playing the game by the rules, but the rules reward conformity. The very tools meant to help him stand out are sanding him down.
Approach
He tries another tactic: honest language, unfiltered tone, human cadence. A risk, maybe even a protest.
He hits submit, knowing full well it might never reach human eyes. He tells himself it’s worth it anyway — that somewhere a recruiter is just as tired of the noise.
Resolution
The rejection email arrives within minutes.
Automated, polite, efficient. He doesn’t take it personally, but he feels it personally — that flicker of futility when you realise the system works, just not for you.
He closes the tab and whispers the thought we’ve all had lately: “Maybe the machine’s doing the hiring now.”
I should probably admit — that story wasn’t invented. It happened to me.
The application, the algorithmic silence, the rejection that landed faster than the page could reload. It’s oddly humbling when a machine rejects you before a person ever gets the chance to. And yet, I can’t bring myself to be angry. No recruiter wants to wade through five hundred CVs to find one human fit; automation is a mercy as much as a filter.
But what unsettles me is how familiar the logic feels. It’s the same pattern I wrote about in The Rise of Productivity Vibe-Coding: performance as proof. We used to optimise for meetings and metrics — now we optimise for algorithms.
Both sides are adapting to survive, not deceive. And somewhere between the candidate's desperation to be seen and the recruiter's struggle to cope, humanity has been optimised out of the loop.
The Machine as Audience
In his mathematical theory of communication, Claude Shannon described information as a signal transmitted through noise. The clearer the signal, the less chance of distortion. Somewhere along the way, recruitment flipped that equation: clarity became risk, and noise became strategy. The more a candidate tunes their language for the algorithm, the less human the message sounds — but the more likely it is to pass through the filter. The irony is brutal: authenticity lowers your odds of being heard.
For recruiters, the problem is just as corrosive. They never asked to become radio engineers, deciphering who’s real through static. Automated screeners were meant to save time, not flatten tone. What was once a craft — reading between the lines, spotting potential in imperfection — has become a process of filtering by syntax. The “audience” isn’t a hiring manager anymore; it’s a model trained to recognise patterns that often penalise originality.
And yet, we can’t simply blame the machines. They only amplify what we feed them. We built systems that value pattern compliance over human discernment, and now both sides are speaking in code. Recruiters crave clarity but receive mimicry; candidates crave empathy but get silence. It’s not malice — it’s entropy. The signal of sincerity is still there, just buried under the noise of optimisation.
Ethical Prompting as Translation
The temptation, once you realise the machine is your first reader, is to perform for it. To inflate verbs, sprinkle keywords, polish until there’s nothing left to hold onto. But that’s the same trap that broke productivity culture — mistaking polish for progress. The behavioural correction isn’t to reject AI outright; it’s to learn how to speak through it rather than for it.
Ethical prompting starts from a different question: What truth am I trying to make legible?
Used well, tools like ChatGPT can act as translators — taking your human intent and expressing it in ways a model can parse without distorting your meaning. It’s the linguistic equivalent of subtitles: you’re not rewriting the film, just making sure it’s understood. The prompt becomes an act of empathy — not for the algorithm, but for the exhausted recruiter on the other side of it.
In practice, that means resisting the urge to mimic job-description jargon or inflate achievements until they sound fictional. Instead, you use AI to clarify, to structure, to surface your genuine value in machine-readable form. The difference is intent. Manipulation hides weakness; translation reveals strength. The most ethical prompts are those that make you more visible as you are — not more marketable as someone else.
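What that intent looks like can be made concrete in the prompt itself. Here is a minimal sketch in Python of a "translation" prompt — the function name, wording, and rules are illustrative, not any vendor's API or a guaranteed recipe:

```python
# Hypothetical sketch: framing the request as translation, not embellishment.
# The constraints travel with the prompt, so the assistant is asked to
# clarify the candidate's own claims rather than invent new ones.

def build_translation_prompt(raw_experience: str, job_description: str) -> str:
    """Compose a prompt that asks for machine-readable clarity
    while keeping every factual claim identical."""
    return (
        "Rewrite the experience below so a screening system can parse it.\n"
        "Rules:\n"
        "- Keep every claim factually identical; do not add achievements.\n"
        "- Use plain, concrete verbs; avoid jargon the candidate has not earned.\n"
        "- Mirror the job description's vocabulary only where it is true.\n\n"
        f"Job description:\n{job_description}\n\n"
        f"Experience, in the candidate's own words:\n{raw_experience}\n"
    )

prompt = build_translation_prompt(
    "I kept a messy data pipeline alive and taught two juniors to run it.",
    "Seeking a data engineer with mentoring experience.",
)
print(prompt)
```

The point of the sketch is where the effort goes: not into inflating the experience, but into stating the rules that keep the rewrite honest.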
Alignment as a Shared Problem
In AI research, the alignment problem describes what happens when a system pursues the wrong goal perfectly. The machine does exactly what we asked — but not what we meant. Recruitment isn’t so different. We’ve built systems that optimise for convenience, compliance, and measurable fit, not for curiosity, growth, or potential. The algorithms are aligned to the process, not the purpose.
For candidates, this misalignment feels deeply personal. You do everything “right”: the phrasing, the formatting, the keywords — yet the outcome feels wrong. For recruiters, it’s equally exhausting: dashboards full of near-identical profiles, risk-averse scoring systems, and the nagging sense that great people are slipping through because the metrics can’t see nuance. Both sides experience the same frustration from opposite ends of the same funnel. The system isn’t malicious; it’s merely obedient.
The tragedy of alignment is that it rewards surface accuracy over human truth. We’ve taught our machines to chase correlation, not character — and in doing so, we’ve trained ourselves to do the same. The way out isn’t rebellion; it’s realignment. Candidates can use AI to express truth more clearly; recruiters can use it to interpret context more generously. Alignment, done right, isn’t about control — it’s about compassion encoded as clarity.
Conclusion
Somewhere between the rejections, the rewrites, and the quiet hope that this next application might break through, I realised I didn’t just want to be seen — I wanted to be heard. That’s why I started writing again. The blog became a way to keep my voice intact while the rest of the professional world was being flattened by automation. If the job market had turned into an audition for algorithms, then this space would be my rehearsal room — a place where I could still sound like myself.
It reminded me that clarity is not a luxury; it’s an act of survival. When systems reward polish over truth, the only sustainable countermeasure is integrity — the discipline to remain coherent amid distortion. That applies to job seekers and recruiters alike. Candidates can use technology to express, not disguise. Recruiters can use it to understand, not exclude. Between them lies a fragile but vital compact: empathy as design, clarity as practice.
Authenticity isn’t nostalgia — it’s competence. And in an age where the first impression is often judged by code, retaining your own voice might just be the most tactical decision you ever make.
Behavioural Principles
The Integrity Compact
- Optimise without erasing yourself. Use technology to clarify your intent, not to overwrite your identity.
- Help the recruiter hear you faster. Clarity is kindness; precision is empathy disguised as efficiency.
- Respect the human on both sides of the filter. Recruiters automate for survival; candidates adapt for access. Compassion bridges the gap.
- Authenticity is efficiency. Signal beats noise when language serves truth over compliance.
- Integrity scales better than imitation. The algorithm may choose who's seen, but character decides who's remembered.