When AI Becomes the New Gatekeeper

AI is sold as innovation, but too often it just industrialises dysfunction. When hiring filters replace human judgement, both candidates and managers lose.

Generative AI is being sold as transformation, but most deployments aren’t rebuilding systems — they’re bolting a new machine onto an old frame. The result is not clarity but acceleration: the same dysfunction, just delivered faster. Institutions frame this as innovation, but to users it feels like a slide from conversation to compliance, from human judgement to machine filtration.  

We’ve been told technology levels the playing field. In practice, the playing field gets paved over, its contours ignored, its stories flattened. AI’s role in relational dynamics isn’t to repair trust — too often, it’s to replicate the same biases under the sheen of modernisation.

Scenario: The AI Recruitment Filter

Situation

A job seeker spends hours tailoring an application, aligning skills and achievements with the advertised role. The company has installed an AI-powered screening tool to “streamline” recruitment. 

Impact

The system rejects the application in seconds — not because of competence, but due to formatting quirks or keyword mismatch. A qualified candidate is invisible. The hiring manager, meanwhile, never sees a profile that could have been a perfect fit. 
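
To make the failure mode concrete, here is a minimal sketch of the kind of naive keyword screening such a tool might apply. The keywords, threshold, and CV wording are hypothetical illustrations, not drawn from any real recruitment product.

```python
# Hypothetical sketch of a naive keyword filter, for illustration only.
# The keywords, threshold, and CV text below are invented assumptions.

REQUIRED_KEYWORDS = {"stakeholder management", "agile", "python", "budget ownership"}
PASS_THRESHOLD = 3  # applications matching fewer keywords are auto-rejected


def screen_cv(cv_text: str) -> bool:
    """Return True if the CV clears the keyword threshold."""
    text = cv_text.lower()
    hits = sum(1 for keyword in REQUIRED_KEYWORDS if keyword in text)
    return hits >= PASS_THRESHOLD


cv = """
Led cross-functional delivery teams, owned departmental budgets,
and coached engineers in Python and iterative ways of working.
"""

# The experience is plainly there, but phrased differently, so only
# one exact keyword ("python") matches and the application is rejected.
print(screen_cv(cv))  # False
```

Nothing in that logic assesses competence; it only rewards vocabulary that happens to mirror the job advert.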

Tension

What should be a personal transaction — a candidate’s story judged by a peer — is replaced by a mechanical one. Both sides lose. The candidate is denied a chance to be heard. The manager is denied the best talent. 

Approach

Leadership hails the tool as proof of efficiency: time-to-hire metrics improve, dashboards glow green. But these metrics are a classic case of Goodhart’s Law — once the measure becomes the target, it stops reflecting value. Speed is optimised, substance eroded. 

Resolution

The candidate moves on, disillusioned. The company congratulates itself on filling the role quickly, never realising the deeper cost. Dysfunction is not just preserved — it’s scaled. 

From Personal to Mechanical Transactions

Recruitment is supposed to be relational: a dialogue about value, fit, and potential. But once AI sits at the gate, stories are no longer read — they are parsed. Human narratives turn into compliance exercises. 

This shift is more than inconvenient. It's a violation of what makes the hiring process meaningful. Automation bias leads hiring teams to accept algorithmic decisions as objective, even when the flaws are trivial to spot. A CV discarded for lack of a keyword is treated as a justified rejection, because the system said so. Responsibility slips, trust evaporates.

Dysfunction at Scale

When patchwork AI enters a system, it doesn’t fix the cracks — it multiplies them. One recruiter making a hasty judgement is frustrating; thousands of AI-driven rejections, invisible and unaccountable, are corrosive. 

Merton’s self-fulfilling prophecy plays out here: the filter is designed to find “ideal candidates,” and by enforcing its own logic, it ensures only those who mimic the machine’s expectations are ever seen. The institution applauds the efficiency, while quietly narrowing its own horizons. 

This is the Dustpan Delusion in action: symptoms swept away, root causes ignored. Dysfunction doesn’t just persist — it becomes industrialised. 

What Reform Requires

AI isn’t inherently corrosive. The problem is when it’s treated as a bolt-on patch, not a scaffold for upstream redesign. Sociotechnical Systems (STS) theory reminds us that technology cannot be divorced from the human contexts it operates within.

Applied with ethical responsibility and human compassion, AI could support structured interviews, audit for bias, and help managers surface unseen potential. But that requires intent. Without it, institutions fall into institutional isomorphism — adopting the same filters as their peers, performing “innovation” while entrenching exclusion. 
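
As one illustration of what “audit for bias” can mean in practice, the sketch below applies the four-fifths (adverse impact) guideline to screening pass rates across applicant groups. The group labels and counts are invented for the example; a real audit needs proper data governance, statistical care, and legal review.

```python
# Illustrative adverse-impact check on screening outcomes (four-fifths rule).
# Group labels and counts are invented for the example.

from dataclasses import dataclass


@dataclass
class GroupStats:
    applied: int
    passed: int

    @property
    def pass_rate(self) -> float:
        return self.passed / self.applied


outcomes = {
    "group_a": GroupStats(applied=400, passed=120),  # 30% pass the filter
    "group_b": GroupStats(applied=300, passed=60),   # 20% pass the filter
}

rates = {name: group.pass_rate for name, group in outcomes.items()}
ratio = min(rates.values()) / max(rates.values())

print(f"Selection-rate ratio: {ratio:.2f}")
if ratio < 0.8:
    # Below the four-fifths guideline: the filter's criteria deserve scrutiny.
    print("Possible adverse impact detected; review the screening rules.")
```

The point is not the arithmetic but the posture: the tool checks itself, rather than silently enforcing whatever pattern it has learned.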

The goal should be simple: keep the human moment alive. A recruiter should spend half an hour with a CV, not outsource empathy to an algorithm. 

Conclusion

The Dustpan Delusion teaches us that patchwork fixes look busy but leave systems broken. Generative AI doesn’t escape that pattern — it accelerates it. Wherever human judgement is replaced by machine filtration, dysfunction is not solved, but scaled. 

The lesson for readers is clear: whether in recruitment, customer service, or public services, ask not what AI does faster, but what it actually improves. If the upstream system is broken, no algorithm can redeem it. If clarity and fairness are the goals, then AI can help scaffold reform. Otherwise, we’re just building better gatekeepers — and locking the wrong people out. 

Relational Observations

AI as the New Gatekeeper

AI that replaces people scales dysfunction, not fairness.
