For millions of American workers, artificial intelligence represents an uncertain future—neither an immediate catastrophe nor a distant irrelevance, but a force reshaping the employment landscape in unpredictable ways. While experts debate AI’s long-term impact on jobs, a growing number of workers already face a more immediate crisis: AI-driven hiring tools are trapping qualified candidates in cycles of rejection before they even secure an interview.
This isn’t the automation narrative of robots replacing human labor. Instead, it’s a story of enshittification—a term popularized by writer Cory Doctorow to describe how platforms degrade user experience to prioritize extractive business models. In this case, AI screening software has turned the job application process into a rigged game where merit often loses to algorithmic misinterpretation.
How AI Screening Tools Are Failing Qualified Candidates
The consequences are already visible. Take Chad Markey, a 33-year-old medical graduate from an Ivy League institution who applied to 82 residency programs for the 2025-2026 cycle. Despite holding a degree from a top school, publishing at least 10 research papers, and securing glowing recommendation letters, Markey received an alarming number of outright rejections.
The issue? Three voluntary leaves of absence on his record—medically necessary breaks due to severe flare-ups of ankylosing spondylitis, an autoimmune disease that left him unable to walk for six months. Although Markey included a detailed explanation in his applications, the AI screening system flagged the "voluntary" label itself, disregarding that context—a technicality that may have triggered automatic rejections.
“I crawled out of a f**king black hole,” Markey told Wired. “I could not walk for six months. I’ve come this far, and this is happening?”
The Rise of Cortex and the AI Hiring Boom
Markey’s case isn’t isolated. Across industries, AI tools like Cortex—a residency application screening system developed by Thalamus—are gaining rapid adoption. Cortex ingests thousands of application documents and converts them into digestible dashboards for hiring committees, theoretically streamlining the process. However, as Thalamus co-founder Paul Weber acknowledged to Wired, the tool’s reliance on rigid keyword matching and superficial data points risks overlooking critical context.
“The system doesn’t understand nuance,” Weber said. “If an applicant’s record includes a gap labeled ‘voluntary,’ the AI sees it as a red flag—regardless of the actual circumstances.”
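The failure mode Weber describes can be sketched in a few lines. This is a minimal, hypothetical illustration of rigid label-based screening—the function and field names are invented for this sketch, not drawn from Thalamus or Cortex:

```python
# Hypothetical sketch of the rigid screening logic described above.
# All names (screen_application, "leave_type", "explanation") are
# illustrative assumptions, not actual Cortex internals.

def screen_application(record: dict) -> str:
    """Flag any gap labeled 'voluntary', ignoring the free-text explanation."""
    for gap in record.get("gaps", []):
        # Binary check on the label alone: the explanation field never
        # influences the decision, which is the nuance-blindness Weber
        # describes.
        if gap.get("leave_type") == "voluntary":
            return "reject"
    return "advance"

# A medically necessary leave formally logged as "voluntary" is rejected,
# even though the explanation supplies the missing context.
applicant = {
    "gaps": [
        {
            "leave_type": "voluntary",
            "explanation": "Severe ankylosing spondylitis flare; unable to walk.",
        }
    ]
}
print(screen_application(applicant))  # prints "reject"
```

A human reviewer reading the explanation field would reach the opposite conclusion; the sketch shows how a single unexamined label can override an entire application.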
Why AI Hiring Tools Are Failing the Most Vulnerable Workers
For workers with non-traditional career paths—whether due to health issues, caregiving responsibilities, or economic hardship—AI screening tools are exacerbating existing inequalities. Unlike human recruiters, who can exercise discretion, AI systems operate on binary logic: a gap is either voluntary or involuntary, with no room for explanation.
This rigidity disproportionately affects:
- Workers with chronic illnesses or disabilities
- Parents who took extended leave for childcare
- Job seekers who faced economic instability
- Candidates from underrepresented backgrounds with non-linear career trajectories
As AI tools like Cortex become industry standards, the risk isn’t just unfair rejections—it’s the systematic exclusion of entire segments of the workforce from opportunities they’ve earned.
What’s Next for AI in Hiring?
Critics argue that the current generation of AI hiring tools prioritizes efficiency over equity. Without human oversight, these systems can perpetuate biases embedded in historical hiring data, penalizing candidates for circumstances beyond their control.
For Chad Markey, the damage may already be done. Despite his qualifications, he now faces an uphill battle to prove his case to skeptical residency programs. His story is a cautionary tale: in the race to automate hiring, we risk losing the very human judgment that makes fair employment possible.