I never thought I’d find myself staring at a rejection email for a job I was perfectly qualified for, wondering whether my application had ever been seen by human eyes. Last month, that’s exactly what happened – I’d applied to a mid-level marketing position that matched my experience almost perfectly. My CV ticked every box on their requirements list, I had glowing references, and I’d even worked in the same industry for six years. The rejection came within 48 hours – suspiciously fast for a role that had received hundreds of applications.
“We’ve decided to pursue candidates who more closely match our requirements,” read the automated email. I nearly spat out my coffee.
My friend Jenna, who works in HR tech, confirmed my suspicions over drinks later that week. “Oh yeah, they’re definitely using an ATS with AI screening,” she said, casually sipping her gin and tonic as if she hadn’t just confirmed that robots were deciding my professional fate. “Most big companies do now. The system probably filtered you out before a human even glanced at your application.”
An ATS, she explained, is an Applicant Tracking System – software that manages the recruitment process. Many now come with AI components that screen CVs against job descriptions, looking for keyword matches and other patterns to identify “promising” candidates. The rest get binned faster than you can say “thanks for your application.”
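Out of morbid curiosity, I later mocked up that screening logic myself. What follows is a deliberately crude Python sketch of how a keyword filter might work – no vendor publishes their actual code, so the function names, keywords, and threshold here are all my own invention:

```python
def keyword_score(cv_text: str, required_keywords: list[str]) -> float:
    """Fraction of required keywords appearing verbatim in the CV."""
    cv_lower = cv_text.lower()
    hits = sum(1 for kw in required_keywords if kw.lower() in cv_lower)
    return hits / len(required_keywords)

def passes_screen(cv_text: str, required_keywords: list[str],
                  threshold: float = 0.6) -> bool:
    """Reject anything whose keyword score falls below the threshold."""
    return keyword_score(cv_text, required_keywords) >= threshold

# A CV describing the right experience in the "wrong" words gets binned:
cv = "Six years leading patient care teams in an acute hospital setting."
keywords = ["healthcare delivery", "team management", "acute care"]
print(passes_screen(cv, keywords))  # False - "patient care" earns no credit
```

One case-insensitive substring check per keyword – no synonyms, no context, no transferable skills. That, roughly, is the bouncer.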
I’ve been job hunting for three months now, and I’m starting to think these systems hate me personally. But it’s not just me – it’s happening to qualified candidates everywhere.
Take my colleague Simon, who has 15 years of experience as a software developer. He applied to a tech company looking for someone with his exact skill set. Rejection within hours. When he finally managed to speak to someone at the company through a mutual connection, they admitted they were struggling to find candidates with his qualifications. The system had apparently decided his experience wasn’t relevant because he hadn’t used the exact phrasing from their job description.
“It’s like they’ve programmed these things to find reasons to say no rather than yes,” he told me, exasperation evident in his voice. “I’m literally doing the job they’re advertising, just with slightly different terminology.”
The more I looked into this, the more absurd examples I found. There was the nurse who couldn’t get past the AI screeners because her experience was listed as “patient care” instead of “healthcare delivery.” Or the accountant whose decade of QuickBooks experience didn’t register because the job listing specified “proficiency in accounting software” without naming specific programs.
I mean, are we seriously letting algorithms make these decisions? Ones that can’t even recognize that “patient care” is kind of important for a nursing position?
With my curiosity piqued (and my job prospects seemingly at the mercy of digital gatekeepers), I decided to dig deeper into how these systems actually work. What I discovered was equal parts fascinating and horrifying.
Most recruitment AI systems work by creating a sort of digital “ideal candidate” based on the job description and sometimes the CVs of successful employees. They then compare each application against this profile, looking for similarities in keywords, experience, education, and even writing style. Candidates who don’t match closely enough get automatically rejected.
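The textbook version of that comparison is just word counts and cosine similarity. Here’s a toy sketch using scikit-learn – the job ad and applications are invented, and real vendors’ models are proprietary and fancier, but the basic mechanics are along these lines:

```python
# Toy version of the "ideal candidate" comparison: TF-IDF word vectors
# and cosine similarity. All text here is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_ad = ("Seeking a software developer with team management experience "
          "and strong communication skills.")
applications = [
    "Fifteen years as a software developer; squad leader for twelve people.",
    "Software developer with team management experience and strong "
    "communication skills.",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([job_ad] + applications)

# Cosine similarity of each application to the job ad's word profile;
# candidates below some cutoff would be auto-rejected.
scores = cosine_similarity(matrix[0], matrix[1:])[0]
for score, app in zip(scores, applications):
    print(f"{score:.2f}  {app}")
```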
Sounds reasonable in theory, right? The problem is these systems are trained on historical hiring data – data that’s often riddled with human biases and outdated assumptions about what makes someone “qualified.”
I spoke with Dr. Patricia Mendez, an AI ethics researcher at a major university, who explained: “These systems essentially codify all the biases that existed in previous hiring decisions. If a company historically hired people from certain universities or with certain backgrounds, the AI learns that pattern and replicates it – only now with the veneer of technological objectivity.”
She told me about a now-infamous case where a tech giant had to scrap their AI recruitment tool after discovering it was systematically downgrading CVs that included words like “women’s” (as in “women’s chess club captain”) and applications from graduates of all-women colleges. The system had been trained on the company’s historical hiring data, which skewed heavily male.
“The algorithm had essentially learned that being male was a qualification for the job,” Dr. Mendez said with a sigh. “And that’s just one of the more obvious examples. There are countless subtle biases these systems perpetuate that are harder to detect.”
But it’s not just discrimination that’s the problem – it’s plain old incompetence. These systems are remarkably bad at understanding context, synonyms, or transferable skills.
My friend Jake, who transitioned from military service to civilian work, couldn’t get past the AI screeners because they didn’t recognize that “squad leader” meant he had management experience. The system was looking for the exact phrase “team management” and couldn’t make the connection.
“It’s bloody ridiculous,” he told me over coffee last week. “I led twelve people in high-pressure situations where mistakes could literally get someone killed. But apparently that doesn’t count as leadership experience because I didn’t use the right buzzwords.”
I tried an experiment after hearing Jake’s story. I took my CV – the one that had been repeatedly rejected by AI screeners – and rewrote it using exact phrases from job descriptions, even when they weren’t the natural way to describe my experience. My success rate for getting interviews immediately jumped from about 5% to nearly 30%.
I wasn’t a better candidate. I hadn’t gained any new skills or experience. I’d just learned to speak the language of the algorithm.
What’s particularly maddening is how these systems create a ridiculous game that has nothing to do with actual job performance. People are now paying for CV services that specifically optimize for AI screening tools – essentially teaching humans to write for robot readers instead of human ones.
“The whole thing has become completely detached from the actual goal of finding good employees,” said Rahul, a recruitment consultant I contacted for this deep dive. “Companies implement these systems to save time and money, but they’re actually losing incredible candidates and extending their hiring timelines. I’ve seen positions stay open for months because the AI keeps rejecting everyone who doesn’t match some arbitrary pattern.”
The worst part? Many companies have no idea this is happening. They implement these systems assuming they’re getting the cream of the crop, when actually they’re just getting candidates who’ve mastered the art of keyword stuffing.
Last week, I interviewed for a role I’d applied to three times previously and been rejected automatically each time. What changed? I literally cut and pasted sentences from the job description into my application. In the interview, the hiring manager said, “We’re so glad we found you – we’ve been looking for someone with your experience for ages!”
I nearly said, “I’ve been here all along, your robot bouncer just wouldn’t let me in.” Instead, I smiled and said I was excited about the opportunity.
The closest technical term for what’s happening is “overfitting” – the model has latched onto the surface patterns of its training data so tightly that it can’t generalize to valid variations. It’s like training a system to identify cats by showing it only pictures of ginger tabbies, then wondering why it can’t recognize a Siamese.
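You can reproduce the tabby problem in a few lines of scikit-learn with some invented hiring data. In the toy model below, every past hire happened to use one magic phrase, so the phrase is the only thing the model learns – experience never registers:

```python
# Overfitting in miniature: invented hiring data where every past hire
# used the phrase "team management". The model learns the phrase, not
# the experience behind it.
from sklearn.tree import DecisionTreeClassifier

# Features per candidate: [used_phrase_team_management, years_experience]
X_train = [[1, 2], [1, 5], [0, 1], [0, 8]]
y_train = [1, 1, 0, 0]  # 1 = hired, 0 = rejected

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A twelve-year veteran who wrote "squad leader" instead:
print(model.predict([[0, 12]]))  # [0] - rejected; experience never mattered
```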
Some companies are waking up to the problem. IBM recently overhauled their recruitment AI after realizing it was screening out qualified candidates, particularly those with unconventional career paths or from underrepresented backgrounds. They now use the technology to highlight promising candidates rather than eliminate supposedly unpromising ones – a subtle but crucial difference.
Others are implementing “human in the loop” approaches, where the AI makes initial recommendations but a human reviewer checks candidates before they’re rejected. It’s not perfect, but it’s better than letting HAL 9000 decide your career prospects unilaterally.
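The difference between the two approaches is small enough to fit in a sketch. Here’s my hypothetical Python version – `score_fn` stands in for whatever scoring model a vendor ships:

```python
# Two philosophies in one sketch. Everything here is hypothetical;
# "score_fn" is a stand-in for the vendor's actual model.

def screen_out(candidates, score_fn, threshold=0.6):
    """The old way: anyone scoring below the threshold never reaches a human."""
    return [c for c in candidates if score_fn(c) >= threshold]

def highlight(candidates, score_fn, top_n=10):
    """The newer way: the model only changes the reading order.

    The top N are flagged for early review, but the full ranked list
    still lands in a human's queue - nobody is auto-rejected.
    """
    ranked = sorted(candidates, key=score_fn, reverse=True)
    return ranked[:top_n], ranked
```

Same model, same scores; the only change is that a human still reads the “no” pile.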
As for me, I’ve developed a two-pronged approach to job hunting. For companies I really want to work for, I create the algorithm-friendly version of my CV, packed with the exact terminology from their job listing. For others, I try to bypass the AI entirely – reaching out directly to hiring managers on LinkedIn, attending industry events, or getting referrals from current employees.
It shouldn’t have to be this way. The promise of AI in recruitment was that it would make hiring more efficient and meritocratic. Instead, it’s created a bizarre system where qualified candidates need to become SEO experts just to get their CV read by human eyes.
The irony isn’t lost on me that many of these same companies are desperately seeking employees who can “think outside the box” and “bring fresh perspectives,” while using systems specifically designed to filter out anyone who doesn’t fit neatly into predefined categories.
So if you’re job hunting and facing a wall of rejections, remember: it’s not you, it’s the algorithm. And maybe, just maybe, add a few more buzzwords to your CV. You know, just to get past the digital bouncer at the door.