The Hiring Process Is Broken for Real People
Happy Friday, Job Board Doctor friends! I am wishing you well from the beautiful and very happy city of Budapest this week.
Why am I in Budapest? Well of course, enjoying my first (but not last) RecBuzz conference. Kudos to Peter Zollman and the entire AIM Group team!
FUNDING NEWS: HRFlow.ai Pre-Series A
Before I jump into my weekly deep thoughts, a quick share on some TA/HR Tech funding news.
HRFlow.ai announced this week that it has secured $7 million in its second round of Pre-Series A funding. The French data and AI infrastructure company, founded in 2016, has now raised $10 million in capital. Read the full announcement.
The Hiring Process Is Broken for Real People
At every conference, in every webinar, on every roundtable, there is a version of the deepfake candidate story that is so easy to tell. Fraudsters are weaponizing AI. Fake identities are flooding the funnel.
All delivered with a full-on, woe-is-me, employers-are-the-victims undertone.
Don’t get me wrong, the deepfake fraud problem is real. Recruiters are showing up to video calls and encountering candidates whose lips don’t quite sync, whose reactions arrive a half-beat late, whose polished resumes link to profiles that are just a little too perfect.
When InCruiter launched its deepfake detection technology in early 2026, it found fraudulent activity in 25 to 30% of suspicious sessions, nearly double what even experienced human interviewers had previously identified.
Gartner projects that by 2028, one in four candidate profiles worldwide will be fraudulent.
Here is what that conference tale, the one that will sell so many products, omits: before the bots arrived, the hiring process was already treating human candidates like an inconvenience, at best.
AI did not create this dynamic. It is scaling something that has been building for decades, and the people absorbing the real cost of this battle are the ones who just need a job.
The employer behavior problem
HR consultant Bryan Driscoll put it plainly to Newsweek: companies "expect professionalism, patience, and prompt replies from candidates, and then vanish without a trace the moment it's inconvenient for them to respond."
The data backs him up.
- 53% of job seekers experienced ghosting within the last year, a three-year peak.
- 48% were ignored by employers in 2025, up from 38% the year before.
- Eight in ten hiring managers admit they have ghosted job candidates.
This is not a technology failure. This is a choice.
The ghosting problem sits on top of a longer list of structural indignities. 30% of job seekers say unrealistic role requirements are their greatest challenge, with entry-level jobs routinely requiring three or more years of experience.
Ghost jobs, postings that exist to collect resumes rather than fill roles, are a documented, widespread practice that consumes real people's time and hope.
Career Strategies Substack, focused on helping professionals rebuild confidence and execute employment strategies during long periods of unemployment, shares how job seekers are feeling.
“By late 2025, 73% of job seekers reported their searches were more difficult than before. Almost half expected to submit 26 or more applications just to get a single offer, with many bracing to apply to over 100 positions.”
Given the reality of this environment, and the injection of AI into every point of the hiring process by TA leaders eager to hire "better" and cheaper, it should be no surprise that candidates are also turning to AI to help them land their next role.
To me, it is a rational response to a process that has been communicating, loudly and consistently, that their time does not matter.
AI scaled the arms race. It didn’t start it.
Job seekers have long adapted to the dysfunction of hiring. They have mirrored job description language back in cover letters. They have memorized STAR-method frameworks. They have reformatted resumes for whatever parsing system was necessary for that ATS. None of that is new.
Employers automated screening to manage volume. Candidates automated applications to survive automated screening.
What is new is the speed and scale at which both sides are now operating. A Gartner survey found 39% of candidates use AI in some form during their applications, mostly to proofread, tighten up resumes, and prep for interviews.
Applicant volume is up, but quality is not. AI is helping and complicating things at the same time, and recruiters are being asked to hire with far more precision than before. That precision comes at a cost the candidate is forced to absorb. Tighter filters catch more legitimate people. Longer processes burn more of a job seeker's finite emotional resources. Automated rejections, or no rejections at all, leave people with nothing to learn and nowhere to improve.
And job seekers face their own fraud problem from the other direction. Job scams involving fake recruiters and ghost job postings jumped 37% in the first three quarters of 2025, with a 60% year-over-year increase in FTC-reported job scam claims. 45% of job seekers report that recognizing and avoiding job scams is one of the biggest challenges of their job search. The same person being treated as a fraud risk by a legitimate employer may, in the same week, have their personal information stolen by a fake one.
The verification trap
The obvious systemic answer to fraud on both sides is identity verification. Confirm who everyone is. Build trust into the infrastructure. It is a reasonable idea, right?
The problem is what happens to verified identity data once it is collected.
In late March 2026, Mercor, an AI-powered recruiting platform working with companies including OpenAI and Anthropic, confirmed a massive data breach. Hackers claimed to have walked away with 4 terabytes of data: candidate profiles, identity verification documents including passports, and video interviews. The attackers did not target Mercor directly. They came through LiteLLM, a widely used AI middleware tool, via a supply chain compromise that affected thousands of companies simultaneously. Mercor's infrastructure did exactly what it was supposed to do. That was the problem.
The candidates whose passports sat in that database did everything right. They complied. They trusted the platform. That trust became a liability. This is the part of the identity verification conversation the industry is not reckoning with seriously enough. Asking job seekers to hand over more sensitive data in exchange for safer access to the labor market only works if the infrastructure protecting that data is actually secure — and the Mercor breach is evidence that it is not.
Where AI actually helps, and where it does not
None of this means technology has no role. LinkedIn, Indeed, and ZipRecruiter are all investing in platform-level verification and fraud detection tools. When done well, verification protects both sides. The most resilient recruiting workflows combine automation with human judgment: a layered approach that reduces false positives, ensures fraudulent candidates don't progress unnoticed, and improves the experience for real candidates, who move faster through a cleaner funnel.
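For the technically minded, that layered approach can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual product: the key design choice is that automation only makes the easy calls, and everything ambiguous routes to a human rather than being auto-rejected. The thresholds and the `screen_candidate` function are my own invented examples.

```python
# Hypothetical sketch of layered screening: an automated fraud score
# makes only the clear-cut calls; ambiguous cases go to a human reviewer
# instead of being auto-rejected. Thresholds are illustrative only.

def screen_candidate(fraud_score: float,
                     low: float = 0.2,
                     high: float = 0.8) -> str:
    """Route one candidate session based on an automated fraud score (0-1)."""
    if fraud_score < low:
        return "advance"        # clean signal: real candidate moves forward fast
    if fraud_score > high:
        return "flag"           # strong fraud signal: hold for investigation
    return "human_review"       # ambiguous: a person decides, not the model
```

The point of the middle band is exactly the trade-off described above: it is where tighter filters would otherwise catch legitimate people, so that is where human judgment belongs.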
But technology cannot fix what is fundamentally a values problem. Decisions about technology purchases, data management practices, assessments, and policies all communicate to a real person that their application, their preparation, and their time were not worth thirty seconds of acknowledgment. No AI tool is going to solve that. TA leaders have to decide that the person on the other side of the process is a human being who deserves basic respect.
The question for RECRUITMENT TECH
Platforms sit directly in the middle of this. Fraudulent job postings erode job seeker trust. Fraudulent candidates erode employer trust. But so does a process that has spent years signaling to candidates that they are interchangeable and disposable. The job boards investing in integrity, on both sides of the transaction, are building something more durable.
Job seekers want respect, honesty, and basic professionalism. These are not unreasonable demands. They are the floor. We are a long way from it.
What do you think? Tell me.
Until Next Time,
Julie “The Doc” Sowash
[Want to get Job Board Doctor posts via email? Subscribe here.]
[Got a tip, document or intel you want to share with the Doc? Tell me. Tip so hot you need it to be encrypted? Use Signal.]


This hits the nail on the head, although the uncomfortable truth is the process was broken long before AI turned up.
What we’re seeing now is an arms race. Employers optimise for volume and filtering, candidates optimise to get through it. The result is more activity, not better hiring.
The core issue is relying on CVs and keyword matching as the front-end filter. That was never a reliable way to identify the right people. Now it’s just easier to game.
A more effective approach is to take CVs out of the front end completely. Start with clear requirements, structured qualification questions, and targeted personality assessment to understand how someone is likely to perform.
Then use the CV later for context.
That shifts hiring away from who presents best on paper, and back to who is actually right for the role.