Fraud, deepfakes, and the tech that’s changing everything.
On this week’s episode of What the FTE?, Certn’s Global Head of Background Screening, Donal Greene, sits down with Doug Beavis, our EMEA Sales & Commercial Director, and a 25-year veteran of the UK background screening industry.
Doug’s seen it all. From the days of faxed reference checks (yes, really) to today’s biometric ID scans, he shares how far hiring tech has come, and why it matters more than ever in the fight against fraud.
Bonus: Donal shares a wild story about a fraudster impersonating Certn’s CEO. Spoiler alert: he didn’t fall for it. But the implications for HR are huge.
If you’re navigating AI risks, hiring remotely, or just want to make sure your process is built on more than good faith, this episode is for you.
Tune into the full episode: certn.co/podcast/
—

Twenty years ago, failing a background check usually meant one of two things: an undisclosed criminal record or a terrible reference. Today? It’s a whole new level.
It’s not uncommon now to catch candidates inflating job titles, inventing roles that never existed, or buying credentials from diploma mills. In fact, just the other day we saw an application with a fabricated university degree.
More concerning, GenAI is like electricity: cheap, accessible, and powerful. Anyone with an internet connection can spin up fake résumés, voices, faces, or entire identities.
Welcome to the era of hiring hallucinations.
If you thought ghosting was bad, try onboarding a candidate who doesn’t exist.
Deepfakes aren’t sci-fi. They’re your next hiring risk.
Deepfakes (AI-generated videos, audio, and images) are hitting HR and TA leaders right where it hurts: your trust, your process, and your people. From faked video interviews to synthetic onboarding fraud, we’re entering a new phase of hiring risk. This isn’t just a cybersecurity problem; it’s an HR crisis.
It’s time to screen smarter, not harder, and definitely not slower.
🚨 Deepfakes Are Infiltrating the Hiring Funnel
In a well-documented example from 2024, cybercriminals impersonated the CEO of WPP, the world’s biggest ad firm, on a Microsoft Teams call with staff, using a fake WhatsApp account, a voice clone, and YouTube footage.
In another case, KnowBe4 unknowingly onboarded a remote hire who used a deepfaked identity and then attempted to plant malware. Turns out, it was a North Korean operative. Let that sink in.
HR’s Newest Adversary: Synthetic Candidates
These aren’t sloppy scams. I’m talking about AI-enhanced applications, deepfaked ID photos, and video interviews where candidates are digitally puppeted in real time.
Creepy? Definitely.
Real? You bet.
As I explained to Doug, with the right technology, I could turn up for a virtual interview with a completely different face. I could have a cloned voice.
Deepfakes Targeting Employees
Our usual podcast host and CEO, Andrew, features in this story as well.
I’ve been getting shady WhatsApp messages from a US number. Allegedly, there’s a “top secret transaction” in play. We’re acquiring a company. Andy’s out of the country. He needs me to act fast. Classic scammy vibes.
Normally I ignore these messages, but sometimes curiosity gets the better of you when you build identity verification tech for a living.
Here’s what shocked me:
They weren’t clueless. These fraudsters had semi-credible intel about my role, our company, and, most alarmingly, Andy. They had details only someone who’d done their homework would know.
Then came the voice note.
It sounded just like him. Same tone, cadence, mannerisms. They’d likely scraped podcast clips or YouTube footage to create a convincing deepfake voice. Hearing it was… unsettling.
To dig deeper, I sent them a decoy webpage embedded with traceable tech to locate their IP. They took the bait, but what I uncovered was even more disturbing.
They’d set up a real bank account. They answered obscure security questions—things that only someone close to Andy might know. This wasn’t a spray-and-pray scam. It was calculated, and dangerously convincing.
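For the curious, the “decoy page” trick is simple in principle: a web page you control that records the visitor’s IP address the moment the link is opened. Here’s a minimal, hypothetical sketch using Python’s standard library (this is illustrative only, not Certn’s actual tooling; in practice you’d host this behind a real domain and log far more request metadata):

```python
# Minimal "decoy page" sketch: a tiny web server that records the IP address
# and requested path of anyone who opens the link. Hypothetical illustration
# of the traceable-link idea; not production tooling.
import http.server
import threading

visits = []  # (ip, path) recorded for every request received


class DecoyHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Record who asked for the page and what they asked for
        visits.append((self.client_address[0], self.path))
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        # Serve something innocuous-looking so the visitor suspects nothing
        self.wfile.write(b"<html><body>Loading document...</body></html>")

    def log_message(self, *args):
        pass  # silence the default stderr request logging


def start_decoy(port=0):
    """Start the decoy server on a background thread; port=0 picks a free port."""
    server = http.server.HTTPServer(("127.0.0.1", port), DecoyHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Real investigations would pair this with geolocation lookups and proper logging, but even this bare-bones version shows why a single click can burn a scammer’s location.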
This is exactly why HR, TA, and leadership teams can’t treat deepfake threats as “IT’s problem.” They’re coming for your people, your processes, and your trust.
🎧 Hear the full story on this week’s episode of What the FTE?: certn.co/podcast
Stat Check
Gartner predicts that by 2028, one in four job candidates globally will be fake. Meanwhile:
- 58% of HR teams say they struggle to balance speed and thoroughness in hiring (CIPD Webinar Poll, 2024)
- 17% of hiring managers have already encountered deepfake job candidates during video interviews (ResumeGenius)
- In a recent study of 2,000 US and UK residents, the vast majority of participants couldn’t distinguish real content from deepfake images and video
- Roughly one in five consumers (22%) hadn’t heard of deepfakes before the study
The illusion of confidence is the enemy of cybersecurity. When you’re hiring remotely or globally, the attack surface gets even bigger.
Deepfake Tactics You Should Know
Even the best hiring teams can get duped.
While scale-ups are especially vulnerable (onboarding dozens of new hires at once, racing against headcount targets) the truth is, any company can be a mark. All it takes is one fake candidate slipping through the cracks.
Here’s what to watch for in the age of AI-enabled fraud:
- Voice Cloning – “Hey it’s your CFO, just wire that vendor payment…” (Spoiler: It’s not your CFO.) AI-generated voice notes are shockingly realistic. If you’re not validating high-stakes requests with multi-channel verification, you’re playing with fire.
- ID Spoofing with AI-Enhanced Photos – Still accepting static ID images or PDFs uploaded by the candidate? That’s fraudster paradise. Tools exist to manipulate photos, blur out discrepancies, and create hyper-realistic fakes. Only live, biometric ID verification can keep up.
- Synthetic Résumés + Ghost Interviews – Coordinated scripts. AI-written CVs. Fake LinkedIn profiles boosted by engagement pods. “Candidates” can conduct full interviews with deepfaked avatars and audio clones. If your process can’t sniff out digital deception, you risk being played.
The fix? Automate the basics, verify in real time, and stop relying on outdated processes that can’t scale with risk. At Certn, we’re known for background checks and identity verification that don’t slow hiring down. But we’ve been working on something big, and it’s almost here: a tool built for the era of deepfakes, digital deception, and remote interviews.
For the past few months, our team’s been developing a tool that adds an entirely new layer of trust to the hiring process. It’s built for HR teams, because you’re the ones on the frontlines. You already trust us to verify candidate identities and run fast, compliant checks. Now, we’re going further.
We believe this will be a game-changer for virtual interviews and remote hiring: a way to help teams hire fearlessly, even in a world where not every candidate is who they seem.
Stay tuned; we’re almost ready to share it with the world.

5 Ways to Deepfake-Proof Your Hiring Process
Not all background screening vendors are created equal; it really depends on how seriously your vendor takes fraud. I can share a first-hand case from Europe.
Before Trustmatic, the ID verification company I founded, was acquired by Certn, I was working with a smaller European background screening vendor. One of their clients was hiring mid-level engineers and almost onboarded a fraudster.
The candidate submitted a Slovak ID with all the right security features. A police clearance. A diploma. Everything looked legitimate. But every document? Fake.
This is the problem with accepting scans and candidate-supplied documents at face value in the vetting process: you risk missing fraud.
At Certn, we go to the direct source of truth and verify data with the issuer.
Here are five ways you can protect yourself:
- Use Biometric Identity Verification – Face match. Liveness detection. Motion prompts. Certn’s identity verification tech (yes, I’m proud of it) uses this to stop imposters before they even start.
- Implement Real-Time ID + Credential Verification – Automated + online > static scans and PDFs. Use platforms that verify IDs and credentials with issuing sources.
- Add Red Team Testing to TA Tech Stack – Have your security team simulate a deepfake attack on your hiring funnel. If they can break it, so can someone else.
- Educate Your Hiring Teams – Your recruiters and coordinators are frontline defenders. Train them on spotting anomalies (laggy interviews, mismatched lip sync, nervous stalling). More specifically:
  - Prompt Real-Time Movements – Ask candidates to perform natural, unscripted actions, like touching their nose, placing a hand near their face, or briefly adjusting their glasses. These simple motions often disrupt deepfake overlays that struggle with depth, occlusion, or real-time rendering.
  - Shift the Frame – Request the candidate briefly turn their head, look over each shoulder, or reposition themselves in frame. Multi-angle movements expose inconsistencies in AI-generated images, especially around ears, jawlines, and profile contours.
  - Introduce Environmental Change – Ask them to hold up a nearby object (e.g., a pen, phone, or coffee mug) or adjust their lighting slightly. Deepfakes can be thrown off by real-world changes that require dynamic adaptation.
  - Watch for Human Tells – Pay attention to the subtle stuff. Are their blinks irregular or perfectly timed? Are their expressions too smooth? Authentic humans show micro-expressions, asymmetry, and natural inconsistencies; deepfakes often don’t.
  - Dig Beneath the Résumé – Go beyond surface-level questions. If they claim to have worked in a specific city, ask about their commute, favourite local coffee shop, or how the team celebrated wins. For job experience, prompt them to reflect: “Tell me about a time that surprised you at that company.” Real experience includes nuance. GenAI rarely does nuance well.
- Vet Your Vendors – What’s your ATS or background check provider doing to detect synthetic identities? If the answer is “not much,” it might be time to rethink that relationship.
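One practical way to operationalize the real-time prompts above is to randomize them, so a deepfake operator can’t pre-render responses to a known script. A minimal, hypothetical sketch (the prompt wording and categories are my own, not a Certn product feature):

```python
import random

# Hypothetical challenge bank paraphrasing the live-interview tactics above.
# Randomizing which prompts appear, and in what order, keeps each interview
# unpredictable for anyone trying to pre-render a deepfake response.
CHALLENGES = {
    "movement": [
        "Please touch your nose.",
        "Place a hand near your face for a moment.",
        "Briefly adjust or remove your glasses.",
    ],
    "framing": [
        "Turn your head and look over each shoulder.",
        "Lean back and reposition yourself in frame.",
    ],
    "environment": [
        "Hold up a nearby object, like a pen or a mug.",
        "Adjust your lighting slightly.",
    ],
}


def pick_challenges(rng=random):
    """Return one randomly chosen prompt per category, in shuffled order."""
    prompts = [rng.choice(options) for options in CHALLENGES.values()]
    rng.shuffle(prompts)
    return prompts
```

An interviewer (or interview platform) could call `pick_challenges()` once per session and work the resulting prompts naturally into conversation.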
With technology being such a big part of background screening now, the vetting point is even more important. If your buying committee doesn’t already include your CISO, IT, or a security lead, it might be time to loop them in. Their perspective is critical, especially when you’re evaluating vendors and building out your pre-screening safeguards.
HR Leaders: Don’t Just React. Redesign.
A lot of HR leaders I speak with are just now waking up to the deepfake and synthetic fraud threat. While growing awareness is a good thing, I’ll be honest: it’s also alarming.
Fearmongering isn’t my style, but here’s the uncomfortable reality: there are likely already bad actors sitting inside organizations today. Yes, even in Canada. In the US. In the UK. Across Europe. These are people hired remotely who do just enough to stay under the radar. They attend the calls and hit the bare minimum, but behind the scenes? They’re gathering intel. Scraping data. Seeding vulnerabilities. And when the breach finally happens, you’re in reactive mode, playing catch-up after the damage is already done.
Trust in hiring can’t be built on gut checks anymore. You need smart systems. Fast tech. Frictionless, candidate-friendly verification.
That’s the Certn way: screen fast, hire fearlessly.
When the threat is digital, your defence has to be dynamic. The good news? Many of our clients are already leaning in. They’re asking how to upgrade their screening protocols, what we’re building next, and how to stay ahead of these risks, not just clean up after them.
We’ve made it a priority to be at the forefront of not just background checks, but background intelligence, and we’re here to help forward-thinking HR and TA leaders do the same.
Don’t hesitate to email me if you have any questions about vendor selection or how to deepfake-proof your hiring process. You can also connect with me on LinkedIn.
TL;DR: What You Should Do Next
- Audit your hiring and onboarding process for vulnerability to synthetic identities
- Invest in biometric and real-time verification
- Train TA and HR staff to recognize the signs of deepfake fraud
- Move fast, but don’t cut corners
The future of hiring belongs to teams who treat cybersecurity as part of the candidate experience. Because in 2025, trust is built in seconds, and deepfakes can break it just as fast.
—
Want to see how Certn helps companies fraud-proof hiring? Request a demo.