Fraud, deepfakes, and the tech that’s changing everything.
On this week's episode of What the FTE?, Certn's Global Head of Background Screening, Donal Greene, sits down with Doug Beavis, our EMEA Sales & Commercial Director and a 25-year veteran of the UK background screening industry.
Doug's seen it all. From the days of faxed reference checks (yes, really) to today's biometric ID scans, he shares how far hiring tech has come, and why it matters more than ever in the fight against fraud.
Bonus: Donal shares a wild story about a fraudster impersonating Certn's CEO. Spoiler alert: he didn't fall for it. But the implications for HR are huge.
If you're navigating AI risks, hiring remotely, or just want to make sure your process is built on more than good faith, this episode is for you.
Tune into the full episode: certn.co/podcast/

Twenty years ago, failing a background check usually meant one of two things: an undisclosed criminal record or a terrible reference. Today? It's a whole new level.
Today it's not uncommon to catch candidates inflating job titles, inventing roles that never existed, or leaning on diploma mills. In fact, we saw an application the other day with a fabricated university degree.
More concerning, GenAI is like electricity: cheap, accessible, and powerful. Anyone with an internet connection can spin up fake résumés, voices, faces, or entire identities.
Welcome to the era of hiring hallucinations.
If you thought ghosting was bad, try onboarding a candidate who doesn't exist.
Deepfakes aren't sci-fi. They're your next hiring risk.
Deepfakes (AI-generated videos, audio, and images) are hitting HR and TA leaders right where it hurts: your trust, your process, and your people. From faked video interviews to synthetic onboarding fraud, we're entering a new phase of hiring risk. This isn't just a cybersecurity problem; it's an HR crisis.
It’s time to screen smarter, not harder, and definitely not slower.
🚨 Deepfakes Are Infiltrating the Hiring Funnel
In a well-documented example from 2024, cybercriminals targeted WPP, the world's biggest ad firm, by impersonating its CEO on a Microsoft Teams call with staff. The fraudsters used a fake WhatsApp account, a voice clone, and YouTube footage to pull it off.
In another case, KnowBe4 onboarded a remote hire who turned out to be a North Korean operative using a deepfaked identity, and who then attempted to load malware onto his company-issued laptop. Let that sink in.
HR's Newest Adversary: Synthetic Candidates
These aren't sloppy scams. I'm talking about AI-enhanced applications, deepfaked ID photos, and video interviews where candidates are digitally puppeted in real time.
Creepy? Definitely.
Real? You bet.
As I explained to Doug, with the right technology, I could turn up for a virtual interview with a completely different face. I could have a cloned voice.
Deepfakes Targeting Employees
Our usual podcast host and CEO, Andrew, features in this story as well.
I've been getting shady WhatsApp messages from a US number. Allegedly, there's a "top secret transaction" in play. We're acquiring a company. Andy's out of the country. He needs me to act fast. Classic scammy vibes.
Normally I ignore these messages, but sometimes curiosity gets the better of you when you build identity verification tech for a living.
Here's what shocked me:
They weren't clueless. These fraudsters had semi-credible intel about my role, our company, and, most alarmingly, Andy. They had details only someone who'd done their homework would know.
Then came the voice note.
It sounded just like him. Same tone, cadence, mannerisms. They'd likely scraped podcast clips or YouTube footage to create a convincing deepfake voice. Hearing it was… unsettling.
To dig deeper, I sent them a decoy webpage embedded with traceable tech to locate their IP. They took the bait, but what I uncovered was even more disturbing.
They'd set up a real bank account. They answered obscure security questions, the kind of details only someone close to Andy might know. This wasn't a spray-and-pray scam. It was calculated, and dangerously convincing.
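For the technically curious, the "traceable tech" idea is simpler than it sounds: any web endpoint you control can log the IP address and user agent of whoever opens the link. Below is a minimal, illustrative sketch in Python using Flask; the route name and log file are hypothetical, not the actual setup used.

```python
# Minimal sketch of a "decoy page" that logs basic request metadata.
# Assumes Flask is installed; the route and log file names are hypothetical.
from datetime import datetime, timezone

from flask import Flask, request

app = Flask(__name__)

@app.route("/deal-briefing")  # hypothetical decoy URL shared with the fraudster
def decoy_page():
    # X-Forwarded-For is set by most proxies; fall back to the socket address.
    client_ip = request.headers.get("X-Forwarded-For", request.remote_addr)
    user_agent = request.headers.get("User-Agent", "unknown")
    timestamp = datetime.now(timezone.utc).isoformat()

    with open("decoy_hits.log", "a") as log:
        log.write(f"{timestamp}\t{client_ip}\t{user_agent}\n")

    # Serve something plausible-looking so the visitor doesn't get suspicious.
    return "<html><body><h1>Document loading…</h1></body></html>"

if __name__ == "__main__":
    app.run(port=8080)
```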
This is exactly why HR, TA, and leadership teams can't treat deepfake threats as "IT's problem." They're coming for your people, your processes, and your trust.
🎧 Hear the full story on this week's episode of What the FTE?: certn.co/podcast
Stat Check
Gartner predicts that by 2028, one in four job candidates globally will be fake. Meanwhile:
- 58% of HR teams say they struggle to balance speed and thoroughness in hiring (CIPD webinar poll, 2024)
- 17% of hiring managers have already encountered deepfake job candidates during video interviews (ResumeGenius)
- A recent study of 2,000 US and UK residents found that the vast majority of participants couldn't distinguish real content from deepfake images and video
- Roughly one in five consumers (22%) hadn't heard of deepfakes before the study
The illusion of confidence is the enemy of cybersecurity. When you're hiring remotely or globally, the attack surface gets even bigger.
Deepfake Tactics You Should Know
Even the best hiring teams can get duped.
While scale-ups are especially vulnerable (onboarding dozens of new hires at once, racing against headcount targets), the truth is that any company can be a mark. All it takes is one fake candidate slipping through the cracks.
Here's what to watch for in the age of AI-enabled fraud:
- Voice Cloning – "Hey, it's your CFO, just wire that vendor payment…" (Spoiler: it's not your CFO.) AI-generated voice notes are shockingly realistic. If you're not validating high-stakes requests with multi-channel verification, you're playing with fire (see the sketch after this list).
- ID Spoofing with AI-Enhanced Photos – Still accepting static ID images or PDFs uploaded by the candidate? That's fraudster paradise. Tools exist to manipulate photos, blur out discrepancies, and create hyper-realistic fakes. Only live, biometric ID verification can keep up.
- Synthetic Résumés + Ghost Interviews – Coordinated scripts. AI-written CVs. Fake LinkedIn profiles boosted by engagement pods. "Candidates" can conduct full interviews with deepfaked avatars and audio clones. If your process can't sniff out digital deception, you risk being played.
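To make "multi-channel verification" concrete, here is a minimal, hypothetical sketch of the idea: before acting on a high-stakes request, issue a one-time code and have the requester confirm it over a second, independent channel (for example, a phone number already on file, never one supplied in the message itself). The function names are illustrative; this is not a Certn API.

```python
# Minimal sketch of out-of-band ("multi-channel") verification for high-stakes requests.
# All names here are illustrative.
import secrets

def generate_challenge() -> str:
    """Create a short one-time code to be confirmed over a second channel."""
    return f"{secrets.randbelow(1_000_000):06d}"

def verify_request(requester_name: str, read_back_code: str, issued_code: str) -> bool:
    """
    The code is sent to a contact method already on file (e.g., a known phone number),
    NOT to whatever number or account the request itself came from. The requester must
    read it back before the payment or data release goes ahead.
    """
    is_match = secrets.compare_digest(read_back_code, issued_code)
    if not is_match:
        print(f"Verification failed for request from '{requester_name}'. Escalate before acting.")
    return is_match

if __name__ == "__main__":
    code = generate_challenge()
    # In practice the code arrives via the trusted channel; here we simulate a mismatch.
    print(verify_request("CFO (via WhatsApp)", read_back_code="000000", issued_code=code))
```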
The fix? Automate the basics, verify in real time, and stop relying on outdated processes that can't scale with risk. At Certn, we're known for background checks and identity verification that don't slow hiring down. But we've been working on something big, and it's almost here: something built for the era of deepfakes, digital deception, and remote interviews.
For the past few months, our team's been developing a tool that adds an entirely new layer of trust to the hiring process. It's built for HR teams, because you're the ones on the frontlines. You already trust us to verify candidate identities and run fast, compliant checks. Now, we're going further.
We believe this will be a game-changer for virtual interviews and remote hiring: a way to help teams hire fearlessly, even in a world where not every candidate is who they seem.
Stay tuned; we're almost ready to share it with the world.

10 Ways to Deepfake-Proof Your Hiring Process
Not all background screening vendors are created equal; it really depends on how seriously your vendor takes fraud. I can tell you first-hand about a case from Europe.
Before Trustmatic (the ID verification company I founded) was acquired by Certn, I was working with a smaller European background screening vendor. One of their clients was hiring mid-level engineers and almost onboarded a fraudster.
The candidate submitted a Slovak ID with all the right security features. A police clearance. A diploma. Everything looked legitimate. But every document? Fake.
This is the problem with accepting scans and candidate-supplied documents at face value in the vetting process: you risk missing fraud.
At Certn, we go to the direct source of truth and verify data at the source.
Here are 10 ways you can protect yourself:
- Use Biometric Identity Verification – Face match. Liveness detection. Motion prompts. Certn's identity verification tech (yes, I'm proud of it) uses all three to stop imposters before they even start.
- Implement Real-Time ID + Credential Verification – Automated + online > static scans and PDFs. Use platforms that verify IDs and credentials with issuing sources.
- Add Red Team Testing to TA Tech Stack – Have your security team simulate a deepfake attack on your hiring funnel. If they can break it, so can someone else.
- Educate Your Hiring Teams – Your recruiters and coordinators are frontline defenders. Train them on spotting anomalies (laggy interviews, mismatched lip sync, nervous stalling). More specifically, try the tactics below (a simple prompt-generator sketch follows this list):
- Prompt Real-Time Movements – Ask candidates to perform natural, unscripted actions, like touching their nose, placing a hand near their face, or briefly adjusting their glasses. These simple motions often disrupt deepfake overlays that struggle with depth, occlusion, or real-time rendering.
- Shift the Frame – Request the candidate briefly turn their head, look over each shoulder, or reposition themselves in frame. Multi-angle movements expose inconsistencies in AI-generated images, especially around ears, jawlines, and profile contours.
- Introduce Environmental Change – Ask them to hold up a nearby object (e.g., a pen, phone, or coffee mug) or adjust their lighting slightly. Deepfakes can be thrown off by real-world changes that require dynamic adaptation.
- Watch for Human Tells – Pay attention to the subtle stuff. Are their blinks irregular or perfectly timed? Are their expressions too smooth? Authentic humans show micro-expressions, asymmetry, and natural inconsistencies; deepfakes often don't.
- Dig Beneath the Résumé – Go beyond surface-level questions. If they claim to have worked in a specific city, ask about their commute, favourite local coffee shop, or how the team celebrated wins. For job experience, prompt them to reflect: "Tell me about a time something surprised you at that company." Real experience includes nuance. GenAI rarely does nuance well.
- Vet Your Vendors – What's your ATS or background check provider doing to detect synthetic identities? If the answer is "not much," it might be time to rethink that relationship.
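As referenced in the "Educate Your Hiring Teams" item above, here is a small, hypothetical sketch of one way to operationalize those live checks: randomly assemble a short, unscripted sequence of movement, framing, and environment prompts so no candidate (or deepfake operator) can rehearse against a fixed list. It illustrates the tactic only; it is not a Certn product.

```python
# Minimal sketch: generate a random sequence of live, unscripted interview prompts.
# The prompt wording is illustrative; adapt it to your own interview guidelines.
import random

PROMPTS = {
    "movement": [
        "Please touch your nose briefly.",
        "Bring one hand up near your face for a moment.",
        "If you wear glasses, adjust them briefly.",
    ],
    "framing": [
        "Turn your head and look over your left shoulder, then your right.",
        "Lean back so we can see more of the room, then return to frame.",
    ],
    "environment": [
        "Hold up an object within reach (a pen, phone, or mug).",
        "Adjust your lighting slightly, for example by tilting a lamp or screen.",
    ],
}

def build_prompt_sequence(count: int = 3) -> list[str]:
    """Pick one prompt from each category, shuffle the order, and trim to `count`."""
    sequence = [random.choice(options) for options in PROMPTS.values()]
    random.shuffle(sequence)
    return sequence[:count]

if __name__ == "__main__":
    for step, prompt in enumerate(build_prompt_sequence(), start=1):
        print(f"{step}. {prompt}")
```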
With technology being such a big part of background screening now, the vendor-vetting point is even more important. If your buying committee doesn't already include your CISO, IT, or a security lead, it might be time to loop them in. Their perspective is critical, especially when you're evaluating vendors and building out your pre-screening safeguards.
HR Leaders: Don't Just React. Redesign.
A lot of HR leaders I speak with are just now waking up to the deepfake and synthetic fraud threat. While growing awareness is a good thing, I'll be honest: it's also alarming.
Fearmongering isn't my style, but here's the uncomfortable reality: there are likely already bad actors sitting inside organizations today. Yes, even in Canada. In the US. In the UK. Across Europe. These are people hired remotely who do just enough to stay under the radar. They attend the calls and hit the bare minimum, but behind the scenes? They're gathering intel. Scraping data. Seeding vulnerabilities. And when the breach finally happens, you're in reactive mode, playing catch-up after the damage is already done.
Trust in hiring can't be built on gut checks anymore. You need smart systems. Fast tech. Frictionless, candidate-friendly verification.
That's the Certn way: screen fast, hire fearlessly.
When the threat is digital, your defence has to be dynamic. The good news? Many of our clients are already leaning in. They're asking how to upgrade their screening protocols, what we're building next, and how to stay ahead of these risks, not just clean up after them.
We've made it a priority to be at the forefront of not just background checks, but background intelligence, and we're here to help forward-thinking HR and TA leaders do the same.
Don't hesitate to email me if you have any questions about vendor selection or how to deepfake-proof your hiring process. You can also connect with me on LinkedIn.
TL;DR: What You Should Do Next
- Audit your hiring and onboarding process for vulnerability to synthetic identities
- Invest in biometric and real-time verification
- Train TA and HR staff to recognize the signs of deepfake fraud
- Move fast, but don't cut corners
The future of hiring belongs to teams who treat cybersecurity as part of the candidate experience. Because in 2025, trust is built in seconds, and deepfakes can break it just as fast.
Want to see how Certn helps companies fraud-proof hiring? Request a demo.