Deepfakes—AI-generated fake videos, photos or audio—are increasingly being used to impersonate job candidates and company executives, creating new challenges for business leaders. In 2025, HR and recruiting professionals must consider: Is the person they’re interacting with real or an AI-generated mirage? Given the risks involved, this question is critical, experts say.
In today’s increasingly virtual world, trust is more important, yet more difficult to establish, than ever, particularly in the context of enterprise hiring.
“We are living in a remote world,” says Andrew Bud, CEO of iProov, a global technology company specializing in biometric verification and authentication solutions. He warns businesses of the escalating challenges posed by synthetic identities and AI-generated deepfakes.
The deepfake dilemma
Recent research from iProov reveals that while 57% of 16,000 surveyed people across eight countries believe they can easily distinguish real videos from deepfakes, this confidence is largely misplaced—fewer than one in four participants could accurately detect high-quality deepfakes.
This gap between perception and reality shows how difficult it has become to judge the authenticity of digital media, and, in turn, to verify identities and protect sensitive information.
Jon Penland, COO of secure website hosting solution Kinsta, also warns of this rising security risk. “AI-powered social engineering attacks are on the rise,” Penland explains. “Attackers are using AI to clone voices and likenesses to impersonate executives and manipulate employees into actions they wouldn’t otherwise take.”
Penland emphasizes the need for HR teams to use proactive identity verification processes to combat AI-driven tactics. “If you haven’t experienced AI-based impersonation attacks yet, it’s likely only a matter of time—preparation is essential,” he warns.
Deepfakes target HR departments
Michael Marcotte, CEO and co-founder of enterprise-grade cybersecurity firm artius.iD, warns that HR departments are especially vulnerable to high-profile deepfake scams because of their access to extensive personal and corporate data.
Marcotte, founder of the non-profit National Cybersecurity Center (NCC), points to a notable attack where cybercriminals used a deepfake to impersonate the CEO of advertising giant WPP during a May 2024 Microsoft Teams call. Although this was not a direct attempt to steal employee information, HR leaders should be aware that they could be targeted through deepfake impersonations.
HR teams may also need to educate employees on how to recognize and avoid this type of fraud, and to build hiring strategies that bring more cybersecurity professionals on board.
“Many corporations are now far more exposed to cyber threats than they’re actually aware of,” says Marcotte.
He says that responsibility for vigilance can’t be placed fully on tech leaders, adding that “CIOs and CTOs have been unconsciously complicit in this, as they have slacked on teaching the hard skills necessary to protect a corporation from the myriad of AI-enabled threats out there.”
Marcotte says the increasing sophistication of deepfake scams is catching HR teams off guard. HR has traditionally considered itself less vulnerable to cyberattacks than finance or infrastructure departments, which are seen as more lucrative targets.
However, Marcotte explains that HR departments handle sensitive personal data that can be exploited in several ways. This includes selling employee data on the dark web, training future deepfakes and manipulating hiring processes to gain access to corporate systems and intellectual property.
He emphasizes the urgent need for HR executives to strengthen their cybersecurity defenses, noting that organizations must act now to prevent becoming the next target of deepfake-fueled attacks.
Remote hiring and synthetic identities: the KnowBe4 incident
Bud says that the rise of remote hiring is putting HR at risk of falling for deepfakes. A case in point is a July 2024 incident involving KnowBe4, a leading provider of security awareness training. The HR team at KnowBe4 conducted four video interviews with a job applicant, who appeared to match the photo provided.
Background checks and other pre-hiring steps were completed without issue, but the applicant was using a stolen, AI-enhanced photo. After onboarding, the new hire deliberately installed malware, which was detected by KnowBe4’s security software.
When the organization shared the suspicious activity with cybersecurity experts and the FBI, it was confirmed that the new hire was a North Korean operative.
Stu Sjouwerman, CEO of KnowBe4, shared this story with clients, offering 10 proactive steps to avoid falling victim to similar tactics. He clarified that no data was lost or compromised but emphasized the importance of recognizing this as an organizational learning opportunity. The incident underscores the evolving threat of synthetic identities, which are becoming harder to detect and protect against.
The financial and operational implications of deepfakes
The risks of deepfake and synthetic identity attacks are not hypothetical. A deepfake scam targeting British engineering firm Arup resulted in a $25 million loss after an employee was deceived by a synthetic video call in early 2024. This incident highlights the massive financial and reputational risks that businesses face.
Moreover, a recent report from cybersecurity company CyberArk found that nearly two-thirds of office workers admit to prioritizing productivity over cybersecurity practices, while 80% access work applications from personal devices. CyberArk researchers say both behaviors make it easier for cybercriminals to exploit vulnerabilities.
iProov’s Bud notes that the social threat posed by deepfakes has been a concern for years, but as technology has matured, bad actors now have more sophisticated tools at their disposal.
Enterprises are beginning to realize they’re not doing enough to mitigate these risks, with 70% of technology decision-makers acknowledging the potential impact of AI-enabled cyberattacks, according to iProov research.
Despite this awareness, a significant gap remains between understanding the threat and acting on it. While nearly two-thirds of organizations are implementing cybersecurity measures to combat deepfakes, 62% worry their efforts are insufficient, pointing to the urgent need for more comprehensive and proactive strategies, according to CyberArk research.
artius.iD’s Marcotte says that business leaders must prioritize investments in cyber skills development.
“I’m not talking about annual training days that teach employees not to open a phishing email and then never discuss anything cyber again,” he warns. “I mean teaching comprehensive, robust cyber expertise to security experts.” As more organizations focus on streamlining operations with AI, Marcotte says, they risk neglecting the vital need for skilled cybersecurity experts, who are increasingly rare.