How deepfakes and AI are reshaping workplace risk: What HR leaders need to know
Summary
The article, authored by Jonathan Isaacs (Asia Pacific Chair, Baker McKenzie’s Employment & Compensation Practice), outlines how AI-driven deepfakes are creating new operational, financial and reputational risks for organisations. It explains what deepfakes are, details the main ways they can harm organisations (fraud, harassment, privacy breaches and evidence manipulation), and sets out five practical actions HR and legal teams should take now to reduce exposure.
Key takeaways are practical: train staff to spot manipulation, tighten verification and approval processes, update policies to ban unauthorised use of employees' images and data, watch for AI-manipulated evidence in internal investigations, and foster a culture where suspicious activity is reported.
Key Points
- Deepfakes are AI-generated audio, image or video manipulations that can make people appear to say or do things they did not.
- Financial fraud risk: attackers can impersonate senior leaders (voice/video) to trick staff into urgent or unauthorised payments.
- Harassment risk: manipulated sexually compromising images or videos of employees may amount to sexual harassment and expose employers to liability.
- Data privacy exposure: deepfakes created using employer-held personal data can trigger investigations and regulatory risk under privacy laws.
- Evidence manipulation complicates internal investigations and increases the risk of wrongful disciplinary action or unfair dismissal claims.
- Five recommended mitigations: invest in training, be vigilant when evaluating evidence, enforce multi-layer verification (dual authorisation/call-backs), update policies to specifically prohibit deepfake misuse, and encourage clear reporting channels.
Context and relevance
As generative AI becomes more accessible and convincing, HR and legal teams are shifting from treating deepfakes as a theoretical concern to acting as frontline risk managers. The threats described affect payroll and finance controls, employee wellbeing, regulatory compliance and the integrity of internal investigations, so the issue spans multiple functions, not just IT or security.
For organisations operating in APAC and jurisdictions with specific harassment and privacy laws (for example Hong Kong's Sex Discrimination Ordinance and Personal Data (Privacy) Ordinance), the legal and reputational consequences are immediate. The guidance is tightly aligned with current trends: increasing AI sophistication, rising social engineering attacks, and regulator focus on data protection.
Why should I read this?
Short and sharp: this is your quick checklist for avoiding getting stitched up by AI trickery. If you're responsible for people, pay or investigations, the article tells you what to watch for and what to put in place now, so you don't wake up to a costly scam, a harassment claim or a ruined investigation.
Author style
Punchy and pragmatic: Jonathan Isaacs urges HR leaders to act decisively. If you care about protecting finances, reputations and employee wellbeing, treat these recommendations as immediate priorities rather than optional nice-to-haves.