Fake military IDs, bogus résumés: How North Korean and Chinese hackers use AI tools to infiltrate companies and other targets

Summary

Nation-state-linked hackers from North Korea and China are increasingly using generative AI — including ChatGPT, Anthropic’s Claude and Google’s Gemini — to make phishing, social-engineering and long-term intrusion campaigns far more effective. Recent findings show North Korea’s Kimsuky used ChatGPT to draft a fake South Korean military ID for a phishing email, while Anthropic and Google have flagged multiple cases where their models were abused to write résumés, cover letters, code and operational plans.

AI tools are lowering the skill barrier: attackers can produce convincing fake IDs, pass technical hiring screens with fabricated résumés and even secure remote jobs that give them access to corporate systems. Providers say they are improving defences, but attackers have found prompt-engineering workarounds and used AI as a “full-stack” assistant across long campaigns.

Key Points

  • North Korea’s Kimsuky used ChatGPT to generate a mock South Korean military ID attached to phishing emails impersonating a defence institution.
  • Anthropic reported North Korean actors using Claude to craft résumés and portfolios, pass coding tests and maintain fraudulent remote employment at US firms.
  • A China-based actor used Claude as a technical adviser for an extended cyber campaign targeting Vietnamese telecoms, agriculture and government databases.
  • OpenAI found Chinese actors using ChatGPT to generate brute-force scripts, conduct reconnaissance on US defence networks and produce influence-operation content with fake profiles.
  • Google reported hackers using Gemini to troubleshoot code, draft cover letters and scout IT job postings; its safeguards blocked more advanced misuse in some cases.
  • Security experts warn genAI lets even novices produce realistic scams, deepfakes and personalised phishing at scale, increasing risk to small businesses and large enterprises alike.
  • Model providers are updating detection and mitigation methods, but prompt-engineering and creative misuse remain persistent challenges.

Content summary

The article collates reports from security firms and AI companies showing how state-linked groups weaponise generative AI. Specific incidents include fake military IDs generated for Kimsuky phishing lures, résumé and job-fraud operations enabled by Claude, and Chinese actors using chatbots to write attack code and influence content. Providers (OpenAI, Anthropic, Google) have documented misuse and say they are refining safeguards, but attackers adapt quickly by reframing prompts or switching between tools. The net effect is faster, cheaper and more convincing social-engineering and intrusion techniques.

Context and relevance

This matters because it shifts the threat model for hiring, identity verification and perimeter security. Organisations that rely on remote hiring, email verification or lightweight identity checks are particularly vulnerable. The trend also highlights two industry pressures: the need for stronger vetting and authentication practices, and continued investment by AI providers in misuse detection. For security teams, supply-chain managers and HR departments, the piece signals an immediate need to reassess controls around remote access and identity proofing.

Author’s take

Punchy and direct: nation-state hackers are weaponising AI to do the boring, repetitive parts of espionage — drafting convincing lies, automating reconnaissance and even helping execute attacks. If you manage hiring, access or incident response, this isn’t a future problem — it’s happening now and will only get cheaper for attackers.

Why should I read this?

Short answer: because it shows how generative AI is being used in real-world espionage and fraud, not just theory. Read it to understand practical risks to hiring, ID checks and remote access — and to get a sense of how attackers are combining human social engineering with AI tools to slip past defences.

Source

Source: https://www.businessinsider.com/north-korea-china-hackers-infiltrate-companies-ai-resumes-military-id-2025-9
