Generative artificial intelligence – ITSAP.00.041 – Canadian Centre for Cyber Security

Summary

This Canadian Centre for Cyber Security awareness guide (ITSAP.00.041, December 2025) explains what generative AI is, where it is being used across industries, and the cyber risks it introduces. It outlines common applications (image, video, language, robotics, entertainment) and sector use cases (healthcare, software development, marketplaces, agriculture, education and cyber security). The publication lists potential threats such as misinformation, sophisticated phishing, data privacy loss, malicious or buggy code, poisoned datasets, bias and intellectual property exposure. Practical mitigation advice is provided for organisations and individuals: enforce strong authentication, patch promptly, monitor networks, train staff, choose vendors carefully, limit sensitive prompts, and adopt an AI security policy with oversight.

Key Points

  • Generative AI creates new content (text, images, audio, code) by modelling large datasets, distinct from traditional pattern-recognition AI.
  • Large language models (LLMs) such as ChatGPT, Copilot and LaMDA have powered many conversational and content-generation tools since late 2022.
  • Uses span many sectors: faster diagnoses in healthcare, code generation in software, personalised customer communications in business, automation in agriculture and adaptive learning in education.
  • Major risks include misinformation/disinformation, highly convincing spear-phishing, data privacy leaks, creation of malicious code and propagation of biased content.
  • Attackers can abuse generative AI to scale scams, write malware, poison training data or exfiltrate intellectual property.
  • Users must be sceptical of AI outputs—they can be incorrect, nonsensical, biased or incomplete and should be verified against credible sources.
  • Organisational defences: enforce multi-factor authentication, apply timely patches, monitor networks for anomalies, and maintain cyber hygiene.
  • Operational controls: establish an AI risk policy, vet vendors’ data practices, restrict sharing of PII or sensitive corporate data in prompts, and require human review of outputs.
  • Employee training and easy reporting channels for suspected phishing or social engineering remain crucial as AI-enhanced attacks rise.
  • Individuals should use strong passwords, enable MFA, avoid public Wi-Fi, limit what they share with AI tools and verify suspicious content or requests.
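The "restrict sharing of PII in prompts" control above can be partially automated. As a minimal sketch (not from the CCCS guidance), the hypothetical helper below strips obvious personal identifiers, here email addresses and North American phone numbers, from text before it is sent to a third-party generative AI service; the patterns and placeholder labels are illustrative assumptions, and a real deployment would need broader coverage and human review:

```python
import re

# Illustrative PII patterns; a production redactor would cover many more
# identifier types (names, addresses, account numbers, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[ -.]?)?\(?\d{3}\)?[ -.]?\d{3}[ -.]?\d{4}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace matched PII with a labelled placeholder before prompting."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

Running such a filter at the boundary between internal systems and an external AI tool enforces the policy mechanically rather than relying on each user to remember it, while keeping a human in the loop for anything the patterns miss.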

Context and relevance

Generative AI is mainstream and rapidly evolving; this guidance positions cybersecurity teams and everyday users to recognise new threat vectors that scale social engineering and automated attacks. For public- and private-sector organisations, the document ties AI-specific risks to existing cyber controls—showing practical steps rather than purely theoretical warnings. It is directly relevant to anyone responsible for risk, IT security, compliance, software development or digital services, and complements other CCS guidance on AI, MFA and network monitoring.

Why should I read this?

Short answer: because generative AI isn’t sci-fi any more—it’s in your inbox, your dev pipeline and possibly your CRM. This quick guide tells you what to watch for and the straightforward steps to stop AI from becoming a liability. No jargon, just useful actions you can implement now.

Source

Source: https://cyber.gc.ca/en/guidance/generative-artificial-intelligence-ai-itsap00041
