Prompt injection is not SQL injection (it may be worse)

Prompt injection is not SQL injection (it may be worse)

Summary

Prompt injection is a class of vulnerability in generative AI systems where untrusted content included in prompts can be interpreted by a large language model (LLM) as instructions, altering its behaviour. Unlike SQL injection, where a clear separation between code and data can be enforced (for example via parameterised queries), LLMs operate by predicting the next token and do not inherently distinguish ‘instructions’ from ‘data’.

The blog from the NCSC argues that treating prompt injection like traditional code injection is dangerous. Instead, organisations should treat LLMs as “inherently confusable deputies”: the risk cannot be fully eliminated, only reduced. Practical mitigations include raising developer and organisational awareness, designing systems with deterministic safeguards and least privilege, marking or isolating untrusted content, and logging/monitoring for suspicious activity. The piece references ETSI guidance and recent academic work on architectural mitigations.

Key Points

  • Prompt injection arises because LLMs predict the next token and do not distinguish data from instructions.
  • Unlike SQL injection, prompt injection likely cannot be fully fixed by a single technical mitigation such as parameterisation.
  • Treat prompt injection as an “inherently confusable deputy” problem: reduce impact by limiting privileges and tool access when processing untrusted content.
  • Practical controls: developer awareness, secure design (deterministic safeguards), data marking/segregation techniques, and robust logging and monitoring.
  • Be sceptical of vendors that claim they can completely “stop” prompt injection—focus on risk reduction and detection instead.

Context and Relevance

This is important for anyone building or operating systems that embed generative AI—developers, architects, security teams and risk owners. As genAI is integrated into more applications, the potential impact of successful prompt injection grows (especially where LLM outputs drive tools, APIs or privileged actions). The NCSC highlights relevant standards (ETSI TS 104 223) and recent academic work proposing architectural mitigations.

Author note

Punchy take: This is a wake-up call. Prompt injection is not a neat repeat of old web vulnerabilities — it’s a different beast that needs design-level thinking, not just filters. If you care about avoiding a wave of AI-driven compromises, read the recommendations and act on them.

Why should I read this?

Short version: because the NCSC explains why treating prompt injection like SQL injection is dangerous and gives practical, realistic steps to reduce risk. If your app uses LLMs at all, this saves you headaches by flagging the real failure modes and simple design rules to lower impact.

Source

Source: https://www.ncsc.gov.uk/blog-post/prompt-injection-is-not-sql-injection

Leave a Reply

Your email address will not be published. Required fields are marked *