The Hidden C-Suite Risk Of AI Failures
Summary
This article warns that broad AI exclusions being added to liability insurance policies can leave directors, officers and organisations unexpectedly uninsured for claims ‘related to’ AI, even where AI played only a minor role. Insurers are inserting wide-ranging language that can exclude defence and indemnity for errors, bodily injury, property damage, economic loss and cyber or privacy incidents tied to AI, including third-party AI used by vendors or partners.
Key Points
- Insurers are introducing AI-specific exclusions and endorsements that can be very broad in scope.
- Typical exclusions deny coverage for claims ‘based upon, attributable to, arising out of, or related to’ any use of AI, which can capture minimal or indirect involvement of AI.
- Exclusions often extend to third-party AI, so vendor or partner failures may also be precluded from coverage.
- D&O, E&O and cyber policies may all be affected, creating gaps where one policy excludes an AI-related suit and another declines the follow-on claims.
- Definitions of ‘artificial intelligence’ vary widely; ambiguity increases the risk of disputes at claim time.
- Organisations should review policies closely at each renewal, negotiate the removal or narrowing of AI exclusions, and consider affirmative AI liability products.
- Early engagement of experienced brokers, coverage counsel and risk advisers is crucial to identifying and closing coverage gaps.
Content Summary
The piece explains how proposed and emerging AI exclusions are drafted to be extremely broad, often denying coverage for any claim even remotely connected to AI. It highlights examples in professional liability and D&O forms that exclude errors in AI decision-making, bodily injury or economic loss caused by AI, and breaches or privacy violations traced to AI. Because many organisations use third-party AI services, insurers’ language that applies to ‘any party’ can negate coverage for incidents originating at vendors.
The article also explores related pitfalls: cyber policies that exclude failures of AI-based security tools, professional liability forms that limit cover to services provided by natural persons, and E&O forms that only cover failures of software developed by the insured. All of these carve-outs can leave organisations exposed to direct claims and subsequent investor, derivative or regulatory suits that follow an AI failure. On the positive side, some insurers are developing affirmative AI liability products to fill these emerging gaps.
Context and Relevance
This is highly relevant to boards, C-suite executives, general counsel, risk and insurance teams, and anyone managing digital transformation. As AI becomes embedded across healthcare, finance, software development and client services, insurers are reacting by tightening policy language. That shift means previously relied-upon protections under D&O, E&O and cyber policies may no longer respond as expected.
For organisations expanding into AI or relying on third-party models, the article outlines practical steps: map AI exposures, scrutinise policy definitions and exclusions annually, negotiate precise language or seek affirmative AI cover, and involve brokers and coverage counsel early.
Why should I read this?
Short version: your insurance might not save you if AI trips up. If your organisation uses or plans to use AI, this piece tells you where the nasty holes in cover are likely to be, and what to do about them, quickly and practically. It saves you time by flagging the legal and insurance traps executives often miss.
Source
Source: https://corpgov.law.harvard.edu/2025/09/22/the-hidden-c-suite-risk-of-ai-failures/