Anthropic Will Use Claude Chats for Training Data. Here’s How to Opt Out
Summary
Anthropic is changing its consumer terms to allow new Claude chat conversations and coding sessions to be used as training data for future models unless users opt out. The update, originally scheduled for 28 September, now takes effect with a privacy-policy change on 8 October 2025. The change also extends Anthropic's data-retention period from roughly 30 days to five years for users who allow their data to be used for training. Commercial-tier accounts (including many government and education licences) are excluded: their conversations will not be used for model training.
Key Points
- Anthropic will start using new Claude chats and coding sessions as training data unless users opt out.
- The policy change is scheduled to take effect via a privacy update on 8 October 2025 (previously planned for 28 September).
- Opt-out is available under Privacy Settings via a toggle labelled “Help improve Claude”; the toggle is set to ON by default.
- New users are prompted at signup; existing users may see a pop-up explaining the choice.
- The change does not automatically apply to your existing chat history; however, if you reopen an old conversation, that thread becomes eligible for training.
- Anthropic is extending data retention from roughly 30 days to five years for users who allow training use; users who opt out keep the shorter retention period.
- Commercial-tier (licensed) accounts are exempt from having their conversations used for training.
- Claude now joins other major AI assistants (ChatGPT, Gemini) in using conversations for training by default unless users opt out.
Content Summary
Anthropic previously did not use consumer chats to train its generative models. With this update, the company says real-world interactions help improve accuracy and usefulness, so it will include new chat logs and coding sessions for training unless the user switches off the setting. The setting appears as a default-on toggle during sign-up and in account Privacy Settings under “Help improve Claude.” If you prefer not to contribute, toggle that option off and leave it off. The policy also lengthens how long Anthropic stores data for users who leave the setting on, up to five years, which raises privacy considerations beyond training itself.
Context and Relevance
This matters if you use Claude for private conversations, brainstorming, or coding: your prompts and code could be fed back into future models. The change reflects a wider industry trend in which companies rely on real user interactions to refine LLM behaviour, and it narrows a privacy distinction that previously set Claude apart. The five-year retention window is especially notable: for users who leave training enabled, data is held for years rather than weeks, which increases exposure risk. For developers, businesses and privacy-conscious users, the update affects both intellectual-property exposure (coding sessions are included) and compliance considerations.
Why should I read this?
Quick version: if you use Claude, check your settings now. It’s the kind of policy tweak that quietly changes how your chats get used — and for how long they’re kept. We’ve saved you the hassle: toggle off “Help improve Claude” in Privacy Settings if you don’t want your messages or code feeding future models.
Source
Source: https://www.wired.com/story/anthropic-using-claude-chats-for-training-how-to-opt-out/