Yesterday, Anthropic quietly dropped a bombshell. Unless users explicitly opt out by September 28, it will use consumer chat data to train future AI models. This is a stunning reversal from Anthropic’s previous position as the privacy-first alternative to ChatGPT.
Previously, Anthropic automatically deleted user conversations after 30 days. Under the new policy, conversations from users who don’t opt out will be retained for five years.
The new policy affects all consumer tiers: Claude Free, Pro, and Max users, plus those using Claude Code. Importantly, business customers using Claude for Work, Claude Gov, Claude for Education, or API access through services like Amazon Bedrock remain unaffected.
This creates a clear two-tiered privacy system where enterprise customers get protection while consumers become training data.
Anthropic frames the change around improving “model safety” and helping future Claude models “improve at skills like coding, analysis, and reasoning.” The company emphasizes user choice and the ability to change settings at any time.
This is total nonsense, of course. In reality, training AI models requires vast amounts of high-quality conversational data, and accessing millions of Claude interactions will provide exactly the kind of real-world content that can improve Anthropic’s competitive positioning against rivals like OpenAI and Google.
This isn’t happening in isolation. Google recently announced a similar opt-out policy for Gemini, set to take effect on September 2. That policy is similarly broad, covering user-uploaded files, photos, videos, and even screenshots that users ask questions about. The entire industry is converging on the same strategy: make data collection the default and require users to actively opt out.
If your company uses Claude, review your access method immediately. Consumer accounts now default to data sharing. Enterprise accounts maintain privacy protections, but at significantly higher cost. And you'll probably want to remind your workforce to properly configure their personal AI accounts, since employees may inadvertently enter sensitive company data when using Claude on personal devices.
To opt out today, go to Settings > Privacy. Under the Privacy settings, you'll see a "Help improve Claude" toggle. Switch it off, accept the terms, and you're done.
The deadline is September 28, 2025. After that date, users must make their selection to continue using Claude. I think we should consider this a preview of coming industry standards. Privacy-by-default will quickly transition to privacy-by-choice, with the burden shifting to users to protect their own data.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.