Anthropic Will Now Train Claude on Your Chats: Here's How to Opt Out

Anthropic announced today that it is updating its Consumer Terms and Privacy Policy, with plans to train its Claude AI chatbot on user data.

New users will be able to opt out at signup. Existing users will receive a popup that allows them to opt out of Anthropic using their data for AI training purposes.

The popup is labeled "Updates to Consumer Terms and Policies." When it appears, turning off the "You can help improve Claude" toggle before accepting prevents chats from being used for training, while accepting with the toggle on allows Anthropic to use all new or resumed chats. Users will need to make a choice by September 28, 2025, to continue using Claude.

Opting out can also be done by going to Claude's Settings, selecting the Privacy option, and toggling off "Help improve Claude."

Anthropic says that the new training policy will allow it to deliver "even more capable, useful AI models" and strengthen safeguards against harmful usage like scams and abuse. The updated terms apply to all users on Claude Free, Pro, and Max plans, but not to services under commercial terms like Claude for Work or Claude for Education.

In addition to using chat transcripts to train Claude, Anthropic is extending data retention to five years, so if you opt in to having your data used for training, Anthropic will keep your information for a five-year period. Deleted conversations will not be used for future model training, and for users who do not opt in, Anthropic will continue to retain information for 30 days, as it does now.

Anthropic says that a "combination of tools and automated processes" will be used to filter sensitive data, with no information provided to third parties.

Prior to today, Anthropic did not use conversations and data from users to train or improve Claude, unless users submitted feedback.


Top Rated Comments

turbineseaplane
23 weeks ago
gross

I hate switcharoos like this
Score: 24 Votes
dontwalkhand
23 weeks ago
Deleted all my AI apps because they are all worthless. Inaccurate mess. Literally pointless.
Score: 17 Votes
canadianreader
23 weeks ago

Prior to today, Anthropic did not use conversations and data from users to train or improve Claude, unless users submitted feedback.
This is the main reason many ChatGPT users switched to Claude. Enshi**ification continues.
Score: 12 Votes
mdatwood
23 weeks ago

Despite these limitations they're still very useful tools. Just be sensible about what you share.
Yeah, basically treat them like you would the open internet.
Score: 11 Votes
routine_analyst
23 weeks ago
this has been the plan all along. create a compelling product, get you to use it, you train it for free (you're paying a fee) and then it replaces you a few years down the road. why have human employees when you can have AI bots that have no rights?
Score: 8 Votes
novagamer
23 weeks ago
Note: you need to delete your conversations for the 30 day window to apply.

Also, if you violate trust and safety and it gets flagged by their systems, it's 2 years of retention and 7 years of the classification score.

TL;DR don't do anything extraordinarily nefarious with any of these tools, which should be obvious, but people that might do those things are dense.

The fact that they do delete data within 30 days of you deleting a conversation is still notable and commendable; OpenAI may not train on your data if you opt out, but right now they aren't deleting anything unless you have a ZDR policy with them, due to the NYT lawsuit.

If the outcome of that lawsuit is in OpenAI's favor they will purge the backups, if not and especially if it becomes material for discovery processes, oh boy.

TL;DR #2: Don't use ChatGPT for anything sensitive at all, full stop.

Despite these limitations they're still very useful tools. Just be sensible about what you share.
Score: 8 Votes