Anthropic’s latest consumer policy update has left many users feeling more puzzled than reassured. On August 28, the company announced that chats and coding sessions from Claude’s Free, Pro and Max plans may now be used for model training, unless individuals explicitly opt out. Yet, the process of opting out seems less straightforward than suggested.
At the core of this update is a new five-year data retention rule, replacing the previous 30-day limit for users who allow their data to be used for training. Anthropic argues that this will strengthen safeguards against scams and abuse, while also improving Claude’s coding, analysis and reasoning skills. Yet, because the option is enabled by default, critics worry that many users may not realise what they’re consenting to.
The update does not apply to Claude’s enterprise, education or government versions, nor to API access via Amazon Bedrock or Google Cloud’s Vertex AI. For millions of individual users, however, unless one intervenes, their conversations and code could now be stored and processed for years.
Confusion Over Data Sharing Could Lower Adoption
The ambiguity has struck a nerve in workplaces, where many developers already tread carefully when it comes to AI use.
Eduard Ruzga, former staff engineer at Prezi, told AIM that the shift could complicate Claude’s adoption within companies.
“Up until now, with ChatGPT and Claude, the default was that data was not used in learning,” he said. That default made it simpler for company policies to permit their use. “Now things will be much more confusing when it comes to allowing the use of Claude for work,” he explained.
Ruzga suggested businesses may be forced into stricter rollouts or dedicated team plans to avoid the risk of unintentional data leaks.
In a LinkedIn post, he also mentioned, “I can only imagine what kind of mess that could mean for companies in terms of policies of how to allow such tool use at work with internal information.”
That corporate unease is already surfacing. Denis Stebunov, CTO of ivelum, shared on X that his developers had been instructed to disable model training in privacy settings.
While praising Claude Code as “best in class”, he warned that leaving the feature on would expose proprietary client code, something their firm cannot risk. His frustration was blunt. The update, he argued, “is not user-friendly”, and if such practices persist, migration to alternatives will be inevitable.
Developers and Users Are Not Happy
The backlash has also taken on an emotional edge. AI researcher Pierre-Marcel De Mussac described the change as Anthropic “reversing their entire privacy stance”. In his view, the opt-out toggle was buried, the retention window stretched unreasonably and everyday users were left unprotected compared to enterprise customers.
“Big businesses protected, users thrown to the wolves. Classic,” he wrote on X.
Joel Latto, threat advisor at F-Secure, argued that Anthropic’s motives are less about safety and more about necessity.
“LLM companies are running out of usable training data, which makes retaining all chat logs essential for their further development,” he told AIM.
For him, the real problem lies in the defaults, which are opt-out rather than opt-in and which he described as “by design anti-user, anti-privacy”. Latto also warned that many users may not realise that opting out later does not erase data already collected.
In his view, Anthropic’s emphasis on AI safety in its announcement serves more as polish to soften what is essentially a business-driven decision. At the same time, Latto commended the company’s transparency in its Threat Intelligence Reporting.
Anthropic, for its part, maintains that user choice remains central, promising that deleted conversations will not be used in training and that settings can be changed at any time. The optics, however, tell a different story: the default settings nudge users towards data sharing, while the responsibility to protect sensitive material falls squarely on individuals.
The broader question is whether this shift undermines trust. Enterprises may shield themselves with team plans, but independent professionals and casual users face the murkier end of the policy. For a company built on trust and safety as its brand identity, this new chapter in data collection may test just how much control users truly have over their data.