Anthropic is lowering the barrier for people looking to switch AI assistants by expanding the Claude memory feature to its free tier. Previously limited to paid subscribers since its initial launch, this capability allows the chatbot to retain user context, preferences, and work styles across multiple conversations.
Alongside the expanded access, Anthropic introduced a new import tool designed to help users seamlessly transfer their chat history and personal data from rival applications like OpenAI’s ChatGPT and Google’s Gemini. By bringing these premium capabilities to everyone, the company aims to make its platform more personalized and accessible without requiring a financial commitment.
How the New Import Tool Works
The new migration system is built around a straightforward, user-friendly process. Anthropic provides a pre-written prompt that individuals can copy and paste directly into their existing AI chatbot. This prompt instructs the competing assistant to compile every stored memory, including behavioral preferences, personal details, project goals, and customized instructions.
The rival AI then outputs this information into a single code block. From there, users simply paste the generated text into Claude’s import tool, which absorbs the background information in one quick step.
According to Anthropic, the goal is to ensure that the time users spend teaching an AI how they work is not lost when moving to a new platform. A new promotional landing page highlights this effort with the tagline, “Switch to Claude without starting over.” The company emphasizes that a user’s first conversation with Claude should immediately feel like their hundredth.
Complete Control Over Saved Context
The Claude memory feature fundamentally changes how people interact with the assistant. Instead of repeatedly explaining the same details or dealing with generic responses, users benefit from an AI that builds on past interactions. To activate this continuity, users can navigate to the “Capabilities” section within their account settings.
Anthropic also ensures that individuals maintain total control over their stored data. Users can easily view, edit, or permanently delete specific memories at any time. For those who prefer a temporary break from personalized responses, the feature can be paused. This keeps previously learned information dormant but preserved until the user chooses to reactivate it.
This memory expansion follows a broader trend of Anthropic offering premium tools at no cost. The company recently opened up access to file creation, Connectors, and customizable Skills for free users, reinforcing its strategy to deliver advanced functionality without a subscription.
Surging Popularity and Productivity Tools
This major update arrives as Claude experiences a significant surge in user adoption. The chatbot recently became the most downloaded free app in the U.S. Apple App Store, overtaking major competitors like ChatGPT and Gemini.
Anthropic continues to position Claude as a highly capable productivity assistant. The company recently launched Claude Code and Claude Cowork, tools designed to handle complex workflows. Additionally, Anthropic rolled out its new Opus 4.6 and Sonnet 4.6 models, which deliver stronger performance for intricate, multi-step tasks such as spreadsheet analysis and coding.
Political Standoff and Defense Department Pushback
Anthropic’s aggressive push for new users is unfolding against a tense political and regulatory backdrop. CEO Dario Amodei has consistently positioned Claude as a safer, more responsible AI alternative. This stance recently led to a high-profile standoff with the Pentagon.
Anthropic rejected a deal with the military, refusing to compromise on its core safety safeguards. The company drew strict red lines against using its AI systems for mass surveillance of American citizens or fully autonomous lethal weapons. Just hours after Anthropic walked away from the agreement, OpenAI struck its own deal with the Pentagon, a move that sparked backlash among some users.
The fallout from Anthropic’s refusal has drawn direct political fire. President Donald Trump has publicly criticized the company, labeling it “woke” and pushing to block its software across federal agencies. Furthermore, Defense Secretary Pete Hegseth formally designated Anthropic as a “supply-chain risk.” The company has called this designation unprecedented and stated its intention to challenge the move in court.
Despite these regulatory challenges, Anthropic is moving forward with its consumer expansion. By making the Claude memory feature free for all users and simplifying the transition from other platforms, the company is actively working to capture a larger share of the highly competitive generative AI market.
