Claude's Import Memory Feature: Switch AI Assistants Without Starting Over
You spent four months teaching your AI assistant who you are.
It knows you write in first person. It knows you prefer bullet points over paragraphs for summaries. It knows you work in TypeScript, not Python. It knows the name of your product, your target audience, the competitors you care about, and the tone you want for every email. You didn't set this up in one sitting — it accumulated over hundreds of conversations, small corrections, and explicit instructions.
Then Anthropic releases a new Claude model that handles your core use case dramatically better than what you're using now.
And switching means starting over from zero.
This has been one of the most underappreciated friction points in AI adoption. The context you build with an AI assistant is not portable. Until now.
What claude.com/import-memory Does
Anthropic has launched a feature at claude.com/import-memory that solves this problem directly. It allows you to take the context, preferences, and working style you've accumulated in another AI tool — ChatGPT, Gemini, or any other assistant — and import it into Claude's memory system.
The result is that Claude can start a conversation already knowing the things about you that would otherwise take months to communicate organically.
The feature works through a two-step process:
Step 1: Export your context from your current AI tool. Claude provides a specific prompt you run inside ChatGPT, Gemini, or your current assistant. This prompt instructs the model to produce a structured summary of everything it knows about you: your communication preferences, recurring projects, working style, domain knowledge, and any specific instructions you've given it over time.
Step 2: Paste the output into Claude's memory settings. You take that structured summary and paste it directly into Claude's memory configuration. Claude stores it as persistent context that applies to every future conversation.
That's it. Two steps, and Claude knows what your previous assistant knew.
What Context Is Worth Migrating
Not all context is equally valuable to transfer. Before you run the export prompt, it helps to think about what actually shapes your AI interactions day to day.
Working Style and Communication Preferences
These are high-leverage. If you always want responses in a certain length, always want code examples included, always want the assistant to challenge your assumptions rather than agree — that context saves you from re-specifying it in every conversation. "Keep responses concise and direct" or "always include a counterargument" are the kinds of instructions that have compound value over time.
Project and Domain Background
If you work on a specific product, codebase, or subject area, the background context is invaluable. The tech stack you use, the architecture decisions that are already made, the constraints that aren't worth re-litigating — all of this helps Claude give you answers that are actually applicable to your situation rather than generic best-practice advice.
Format Preferences
Do you want Markdown output always? Tables when comparing options? Numbered steps for procedures? These seem small but they shape every response. Migrating format preferences means you stop spending tokens specifying structure every time you open a new conversation. If you want to understand why Markdown formatting matters so much for AI interactions, see our guide on how Markdown improves LLM output quality.
Specific Instructions and Constraints
"Never suggest switching to a different framework." "Always assume I'm using the latest stable version." "Don't add disclaimers to code snippets." These are the instructions that experienced AI users accumulate to prevent the same frustrations from recurring. They're worth migrating in full.
What Not to Migrate
Stale context is worse than no context. Before migrating, review the export output and remove anything that's no longer true: a project you finished six months ago, a tool you've stopped using, preferences that were specific to how an older model behaved. Outdated context can actively mislead Claude.
Step-by-Step: How to Use the Import Memory Feature
Here is the complete process from start to finish.
1. Go to claude.com/import-memory
Navigate to the import page. You'll find the prompt that's designed to extract structured context from your current AI assistant.
2. Copy the Export Prompt
The page provides a prompt specifically designed to elicit a well-structured memory summary. Copy it exactly as provided. The format matters — Claude's import system is designed to parse a specific structure.
3. Run the Prompt in Your Current AI Tool
Open your existing AI assistant (ChatGPT, Gemini, or whichever tool you're migrating from). Paste the export prompt and run it. The assistant will produce a structured document covering:
- Your name and role
- Communication style preferences
- Active projects and their context
- Technical environment (tools, languages, frameworks)
- Recurring instructions and constraints
- Format preferences
This output is your portable context document.
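To make the shape of this document concrete, here is a hedged illustration of what a curated export covering those sections might look like. All names, projects, and details are placeholders, and the headings are illustrative rather than a format Claude's import system prescribes:

```markdown
## Name and Role
Jordan, frontend lead at a B2B SaaS startup.

## Communication Style
Concise and direct; challenge my assumptions rather than agree.

## Active Projects
Rebuilding the billing dashboard; migration to the new design system is in progress.

## Technical Environment
TypeScript, React, Node.js. Never suggest switching frameworks.

## Recurring Instructions
Assume the latest stable version of every tool. No disclaimers on code snippets.

## Format Preferences
Markdown output; tables for comparisons; numbered steps for procedures.
```

Whatever structure your export prompt produces, keeping each category under its own heading makes the review-and-edit step faster, because outdated entries are easy to spot and delete in place.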
4. Review and Edit the Output
Before importing, read through the entire output. Look for:
- Anything that's outdated or no longer accurate
- Projects or contexts that were specific to work you've completed
- Preferences that were workarounds for limitations in your old tool, not genuine preferences
- Any sensitive information you'd rather not store in Claude's memory
This review step is important. You're not just copying data — you're curating context.
5. Paste into Claude's Memory Settings
Go to Claude's memory settings (accessible from your account settings at claude.ai) and paste the reviewed context. Claude will store this as persistent memory that applies to all future conversations.
6. Test with a Representative Task
Don't just trust the import — verify it. Open a new Claude conversation and ask Claude something that should draw on your imported context. Ask it to help with a task in your domain, request output in your preferred format, or ask it to summarize a project background. If it responds correctly, the migration worked. If something seems off, go back to memory settings and adjust.
Where Web2MD Fits Into This Workflow
Here's a scenario that comes up more often than you'd expect.
Your context isn't only stored inside your AI tool's memory. A significant part of what shapes your AI interactions lives on the web: the help documentation for the tools you use, the AI conversations you've saved as web pages, the prompt libraries you've bookmarked, and the workflow guides you keep coming back to.
When you're building your memory import document, you may want to pull from these sources. The problem is that raw web pages are messy. If you try to manually copy-paste from a documentation site, a prompt library, or a saved conversation page, you'll get text full of navigation menus, ads, formatting artifacts, and boilerplate that buries the actual content.
This is where Web2MD is useful.
Install the Web2MD Chrome extension, navigate to any web page you want to reference — a help article, a saved ChatGPT conversation, a prompt engineering guide — and click the extension. Web2MD converts the page to clean, structured Markdown. You then paste that Markdown into your preferred AI tool to help synthesize your context document, or directly into the relevant section of your Claude memory import.
The difference is significant. Instead of a wall of messy text that the AI has to work around, you get structured content that maps clearly to the memory format Claude expects. The import ends up cleaner and more useful.
This matters particularly for:
- Tool documentation — If part of your context is "I work with [specific tool] and here are its key constraints," you can convert that tool's documentation to Markdown and include the relevant sections in your memory document.
- Saved AI conversations — If you've saved particularly useful conversations as bookmarks or exported them as web pages, Web2MD gives you a clean Markdown version you can use as source material.
- Prompt libraries — Sites like PromptHero or community-maintained prompt repositories can be converted and mined for instructions you want to carry into your Claude memory.
- Personal workflow notes — If you keep workflow notes in a wiki or note-sharing tool that's web-accessible, you can convert those pages and use them as input when generating your memory export.
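Web2MD itself is a point-and-click browser extension, but if you want to batch-convert many saved pages as part of a scripted pipeline, the core idea — keep headings, paragraphs, links, and list items; drop navigation and script boilerplate — can be sketched with Python's standard library alone. This is an illustrative sketch of the technique, not Web2MD's implementation; the class and function names are invented for this example:

```python
from html.parser import HTMLParser


class MarkdownExtractor(HTMLParser):
    """Minimal HTML-to-Markdown sketch: keeps headings, paragraphs,
    links, and list items; skips common boilerplate containers."""

    SKIP = {"script", "style", "nav", "header", "footer", "aside"}

    def __init__(self):
        super().__init__()
        self.out = []        # collected Markdown fragments
        self.skip_depth = 0  # > 0 while inside a boilerplate container
        self.href = None     # target of the link currently being rendered

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.skip_depth += 1
        elif tag in ("h1", "h2", "h3"):
            self.out.append("\n" + "#" * int(tag[1]) + " ")
        elif tag == "li":
            self.out.append("\n- ")
        elif tag == "p":
            self.out.append("\n")
        elif tag == "a":
            self.href = dict(attrs).get("href")
            if self.href:
                self.out.append("[")

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.skip_depth:
            self.skip_depth -= 1
        elif tag == "a" and self.href:
            self.out.append(f"]({self.href})")
            self.href = None
        elif tag in ("h1", "h2", "h3", "p"):
            self.out.append("\n")

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(data)


def html_to_markdown(html: str) -> str:
    parser = MarkdownExtractor()
    parser.feed(html)
    # Collapse the blank lines left behind by dropped tags.
    lines = [ln.rstrip() for ln in "".join(parser.out).splitlines()]
    return "\n".join(ln for ln in lines if ln).strip()


page = (
    '<html><body><nav>Home | Docs | Login</nav>'
    '<h1>Setup Guide</h1>'
    '<p>See the <a href="https://example.com/docs">docs</a>.</p>'
    '<ul><li>Install</li><li>Configure</li></ul>'
    '</body></html>'
)
print(html_to_markdown(page))
```

Running this drops the navigation menu and yields a heading, a paragraph with an inline link, and a two-item list — the kind of clean input that pastes well into a memory document. A real converter handles far more (nested lists, tables, code blocks, relative URLs), which is why a purpose-built tool beats a one-off script for everyday use.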
The flow looks like this:
Web pages with relevant context
↓
Web2MD conversion
↓
Clean Markdown
↓
Use as source material for memory export
↓
Paste into Claude memory settings
↓
Claude starts with full context
What This Means for AI Portability Going Forward
The import memory feature reflects something Anthropic is betting on: that users who stick with an AI assistant long enough to build real context are exactly the users worth winning. Removing the switching cost is a direct play for that audience.
For you, it means the months you've spent customizing your AI workflow are no longer locked to a single platform. You can evaluate new models on their actual capability rather than dreading the re-onboarding cost. For tips on optimizing your ChatGPT and Claude Markdown workflow once you've migrated, we have a dedicated guide.
The two-step migration takes less than 15 minutes for most users. The result is a Claude that already knows how you work, what you're building, and how you want to communicate.
That's a meaningful shift in what switching AI assistants actually costs.
Get Started
- Go to claude.com/import-memory and copy the export prompt
- Run it in your current AI assistant and review the output
- Install Web2MD to convert any web-based context sources to clean Markdown
- Paste your curated context into Claude's memory settings
- Test with a task from your actual workflow
The context you've built with AI assistants is your work. It should travel with you.
Web2MD converts any webpage to clean, AI-ready Markdown — perfect for assembling AI memory imports and context documents. Try it free at web2md.org.