# Export DeepSeek, ChatGPT, Claude, and Gemini Conversations to Markdown (2026)
A surprising thing happened in 2025: AI chat conversations became the new long-form content. People are spending two, three, sometimes ten hours in a single Claude or ChatGPT session — building arguments, debugging code, brainstorming projects, doing deep research with web search. By the end, that conversation has more value per word than half of the articles people bookmark.
And then it just sits there. Locked in a chat history nobody else can read. If you want to share it with a colleague, save it to your Obsidian vault, or feed it into a different AI tool tomorrow morning, your options are bad: copy-paste blocks of text and watch the formatting collapse, take a screenshot, or use the built-in "Share" link that creates a public URL but doesn't give you a file.
This article covers how to actually export AI conversations from the major platforms — DeepSeek, ChatGPT, Claude, Gemini, and Perplexity — into clean, structured Markdown that survives a paste into Obsidian, Notion, NotebookLM, or another AI.
## Why this is harder than it sounds
When you copy text from a chat interface, you're copying the rendered HTML, not the source Markdown. The chat platform renders Markdown, KaTeX math, code blocks with Highlight.js, and tables on the fly — all of that gets lost the moment you select-all-and-copy.
Concretely, here's what breaks:
- Code blocks lose their language tag. Pasting `def foo():` from ChatGPT into Notion produces a plain monospace blob with no syntax highlighting.
- Tables collapse into "long unreadable sentences" — the most common complaint on the Hacker News thread about pasting ChatGPT output.
- LaTeX math is rendered as KaTeX HTML in the page; copying gives you the rendered glyphs as Unicode, not the original `$$...$$` source. The math is gone.
- Conversation structure is lost. There's no `## User` / `## Assistant` separator — just a wall of alternating text.
- Attachments and images drop entirely.
Each platform has its own DOM, so a generic web clipper that "just grabs the article" can't do this — it has to know that the page is a chat interface and parse it accordingly.
## DeepSeek (chat.deepseek.com)
DeepSeek has been growing fast — especially in markets where ChatGPT and Claude have access friction. The flip side: there's no built-in export. The "share" button creates a public URL, which has obvious downsides if your conversation contains anything sensitive.
The DeepSeek DOM is an SPA with hashed CSS class names that change between releases. The reliable extraction strategy is:
- Find all elements with `data-role="user"` or `data-role="assistant"` (when present).
- If those attributes aren't present, fall back to alternating message containers in the chat scroll area, where messages typically alternate human-then-AI.
- Strip UI noise: action buttons, "copy" / "regenerate" / thumbs-up affordances.
- Output `## User` / `## Assistant` headings around each turn.
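The fallback and assembly steps can be sketched in a few lines. This assumes the DOM walk has already produced a list of message texts in page order; the strict human-then-AI alternation is the assumption named above, not a DeepSeek guarantee:

```python
def to_markdown(title, messages):
    """Assemble extracted message texts into a Markdown transcript.

    messages: plain-text turns in page order. With no data-role
    attributes available we assume strict user/assistant alternation.
    """
    lines = [f"# {title}", ""]
    for i, text in enumerate(messages):
        role = "User" if i % 2 == 0 else "Assistant"
        lines += [f"## {role}", "", text.strip(), ""]
    return "\n".join(lines)
```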
This is what Web2MD's AI chat extractor does. You open a DeepSeek conversation page, press Ctrl+M, and you get back a Markdown file like:
````markdown
# How to optimize a Python script for memory

## User

I have a script that loads 50GB of CSVs into pandas and runs out of memory…

## Assistant

There are three classes of optimization to consider…

```python
import pandas as pd
chunks = pd.read_csv("big.csv", chunksize=10000)
…
```
````

Code blocks keep their language tag, math stays as TeX source, and the multi-turn structure is preserved.
## ChatGPT (chatgpt.com)
ChatGPT is the easiest of the bunch because OpenAI uses semantic `data-message-author-role` attributes on each message. A clipper just walks every element with that attribute, reads the role, and dumps the message body.
The catch: ChatGPT's body element is a `.markdown` div with rendered HTML, not raw Markdown. You still need to convert it back. Code blocks come from a `<pre><code class="language-X">` structure that Highlight.js wraps in span trees — those need to be flattened or you end up with a wall of `<span>` tags in your output.
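To see what "flattening the span tree" means in practice, here is a standard-library sketch. The `<pre><code class="language-X">` structure is the one described above; the `hljs-*` class in the usage example is typical Highlight.js output, which this code simply ignores, keeping only the text:

```python
from html.parser import HTMLParser

class CodeBlockFlattener(HTMLParser):
    """Collapse a Highlight.js span tree back into a fenced code block."""
    def __init__(self):
        super().__init__()
        self.lang = ""
        self.in_code = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "code":
            self.in_code = True
            for name, value in attrs:
                if name == "class" and value and "language-" in value:
                    # e.g. class="language-python" -> "python"
                    self.lang = value.split("language-")[1].split()[0]

    def handle_endtag(self, tag):
        if tag == "code":
            self.in_code = False

    def handle_data(self, data):
        # span wrappers contribute nothing; only their text matters
        if self.in_code:
            self.chunks.append(data)

def flatten(html):
    p = CodeBlockFlattener()
    p.feed(html)
    return f"```{p.lang}\n{''.join(p.chunks)}\n```"
```

Feeding it `<pre><code class="language-python"><span class="hljs-keyword">def</span> foo():</code></pre>` yields a clean ` ```python ` fence around `def foo():`.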
Both URL patterns work: `chatgpt.com/c/<id>` and `chatgpt.com/share/<id>`.
## Claude (claude.ai)
Claude's chat interface uses `[data-testid="conversation-turn"]` to mark each turn, and the role is in `[data-testid="human-turn-input"]` vs the implicit "everything else is Claude." The extraction is straightforward.
Where Claude is uniquely tricky: artifacts. When Claude generates a substantial code block or document as an artifact, the artifact lives in a side panel. A naive extractor that walks the conversation messages misses the artifact entirely. A good extractor knows to inline the artifact content in the right place in the conversation flow.
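The inlining step can be sketched independently of Claude's actual DOM. Everything here is hypothetical scaffolding: the `[artifact:ID]` placeholder and the `artifacts` dict stand in for whatever the extractor recovers from the side panel:

```python
def inline_artifacts(turns, artifacts):
    """Splice artifact bodies into the conversation flow.

    turns: list of (role, text) in page order; text may contain a
    hypothetical "[artifact:ID]" marker where side-panel content belongs.
    artifacts: dict mapping ID -> (language, source).
    """
    out = []
    for role, text in turns:
        for aid, (lang, src) in artifacts.items():
            marker = f"[artifact:{aid}]"
            if marker in text:
                # replace the marker with a fenced block at that spot
                text = text.replace(marker, f"```{lang}\n{src}\n```")
        out.append((role, text))
    return out
```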
URL: `claude.ai/chat/<id>` or `claude.ai/project/<id>`.
## Gemini (gemini.google.com)
Gemini wraps each round in `<user-query>` and `<model-response>` custom elements. Inside `<user-query>`, the actual prompt text is in `.query-text`. Inside `<model-response>`, the answer is in `.markdown` or `<message-content>`.
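Because the role is carried by the custom element itself, the turn split is simple. A sketch using only the `<user-query>`/`<model-response>` wrappers named above (the inner class names drift between releases, so this version just collects all text inside each wrapper):

```python
from html.parser import HTMLParser

class GeminiTurns(HTMLParser):
    """Collect (role, text) pairs from Gemini's custom elements."""
    def __init__(self):
        super().__init__()
        self.turns = []
        self.role = None   # role of the wrapper we are inside, if any
        self.buf = []

    def handle_starttag(self, tag, attrs):
        if tag == "user-query":
            self._flush(); self.role = "User"
        elif tag == "model-response":
            self._flush(); self.role = "Assistant"

    def handle_endtag(self, tag):
        if tag in ("user-query", "model-response"):
            self._flush(); self.role = None

    def handle_data(self, data):
        if self.role and data.strip():
            self.buf.append(data.strip())

    def _flush(self):
        if self.role and self.buf:
            self.turns.append((self.role, " ".join(self.buf)))
        self.buf = []

def parse_turns(html):
    p = GeminiTurns()
    p.feed(html)
    return p.turns
```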
Gemini also has a "Share" button that creates a public URL, but the same caveats as DeepSeek apply — if your conversation has anything you don't want indexed, you don't want it shared publicly.
## Perplexity (perplexity.ai)
Perplexity is structurally different from the other four. Each "turn" is a search query followed by a synthesized, citation-backed answer — a question-answer pattern rather than a multi-turn dialogue.
A good extractor for Perplexity:
1. Treats the search query as the User turn.
2. Treats the answer body (the `.prose` or `[data-testid="search-result"]` block) as the Assistant turn.
3. Preserves the citation links — those are the most valuable part of a Perplexity answer.
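Step 3 is the part worth keeping explicit. Assuming the extractor has already scraped the answer text and a list of `(label, url)` citation pairs (that pair structure is an assumption, not Perplexity's actual format), appending a sources section is trivial:

```python
def with_citations(answer, citations):
    """Append scraped citations to the answer as a Markdown sources list.

    citations: hypothetical list of (label, url) pairs taken from the
    answer's citation chips.
    """
    lines = [answer, "", "### Sources"]
    for label, url in citations:
        lines.append(f"{label}. <{url}>")
    return "\n".join(lines)
```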
URL: `perplexity.ai/search/<id>`.
## How to actually do this
Web2MD has a unified AI chat extractor that handles all five platforms above. Open the conversation page, press `Ctrl+M`, and you get clean Markdown with multi-turn structure preserved. It also handles the formatting fidelity issues — code blocks come out clean, KaTeX math gets converted back to TeX source via the `application/x-tex` annotation, and tables survive.
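The `application/x-tex` trick deserves a note: KaTeX's HTML output includes a MathML branch whose `<annotation encoding="application/x-tex">` node still holds the original TeX source, so the math is recoverable without reverse-engineering the rendered glyphs. A standard-library sketch of that recovery (not Web2MD's actual implementation):

```python
from html.parser import HTMLParser

class TexRecovery(HTMLParser):
    """Pull original TeX source out of KaTeX's MathML annotation nodes."""
    def __init__(self):
        super().__init__()
        self.in_tex = False
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "annotation" and ("encoding", "application/x-tex") in attrs:
            self.in_tex = True
            self.sources.append("")

    def handle_endtag(self, tag):
        if tag == "annotation":
            self.in_tex = False

    def handle_data(self, data):
        if self.in_tex:
            self.sources[-1] += data

def recover_tex(html):
    p = TexRecovery()
    p.feed(html)
    return p.sources  # one TeX string per formula found
```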
The extension is on the [Chrome Web Store](https://chromewebstore.google.com/detail/web2md-web-to-markdown/ijmgpkkfgpijifldbjafjiapehppcbcn). Free tier — no signup — handles 3 conversations per day, which is enough to evaluate. Pro at $9/mo removes the limit and adds batch convert via the MCP server.
## Why this matters more than it seems
Two reasons.
**One — knowledge compounding.** Conversations with Claude or ChatGPT are work products. They contain reasoning, decisions, code, and references that are worth keeping. If they're trapped in your chat history, they're not searchable, not shareable, not feedable into your second-brain. The friction of "ugh, I'd have to copy and reformat" is what stops most people from putting this content where it belongs.
**Two — model-switching is the new normal.** Most heavy users now have an active relationship with two or three models. Claude for writing, ChatGPT for code, DeepSeek for translation, Gemini for image work, Perplexity for research. A conversation in one is often a starting point for a follow-up in another. Without a Markdown export, that workflow is friction-by-design.
The export is a small thing, but it's the kind of small thing that compounds over months of use.
---
**Related**:
- [How to feed webpage content to ChatGPT and Claude](/blog/how-to-feed-webpage-content-to-chatgpt-claude)
- [Best web clipper for Obsidian and AI in 2026](/blog/best-web-clipper-obsidian-ai-2026)
- [Markdown for the AI era](/blog/markdown-ai-era-programming-language)