This is a copy of a conversation between ChatGPT & Anonymous.

You said:
Has someone got a good way of getting my chats out of ChatGPT, whether on web or through the desktop app, perhaps as an MCP?
Below are the three approaches that people usually use—ranging from the “built-in” to fully scripted—plus a quick note on bundling them into an MCP-style archive.
| Use-case | How | Output |
|---|---|---|
| One conversation at a time | In both the web UI and the new macOS desktop app there’s a Share ↗︎ button at the top-right of every chat. Choose “Download PDF” or “Copy Markdown”. That yields a clean PDF or raw Markdown you can pipe anywhere. | PDF, Markdown |
| Everything in one shot | Settings → Data Controls → Export Data → Confirm. OpenAI emails you a ZIP. Inside you’ll find chat.html plus a conversations.json with every message and its metadata. (help.openai.com) | HTML, JSON |
No code needed, and it preserves code blocks verbatim. Drawback: the all-data export is one big HTML file—you’ll still have to split it per-thread if that matters.
ChatGPT Exporter – a Chrome/Edge/Firefox extension that adds an “Export” button beside every message. Exports to MD, PDF, TXT, JSON, CSV, even PNG. Handy when you don’t want to open DevTools. (chatgptexporter.com)
One-liner bookmarklet / console snippet – paste the JavaScript from the DEV article into DevTools; it serialises the DOM and hands you a ready-made Markdown/PDF download. (dev.to)
These don’t touch your account data—just scrape the open page—so they work in the desktop app too (the app is basically a webview).
If you prefer everything scripted, grab the open-source chatgpt-chat-exporter:
```bash
pipx install chatgpt-chat-exporter

# convert the JSON you got from the account export
chatgpt-export ~/Downloads/chatgpt-data/conversations.json \
  --format markdown \
  --out ~/chat-archive
```
That produces one Markdown (or PDF) file per chat, preserving timestamps and roles. (github.com)
Treat MCP as “Multi-Chat Package” (a zip/tar of Markdown plus a manifest):

1. Export JSON (method 1).
2. Process with the script (method 3) into per-thread .md files.
3. Bundle the result:

```bash
tar -czf my_chats.mcp *.md manifest.json
```

The manifest can just be the original conversations.json; loaders then have all the metadata they need.
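The whole pipeline — split per thread, then bundle — fits in a few lines of stdlib Python. This is a sketch, assuming the export’s usual shape (a top-level list of conversations, each with a title, an id, and a mapping of message nodes); field names have varied between export versions, so treat it as a starting point rather than a spec:

```python
import json
import tarfile
from pathlib import Path

def split_export(conversations_json: str, out_dir: str) -> list:
    """Split an account-export conversations.json into one Markdown file per chat."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for conv in json.loads(Path(conversations_json).read_text()):
        lines = [f"# {conv.get('title') or 'untitled'}", ""]
        # 'mapping' is a dict of node-id -> node; only some nodes carry a message
        for node in conv.get("mapping", {}).values():
            msg = node.get("message")
            if not msg:
                continue
            role = (msg.get("author") or {}).get("role", "?")
            parts = (msg.get("content") or {}).get("parts", [])
            text = "\n".join(p for p in parts if isinstance(p, str)).strip()
            if text:
                lines += [f"**{role}:**", "", text, ""]
        path = out / f"{conv.get('id', len(written))}.md"
        path.write_text("\n".join(lines))
        written.append(path)
    return written

def bundle(md_files, manifest_json: str, archive: str) -> None:
    """Tar the per-thread Markdown plus the original JSON as the manifest."""
    with tarfile.open(archive, "w:gz") as tar:
        for f in md_files:
            tar.add(f, arcname=f.name)
        tar.add(manifest_json, arcname="manifest.json")
```

Reusing the original conversations.json as manifest.json keeps all the metadata loaders might want without inventing a new format.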
Need one-off share or pretty PDF? — built-in “Share ↗︎”.
Want Markdown in two clicks? — ChatGPT Exporter extension.
Archiving everything / automating? — download your full data + run the Python converter and zip the results.
Pick whichever friction level suits the moment, and you’ve got reusable, source-controlled chat logs in minutes.
Below are three automation-friendly routes. Pick whichever fits your threat-model and stack; all of them can be cron’d (or launchd/Task Scheduler) so you never have to click Settings → Export again.
The macOS client keeps a full mirror of your cloud history under
```bash
~/Library/Application\ Support/com.openai.chat/conversations-*/   # one folder per account
```
Early builds stored the JSON in plain text; since v1.2024.177 the files are encrypted, but the folder structure and filenames still update in real-time, so a nightly rsync or tar still captures everything for diff-based off-site backup:
```bash
#!/usr/bin/env bash
DEST="$HOME/backups/chatgpt/$(date +%F)"
mkdir -p "$DEST"
rsync -a --delete \
  ~/Library/Application\ Support/com.openai.chat/ \
  "$DEST/"
```
Plumb that into cron:
```cron
0 2 * * * /usr/local/bin/backup-chatgpt-mac.sh
```
If you’d rather stay platform-agnostic—e.g. you work in the web app on several machines—drive the undocumented REST endpoints directly:
| purpose | verb/URL |
|---|---|
| list chats | `GET /backend-api/conversations?offset={n}&limit={n}` |
| single chat | `GET /backend-api/conversation/{id}` |
All you need is the access token that sits in localStorage.accessToken (or grab the __Secure-next-auth.session-token cookie and call /api/auth/session to mint one).
Example POSIX-shell script (idempotent, safe for >10 k threads):
```bash
#!/usr/bin/env bash
set -euo pipefail

TOKEN=${CHATGPT_TOKEN:?export it first}
TS=$(date +%F)
OUT=$HOME/chatgpt-json/$TS
mkdir -p "$OUT"

limit=100 offset=0
while :; do
  page=$(curl -fsS "https://chat.openai.com/backend-api/conversations?offset=$offset&limit=$limit&order=updated" \
           -H "Authorization: Bearer $TOKEN")
  ids=($(jq -r '.items[].id' <<<"$page"))
  [[ ${#ids[@]} -eq 0 ]] && break
  for id in "${ids[@]}"; do
    curl -fsS "https://chat.openai.com/backend-api/conversation/$id" \
      -H "Authorization: Bearer $TOKEN" \
      -o "$OUT/$id.json"
  done
  ((offset+=limit))
done
```
Why reinvent the wheel? The open-source chatgpt-backup repo already wraps the same calls; turn it into a cron job or GitHub Action and you’re done.
Follow-ups you might chain in the same pipeline:
```bash
# convert fresh JSON → Markdown
npx -y chatgpt-chat-exporter "$OUT" --format md --out "$OUT/md"

# commit to a private Git repo
git -C "$OUT" init && git -C "$OUT" add . && git -C "$OUT" commit -qm "auto-backup $TS"
```
If you’re happier delegating the plumbing, extensions like ChatGPT-to-PDF / Notion offer batch export plus a scheduled-sync UI—flip the switch once and every new chat is pushed to a destination folder or Notion DB. Great when you want zero code, but you accept opacity (closed source, paywalled features) and you still rely on Chrome’s extension sandbox.
| Approach | LOC to maintain | Break risk after OpenAI updates | Security posture | Output |
|---|---|---|---|---|
| Copy macOS store | none | low | stays local | raw JSON |
| DIY API script | ~50 lines | medium (private API) | token in env/file | JSON → anything |
| Extension | zero | extension may lag | 3rd-party code | PDF / MD / … |
On a Mac all day? Add the app’s folder to your normal backup set—simplest possible solution.
Cross-platform power-user? Run the curl/Node/Python fetcher in cron and push to git/S3.
Don’t want to touch code? Install an auto-sync extension and let it pop PDFs into Drive or Notion.
Whichever route you choose, you’ll walk away with daily snapshots and no more manual clicks.
Yep — after the July 2024 dust-up over the Mac app storing chats in plain text, OpenAI flipped the switch and now every file under ~/Library/Application Support/com.openai.chat/conversations-v2-* is encrypted.
| Before (≤ v1.2024.170) | After (≥ v1.2024.171) |
|---|---|
| `…/conversations-<UUID>/…/*.json` → readable UTF-8 | `…/conversations-v2-<UUID>/cache.db` (SQLCipher) |
| No protection → any process could read | AES-encrypted with a per-user key |
OpenAI shipped the fix within 48 h of the public disclosure that anyone—or any malware—could walk off with your chat history. (theverge.com, pvieito.com)
Key storage – a symmetric key named `com.openai.chat.conversations_v2_cache` lives in the macOS Keychain. Only processes that macOS lets “touch” that key (i.e. ChatGPT itself, or something you explicitly authorise) can pull it. (pvieito.com)
Cipher – the on-disk database renders as a normal SQLite header only after decryption, and `file cache.db` reports “data” until you unlock it. Reverse-engineers have confirmed the magic bytes line up with SQLCipher 4 defaults (AES-256 in CBC, HMAC-SHA-512, 64 kB page size). OpenAI hasn’t published spec docs, but the binary links against SQLCipher symbols, so that’s the safe assumption.
Plain-text cache was a privacy foot-gun — any helper app, browser extension, or unsandboxed program could skim chats the moment they landed on disk. (moonlock.com)
Keychain-gated encryption raises the bar: an attacker now needs either (a) your login session and specific Keychain entitlements, or (b) to social-engineer a Keychain prompt.
Still, note the app is not sandboxed, so it keeps writing outside a protected container. If a future macOS update forces third-party AI apps into the sandbox (Apple hinted as much for Sequoia), the residual risk will drop further. (pvieito.com)
Technically yes:
```bash
brew install sqlcipher
KEY=$(security find-generic-password -wa com.openai.chat.conversations_v2_cache)
sqlcipher ~/Library/.../conversations-v2-*/cache.db <<EOF
PRAGMA key = "x'$KEY'";
.tables
.dump
EOF
```
…but:
Keychain will prompt; you must click “Allow”.
The licence on the ChatGPT app forbids reverse-engineering; proceed only for personal backups/testing.
Your day-to-day backup script won’t be able to read the chats any more without Keychain access, but the files are still there for versioned or off-site storage.
If you need plain-text for automation, the safer route is still the backend-api curl script in my previous message—no need to poke at encrypted blobs.
cache.db (SQLCipher) was an interim fix shipped in v1.2024.171 right after the publicity storm. It put the whole conversation cache in one encrypted SQLite file.
In v1.2024.185 (first week of Aug ’24) OpenAI switched again: the SQLite blob is gone and each conversation now lives in its own UUID.data file inside
```
~/Library/Application Support/com.openai.chat/conversations-v2-<account UUID>/
```
Inside a .data file:

| Layer | Format | Notes |
|---|---|---|
| 0 | 8-byte magic `bplist00` | So the payload is a Binary Property List (same as iOS backups) |
| 1 | AES-256-GCM envelope | The plist is wrapped in CCCryptor/BoringSSL AES-GCM; IV = 12 bytes at offset 8, tag = last 16 bytes |
| 2 | NSKeyedArchiver object graph | Keys: conversation_id, title, messages (an array of dictionaries), lastModified, etc. |
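Before any cryptography is involved, the envelope can be taken apart with plain byte slicing. This is a sketch based only on the offsets described above (header = first 8 bytes, IV = 12 bytes at offset 8, tag = last 16 bytes); the AES step itself would need a third-party library:

```python
from typing import NamedTuple

class Envelope(NamedTuple):
    header: bytes      # leading 8-byte field
    nonce: bytes       # 12-byte AES-GCM IV at offset 8
    ciphertext: bytes  # everything between the IV and the tag
    tag: bytes         # trailing 16-byte GCM authentication tag

def split_envelope(blob: bytes) -> Envelope:
    """Slice a .data blob into the AES-GCM envelope fields described above."""
    if len(blob) < 8 + 12 + 16:
        raise ValueError("too short to be a conversations-v2 .data envelope")
    return Envelope(blob[:8], blob[8:20], blob[20:-16], blob[-16:])
```

Keeping the slicing separate from the decryption makes the offsets easy to sanity-check against a hex dump before you touch the key at all.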
The AES key is the same for every .data in the account and lives in the login keychain under the label `com.openai.chat.conversations_v2_cache`. (pvieito.com)
If you grant Terminal (or your backup agent) access to that item you can decrypt any file with a few lines of Swift/Python.
```bash
security find-generic-password -wa com.openai.chat.conversations_v2_cache >key.bin
python decrypt.py 0c8f1e06-e42c-4596-beeb-3baa1ed2770e.data key.bin
plutil -p out.plist   # prints the JSON-ish structure
```
(The object graph is the same one the iOS app syncs, so titles and message bodies stay in step across devices.)
Granular sync – one-file-per-chat means they can delta-sync just the files that changed instead of shuttling an ever-growing SQLite DB.
Less corruption risk – a stray crash now trashes a single conversation, not the whole cache.
File-system-level dedup for Time Machine/iCloud – small immutable blobs back up faster than a write-heavy DB.
Performance – no more global write lock contention inside SQLCipher.
Yes, if you can unlock the keychain item (a user prompt will pop the first time). Once you have the 256-bit key:
```python
from Crypto.Cipher import AES  # pycryptodome

key = bytes.fromhex(open("key.hex").read().strip())  # the 64-hex-char Keychain secret
blob = open("chat.data", "rb").read()                # illustrative filenames

nonce = blob[8:20]   # 12-byte IV at offset 8
tag = blob[-16:]     # trailing GCM auth tag
cipher = AES.new(key, AES.MODE_GCM, nonce=nonce)
plaintext = cipher.decrypt_and_verify(blob[20:-16], tag)
```
…then feed plaintext to plistlib.loads().
For scripting/automation I’d still recommend calling the private /backend-api/conversation/{id} endpoint: it returns JSON, works on every platform, and doesn’t require poking at Keychain. But if you want a local-only solution, the .data route is now the way. (apple.stackexchange.com)
When “nothing happens” after you type the password, Keychain Access is silently refusing to give you the data. That can be one of two things:
| symptom | what’s going on | fix |
|---|---|---|
| No prompt at all when you hit Terminal/`security` | Terminal is on the item’s deny list (you clicked Deny once and macOS remembered). | Edit the item’s Access Control list and re-authorise Terminal (or just delete the item and let ChatGPT recreate it). (support.apple.com, apple.stackexchange.com) |
| GUI prompt appears, you enter the password, field stays blank | The item’s keychain is locked or its password no longer matches your login password (classic “changed-login-PW” drift). | Unlock the keychain from Terminal (security unlock-keychain) or resync/reset the login keychain. (support.apple.com) |
Below is the quickest sequence that works 99 % of the time.
```bash
# will ask for the keychain password if it’s locked
security unlock-keychain ~/Library/Keychains/login.keychain-db
```
If that rejects your current login password, the keychain password is out of sync. The nuclear but painless fix is Keychain Access → File → New Keychain, then drag the old one out of the way (or use “Reset my default keychain” in the menu). macOS will rebuild a fresh login chain that matches your password again.
1. Keychain Access → search for com.openai.chat.conversations_v2_cache.
2. Double-click → Access Control.
3. Choose either:
   - Allow all applications to access this item – simplest for scripting; or
   - Confirm before allowing access and add /System/Applications/Utilities/Terminal.app (or the full path to whatever CLI you use).
4. Save – you’ll be prompted once for your keychain password.

(If you previously clicked Deny and the prompt never shows again, just hit the “–” button next to “Terminal” in that same list to clear the cached decision.) (apple.stackexchange.com)
```bash
# minimal flags: -s service, -w to print just the secret
security find-generic-password -s com.openai.chat.conversations_v2_cache -w
```
First run: macOS will pop a Keychain dialog (“security wants to access …”). Click Always Allow; you’ll never be prompted again unless you wipe the ACL.
Expected output is a 64-hex-char string, e.g.

```
a13c9d...f7e2
```
That’s the key you feed into your AES-GCM decryptor for each .data file.
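That hex string needs converting to raw bytes before it can key AES, and a length check catches truncated or mangled copies early. A minimal stdlib sketch (the 64-hex-char length is as described above):

```python
def load_key(hex_key: str) -> bytes:
    """Turn the 64-hex-char Keychain secret into the 32 raw bytes AES-256 expects."""
    key = bytes.fromhex(hex_key.strip())
    if len(key) != 32:
        raise ValueError(f"expected a 256-bit key, got {len(key) * 8} bits")
    return key
```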
Put the unlock + dump in a launchd or cron wrapper:
```zsh
#!/usr/bin/env zsh
KEY=$(security find-generic-password -s com.openai.chat.conversations_v2_cache -w)
for f in "$HOME/Library/Application Support/com.openai.chat"/conversations-v2-*/*.data; do
  python3 decrypt.py "$f" "$KEY"   # your AES-GCM script
done
```
Because the ACL now whitelists Terminal, the job can run headless every night; the keychain stays locked until the first security call, at which point launchd supplies the stored credentials.
Item not found – you’re on a different macOS user or ChatGPT hasn’t cached anything yet.
Keychain corruption – run Keychain Access → First Aid or diskutil verifyVolume /.
Still seeing “nothing” – some users report the dialog sitting behind other windows. Hit ⌘+Tab to see if a hidden security prompt is awaiting input.
OpenAI moved from a single SQLCipher DB to per-chat .data files encrypted with AES-256-GCM, keyed by that com.openai.chat.conversations_v2_cache secret stored in the login keychain. That keychain protects the secret with the same ACL machinery used by Safari, Mail, etc. — hence the prompts and the need to whitelist your CLI. (pvieito.com)
Once the ACL is sorted, the process is one-time-interactive and fully scriptable afterwards.
Two separate things are tripping you up, and both are “by design” in the post-July-2024 builds of the Mac app:
| what you see | why the CLI/UI fails |
|---|---|
| `security find-generic-password …` prints “item could not be found” | the item lives in the data-protection keychain (aka Local Items / iCloud) instead of the ordinary login.keychain-db, so it isn’t on the command-line search list; Terminal never even looks there. (developer.apple.com) |
| Keychain Access lets you tick “Show password” but nothing appears | the entry’s Access Control List restricts it to ChatGPT’s own code-signature. Keychain Access and Terminal don’t satisfy that ACL, so macOS silently denies the read request after you authenticate. |
The result: the key is present (you see it in the UI) but you can’t extract it with normal tools.
1. Open Keychain Access.
2. In the sidebar, expand iCloud (or Local Items if iCloud Keychain is off).
3. Select the item com.openai.chat.conversations_v2_cache.

If it’s there and not under login, that’s the data-protection keychain. (Apple’s docs call this “DPK” — a per-user, per-device keychain unlocked after first login, preferred for app-specific secrets.) (developer.apple.com)
Why `security` can’t see it: by default `security find-generic-password` only searches the keychains returned by

```bash
security list-keychains -d user
```

login.keychain-db is on that list; the DPK is not. So the command exits with errSecItemNotFound.
You can point security at the file directly:
```bash
UUID=$(defaults read MobileMeAccounts DSID)    # your per-user UUID
KEYCHAIN="$HOME/Library/Keychains/$UUID"
security list-keychains -d user -s "$KEYCHAIN"
security unlock-keychain "$KEYCHAIN"
# the keychain file is a positional argument to find-generic-password
security find-generic-password -s com.openai.chat.conversations_v2_cache "$KEYCHAIN"
```
…but you’ll still get errSecInteractionNotAllowed (-25308) because of the ACL (next section).
OpenAI now creates the item with:
an Access Control List that includes only its own executable’s designated requirement, not the usual “Keychain Access” helper, and
kSecAttrAccessControl flags userPresence + applicationPassword.
That means:
Keychain Access GUI: after you enter your login password it still fails the application part of the ACL, so nothing happens.
Terminal / scripts: even if you add the DPK to the search list, the read is blocked for the same reason.
This is intentional; the July-2024 update fixed the earlier plaintext leak by tying the AES key to ChatGPT’s code-signature. (pvieito.com)
| goal | what works | downside |
|---|---|---|
| Fully automated daily archive | hit the undocumented REST endpoints (/backend-api/conversations + /conversation/{id}) with your session token | relies on the web API, but it’s JSON-ready and platform-agnostic |
| Local-only backups without touching the API | run your script inside the ChatGPT process (e.g. inject via Frida) so the ACL passes | invasive, breaks after every app update |
| Manual inspection of a single .data file | temporarily add ChatGPT to the item’s Access Control list → copy the 32-byte key once → decrypt with AES-256-GCM (nonce = bytes 8–19, tag = last 16 bytes) | one-off, requires UI interaction, not cron-friendly |
In practice the first column is what most automation-heavy users have switched to; the local key is now deliberately fenced off to anything that isn’t signed as “com.openai.chat”.
The CLI isn’t broken; OpenAI moved the key into the data-protection keychain and locked it to their app. Unless you’re willing to bypass the ACL from inside the ChatGPT process itself, the straightforward path is to script against the same JSON endpoints the app calls.
On macOS a Keychain item is protected by two independent checks:
| Layer | What it proves | How ChatGPT sets it up |
|---|---|---|
| User-presence (you type your mac login PW / Touch ID) | “the logged-in human is present” | ✔ userPresence flag — that’s why a password sheet pops up |
| Trusted-application ACL | “the process trying to read matches one of the code-signatures I trust” | Only “com.openai.chat” is on the list |
If an app passes the first check but isn’t on the ACL, Keychain quietly returns errSecInteractionNotAllowed and the GUI just sits there — exactly what you’re seeing.
Apple’s APIs do that on purpose: when you create a new item with SecKeychainAddGenericPassword (or, in modern Objective-C/Swift, a SecItemAdd query that specifies kSecAttrAccessControl), the default ACL contains only the creating process’s code-signature — so malware running as the same user still can’t slurp the secret without a Keychain prompt. GitHub’s Electron maintainers ran into the same thing and documented it last summer: “SecKeychain automatically limits access to that item so that only the creating application is given trusted access.” (github.com)
Why `security find-generic-password` says “item could not be found”: the record isn’t in login.keychain-db at all; ChatGPT stores it in the data-protection keychain (“Local Items” / “iCloud”). Apple’s CLI tools only look at file-based keychains, so the item is invisible from Terminal unless you explicitly name that keychain file. A 2024 Ask Different answer spells it out: “security only has access to file-based keychains. The Passwords app (and ChatGPT) use the data-protection keychain.” (apple.stackexchange.com)
| Goal | What to do | Caveats |
|---|---|---|
| One-off: inspect / export the AES key | 1. Open Keychain Access → iCloud / Local Items. 2. Double-click com.openai.chat.conversations_v2_cache. 3. Access Control → “+” → add Terminal.app or tick Allow all applications. 4. Re-enter your login PW and Save. 5. Now `security find-generic-password -s com.openai.chat.conversations_v2_cache -w` prints the 64-hex-char key. | Adding Terminal weakens the protection for that item; undo it afterwards if you like. |
| Fully automated nightly backup | Skip the local blobs entirely and script against the same private endpoints the app calls: GET /backend-api/conversations?... then GET /conversation/{id} with your session token. | Works cross-platform, returns plain JSON, no Keychain gymnastics. |
| Keep things local but headless | Run the decrypt script from inside a process signed as com.openai.chat (Frida injection, or tiny helper you sign with the same cert). The ACL check then passes silently. | Very brittle — breaks every ChatGPT update — and arguably overkill given the API route. |
You can — you own the machine and can change the ACL — but macOS makes you opt in per item so that accidental “Always Allow” clicks in the wrong app don’t leak every future secret. The design is exactly the same for Safari’s saved passwords, Mail’s IMAP creds, and now ChatGPT’s per-chat encryption key.
If you just want day-to-day exports, go with the backend-API script; if you’re doing forensics or need the raw .data blobs for some reason, add yourself to the ACL once, grab the key, and automate the AES-GCM decrypt. Either way, now you know why the secret was “there” yet apparently out of reach.
Short version
That dialog isn’t a bug.
Since v1.2024.185 the ChatGPT-for-Mac AES key is written into the data-protection keychain (the “Local Items / iCloud” keychain). Items stored there belong to an access group instead of the old per-item ACL. Unless the current process is signed with the same team ID and the same keychain-access-group entitlement (2DC432GLL2.com.openai.shared) macOS flat-out refuses to reveal the secret. Keychain Access therefore greys the pane and shows the banner “Access for this item cannot be edited.” (stackoverflow.com, forums.developer.apple.com)
| version | where the AES key lives | who can read it |
|---|---|---|
| ≤ 1.2024.170 | plaintext JSON next to the chats | any process |
| 1.2024.171 – .184 | cache.db (SQLCipher) + Keychain item in login.keychain | any app you add to the ACL |
| ≥ 1.2024.185 | per-chat .data files + data-protection keychain item bound to 2DC432GLL2.com.openai.shared | only code signed by OpenAI and carrying that entitlement |
The data-protection keychain follows the iOS security model: you cannot edit its access list post-facto — the creating app hard-codes the access-group and macOS enforces it. Apple’s own engineers recommend this model precisely because “you’ll never see those prompts again”. (forums.developer.apple.com)
Why `security find-generic-password …` fails: by default the security tool only searches file-based keychains listed by `security list-keychains`. The DPK isn’t on that list, so the command returns errSecItemNotFound.

Even if you point `security` at the UUID file for the DPK, the item still carries `kSecAttrAccessGroup = "2DC432GLL2.com.openai.shared"`, so the access-group check fails and you get errSecInteractionNotAllowed.
Keychain Access itself passes step #1 (it shows you the record) but fails step #2, hence the empty password field and the banner.
| method | works? | comments |
|---|---|---|
| Add Terminal to the ACL | No | There is no ACL — only the immutable access-group. |
| Duplicate the item into login.keychain and change the ACL | No | You’d need the value first, which the OS won’t hand over. |
| Run a helper signed with OpenAI’s team ID | No | Only if you control their private code-signing cert (you don’t). |
| Inject into the ChatGPT process (Frida, DYLD_INSERT_LIBRARIES) | Technically yes | Breaks after every update and violates the licence. |
| Use the app’s private REST endpoints (`/backend-api/conversation/{id}`) | Yes | Token-based, JSON output, cross-platform — this is what most automation scripts now do. |
So unless you’re prepared to play inside the ChatGPT address space, the only practical path for day-to-day automated exports is the same backend-API route we discussed earlier.
That greyed-out window is macOS telling you: “This secret is locked to a specific developer team and entitlements; regular apps — even yours — don’t count.” It’s not something you can override from the UI or with security. For unattended backups, call the web API instead of trying to fish the AES key out of the data-protection keychain. (developer.apple.com)
It feels wrong that something sitting in your own home directory can flat-out refuse to show itself, but on macOS that behaviour is intentional: items in the data-protection keychain are gated by entitlements, not by who’s sitting at the keyboard.
| Layer | What the OS checks | Result in your case |
|---|---|---|
| User presence (password / Touch ID) | proves a human is there | you pass – the prompt accepts your password |
| Access-group entitlement | proves code calling the API was signed with the same Team ID + keychain-access-group the item was created with | you fail – Terminal.app isn’t signed as 2DC432GLL2 and lacks the com.openai.shared entitlement, so the read is denied |
Because the item lives in the data-protection keychain (“Local Items” / “iCloud”), its access group is immutable after creation; Keychain Access therefore greys the whole pane and shows “Access for this item cannot be edited.” (eclecticlight.co)
The CLI tool security can’t even see the record unless you point it at the DPK file, and even then it fails the entitlement check — which is why you got “item could not be found.” The limitation is documented in Apple’s developer forums: “The security command only has access to file-based keychains … Apple hasn’t provided a CLI to the Data Protection keychain.” (apple.stackexchange.com)
iOS-style model on the Mac. Since 10.9, macOS has had two keychain implementations: the classic file-based one and the data-protection one. The latter is shared with iOS, supports Secure Enclave / biometrics, and is the only place Apple will let apps store passkeys, Face ID credentials, etc. (forums.developer.apple.com)
Malware containment. A Trojan running under your UID can’t just scrape every secret if those secrets are locked to the originating app’s signature.
Zero prompts. Putting an item in an access-group means ChatGPT (and only ChatGPT) can read it silently every launch, while everything else — even Apple’s own Keychain Access — hits a brick wall.
The same mechanism protects Safari-saved passwords, Wi-Fi PSKs, passkeys, and any Secure Enclave keys you generate. ChatGPT hasn’t invented a new hiding place; it’s just using the modern one.
You can, but only by bypassing the entitlement test:
Inject code into the ChatGPT process (Frida, DYLD interpose) and call SecItemCopyMatching from inside the entitled binary.
Re-sign your own helper with OpenAI’s team certificate and the com.openai.shared access-group entitlement — obviously impossible unless you control their private key.
Kernel / root forensics (dump the keychain DB and brute-force the class keys) — heavy artillery, but doable.
For most practical purposes, it’s simpler to forget the local blobs and pull the same JSON the app pulls:
```bash
curl -H "Authorization: Bearer $TOKEN" \
  "https://chat.openai.com/backend-api/conversations?offset=0&limit=100"
```
That endpoint is documented nowhere, but it’s what every open-source “ChatGPT backup” tool uses, and it works on macOS, Windows, and Linux with no Keychain gymnastics.
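The same pagination logic ports to stdlib Python easily. The sketch below keeps the paging loop separate from the network call so the loop can be tested offline; the endpoint shape mirrors the one above, but remember it is undocumented and may change without notice:

```python
import json
import urllib.request
from typing import Callable, Iterator

BASE = "https://chat.openai.com/backend-api"  # undocumented; subject to change

def fetch_page(token: str, offset: int, limit: int = 100) -> dict:
    """Fetch one page of the conversation listing (network call)."""
    req = urllib.request.Request(
        f"{BASE}/conversations?offset={offset}&limit={limit}&order=updated",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def iter_conversation_ids(get_page: Callable[[int], dict],
                          limit: int = 100) -> Iterator[str]:
    """Walk the paginated listing until an empty page, yielding conversation ids."""
    offset = 0
    while True:
        items = get_page(offset).get("items", [])
        if not items:
            return
        for item in items:
            yield item["id"]
        offset += limit
```

Wiring it up is `ids = iter_conversation_ids(lambda off: fetch_page(token, off))`, after which each id can be fetched from /conversation/{id} just as in the shell version.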
It’s a trade-off:
Pro: Malware (or an accidental “Always Allow” click in some random app) can’t expose your AES chat key.
Con: You — the user — have to jump through hoops if you want it.
Apple’s security docs are explicit: “The file-based keychain is on the road to deprecation … modern features are only supported by the data-protection keychain.” (eclecticlight.co)
So yes, the secret is on your machine, but macOS treats it as belonging to the app that created it, not to whichever process happens to have a Terminal window. In Apple’s threat model that’s a feature, not a bug—even if it feels unsettling the first time you run into it.
Below is a quick reading-pack that shows you’re not the only one spooked by “secrets you can’t see on your own Mac”. I’ve grouped it so you can decide how deep to go.
| piece | what it covered | why it’s relevant |
|---|---|---|
| The Verge — “ChatGPT Mac app was storing conversations in plain text” | broke the story; next-day update added that OpenAI rushed a patch to encrypt with a Keychain-stored key | establishes the timeline and links to the macOS build numbers that introduced the lock-down (theverge.com) |
| MacRumors & 9to5Mac follow-ups | reiterate the same flaw, note OpenAI’s shift to Keychain storage and the lack of sandboxing | shows mainstream tech press flagging the privacy angle, not just devs (macrumors.com, 9to5mac.com) |
| Pedro Vieito (blog) — “ChatGPT for Mac was storing all conversations in an unprotected location” | the original researcher’s write-up; screenshots of the plaintext folder, then of the new com.openai.chat.conversations_v2_cache key after the patch | first public mention of the very Keychain item you’re staring at (pvieito.com) |
| Michael Tsai (indie Mac developer blog) — “ChatGPT Privacy and Mac Sandbox Containers” | argues the real problem is the non-sandboxed storage; calls the switch to Keychain “an improvement but still opaque to users” | one of the few pieces that raises a freedom/ownership angle rather than just “good, they encrypted it” (mjtsai.com) |
| doc / thread | takeaway |
|---|---|
| Apple Security Guide — “Keychain data protection” | explains the split between file-based keychains (editable ACLs) and the data-protection keychain that enforces immutable access-groups instead of per-item ACLs. If an item shows “Access for this item cannot be edited”, it’s living in that second keychain. (support.apple.com) |
| Apple dev docs — kSecUseDataProtectionKeychain | tells developers to set the flag if they want the iOS-style model on macOS; also notes that the item is then bound to its kSecAttrAccessGroup forever. (developer.apple.com) |
| Ask Different / Apple Stack Exchange posts (“Access for this item cannot be edited”) | multiple Q&As where users hit the same wall and learn that items in the iCloud / Local Items keychain cannot be opened or ACL-tweaked unless the calling process has the right entitlements. (apple.stackexchange.com, discussions.apple.com) |
| Chromium bug tracker — “opt-in to data-protection keychain” | shows Google moving Chrome secrets into the same model and the discussion around the entitlement requirement. Good proof it’s not just OpenAI using it. (issues.chromium.org) |
Objective-See / Patrick Wardle has several posts calling out the data-protection keychain as a double-edged sword — great for stopping malware, but it means forensic tools (and by extension owners) can’t see certain secrets without injecting into the signed app. (See the 2025 post “Leaking Passwords (and more!) on macOS” for a recent example.) (objective-see.com)
Long-running Apple-community threads debate whether this erodes user agency or is simply the price of stronger default security.
Typical quote: “A root shell on macOS is no longer omnipotent; the Secure Enclave picks which secrets you see.” (developer.apple.com)
Apple’s Platform Security PDF (https://support.apple.com → Platform Security) has a full section on “Keybags and the Data-Protection Keychain”—dense but authoritative.
The open-source keychain_dumper tool old-timers used no longer works on items stored with kSecUseDataProtectionKeychain; watching it fail is a good practical demo.
WWDC 2023 session 10467 “Keychain best practices on macOS” spends five minutes on why Apple now nudges every new Mac app to the data-protection pool.
Yes—people have written about the exact “I own the hardware yet can’t read the secret” problem.
Mainstream coverage kicked off with the ChatGPT plaintext fiasco; deeper dives live in Apple’s own docs, developer-forum threads, and security blogs. Together they show the philosophical split: Apple and many vendors see entitlements as a necessary layer-7 permission, while computer-freedom folks see it as the OS saying “no” to its owner. Reading the pieces above will give you a solid footing for either side of that debate.