CamoSuite
AI Privacy Analysis · 2026

"Private" AI mode is shallow.
The privacy problem is six layers deep.

ChatGPT's temporary chat, Grok's ghost icon, Perplexity incognito — these are history features, not data privacy features.

Where AI workflows leak data, and what plugs each layer.

01. AI Privacy Concerns, Layered

Every prompt you send to an AI passes through six vectors where data can leak. "Private mode" only touches the least consequential.

L1 · File metadata — EXIF, author tags, GPS, timestamps, embedded usernames, revision history. Leaks by default.
L2 · File & document content — names, IDs, addresses, client identifiers inside PDFs, DOCX, images. Leaks by default.
L3 · Prompt content — the text you type, often the most sensitive part of the request. Leaks by default.
L4 · Provider retention & logs — server-side storage for safety review, debugging, abuse monitoring. Policy-dependent.
L5 · Training-data inclusion — whether your prompt is used to train future models. Tier-dependent.
L6 · Local artifacts — chat history, browser cache, exported files, "memory" features on your own device. Private-mode-dependent.

Notice that layers 1–3 are what you choose to send. Layers 4–5 are the provider's. Layer 6 is local to your machine. Private mode is a Layer 6 control.


02. Where an AI Prompt Goes

Copies of your prompt and file context (or fragments derived from it) live in roughly five distinct places.

From your device to the AI provider and back, the five places a copy can land:

  1. Your device — the origin of L1 · L2 · L3.
  2. The provider's server, which processes the prompt (L4 retention & logs).
  3. Audit / safety logs, retained 30 days to years.
  4. The training pipeline and corpus (L5; tier-dependent — free tiers by default).
  5. The output returned to you, also stored client-side (L6).

03. Data Exposure by Layer

For each layer: what it is, what it actually exposes, and whether private/temporary/incognito mode does anything about it.

L1

File metadata

Most files you upload to an AI carry hidden fields the user never sees: EXIF (GPS, camera serial, timestamp), DOCX/PPTX author and revision history, PDF producer and edit timestamps, image thumbnails embedded inside the original. These survive screenshots, exports, and renames.

Concrete example

An iPhone HEIC of a whiteboard photo includes the GPS coordinates of the office it was taken in, plus the iCloud account associated with the device.

Does private mode help?

No. Private mode is a chat-UI control. It does not inspect or modify uploaded files.

What fixes it

Re-encode the file locally before upload — strip EXIF and metadata, reduce hidden information. CamoConvert does this offline for images, video, audio, and documents.

Severity

High — uniquely identifying, often forensic-grade.
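
CamoConvert's internals aren't shown in this piece. Purely as an illustration of the re-encode-and-drop idea for Office files, here is a minimal Python sketch (the function name and part list are illustrative, not CamoConvert's API; a production tool would also rewrite the package's content-type and relationship references):

```python
import zipfile

# OOXML parts (inside .docx/.pptx/.xlsx packages) that hold author names,
# edit timestamps, and revision history rather than document content.
METADATA_PARTS = ("docProps/core.xml", "docProps/app.xml", "docProps/custom.xml")

def strip_docx_metadata(src_path: str, dst_path: str) -> list[str]:
    """Copy an OOXML package, omitting its metadata parts.

    Returns the list of parts that were dropped.
    """
    dropped = []
    with zipfile.ZipFile(src_path) as src, \
         zipfile.ZipFile(dst_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if item.filename in METADATA_PARTS:
                dropped.append(item.filename)
                continue
            # Content parts are copied through unchanged.
            dst.writestr(item, src.read(item.filename))
    return dropped
```

The same principle applies to images: re-encoding only the pixel data into a fresh file leaves EXIF, GPS, and embedded thumbnails behind.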

L2

File & document content

Uploaded documents often carry confidential information: client names, account numbers, addresses, medical record IDs, internal organization names. The AI model and its host see and log everything you upload.

Concrete example

"Summarize this contract" sends named parties, signatures, addresses, and counsel of record to the provider, even though the analysis doesn't require the true identities.

Does private mode help?

No. The document is transmitted in full regardless of UI state.

What fixes it

Redact PII and identifiers locally before upload. CamoText processes PDFs, DOCX, MD, XLSX entirely on-device, replacing entities with categorized mask tags or full [REDACTED] placeholders.

Severity

High — usually the largest single payload of sensitive data.

L3

Prompt content

The text you actually type. Knowledge workers paste in client emails, draft contracts, internal Slack threads, and other highly sensitive identifying text.

Concrete example

"Help me respond to PERSON at COMPANY about MATTER" — three sensitive entities in one sentence, all transmitted in plaintext.

Does private mode help?

No. The prompt body is the first thing transmitted.

What fixes it

Run the prompt text through a local redactor before pasting. CamoText replaces entities with unique tags (PERSON_a1c0f1, COMPANY_b233ab) so the prompt's structure is preserved and the model can still reason effectively.

Severity

High — prompt content combined with context files and user session data provides a full profile.
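
CamoText's actual pipeline (including entity detection) isn't documented here. Assuming you already have a list of entities to mask, the categorized-tag scheme can be sketched in a few lines of Python (function names are illustrative only; real entity detection, e.g. via NER, is out of scope):

```python
import hashlib
import re

def tag_for(category: str, value: str) -> str:
    """Deterministic short tag (e.g. PERSON_a1c0f1): the same entity always
    maps to the same placeholder, so cross-references survive redaction."""
    return f"{category}_{hashlib.sha256(value.encode()).hexdigest()[:6]}"

def redact(text: str, entities: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Replace each known entity with a categorized tag.

    `entities` maps literal strings to categories (PERSON, COMPANY, ...).
    Returns the redacted text plus the tag -> original key, which stays
    on your machine for de-anonymizing model output later.
    """
    key = {}
    # Replace longer strings first so "Acme Corp GmbH" wins over "Acme Corp".
    for value in sorted(entities, key=len, reverse=True):
        tag = tag_for(entities[value], value)
        key[tag] = value
        text = re.sub(re.escape(value), tag, text)
    return text, key
```

Because the tags are deterministic, a person mentioned three times becomes the same placeholder three times, which is what lets the model keep reasoning about "who said what" without ever seeing a real name.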

L4

Provider retention & logs

Once the prompt arrives at the provider, it is logged for safety review, abuse monitoring, debugging, and (sometimes) future fine-tuning. Retention windows vary by provider and by tier, and they extend automatically if a conversation is flagged.

Typical retention

~30 days baseline, longer if flagged. On API/enterprise plans, zero-retention options exist but often require negotiated (and expensive) agreements.

Does private mode help?

Marginally. Some providers shorten retention for "temporary" chats, but they don't reduce it to zero.

What fixes it

Stricter guarantees come from more expensive tiers and bespoke data processing agreements (DPAs). A simpler mitigation is to make the logged data unhelpful — i.e., redact L1–L3 first so what gets logged contains no identifiers.

Severity

Moderate — depends entirely on provider policy and legal hold.

L5

Training-data inclusion

Whether your prompt feeds the next generation of the model — which means fragments of it can resurface in responses to other users. Free tiers commonly opt users into training; paid, API, and enterprise tiers typically opt out by default.

Concrete example

A free-tier ChatGPT user pastes confidential meeting notes. Unless they actively opt out (Settings → Data Controls), those notes are eligible for training.

Does private mode help?

Partially. Some temporary/private modes are excluded from training, but exclusions vary by provider and are worth re-checking in settings periodically.

What fixes it

Use API/enterprise tiers, opt out of training in account settings, or better, never let identifying content reach the provider in the first place (redact L1–L3).

Severity

Moderate to high — irreversible if the prompt was ever in a training run.

L6

Local artifacts

Everything that ends up on your device after the conversation: chat history visible in the UI, browser cache, exported chat logs, the provider's "memory" feature, voice transcripts, screenshots. Anyone with access to your device (IT, malware, a forensic image) can read this.

Concrete example

A laptop is shared within a company. Yesterday's full conversation is one click away in the chat sidebar.

Does private mode help?

Yes — this is what these modes are designed for. Temporary/private/incognito chats are not added to the visible history, and most are excluded from the provider's memory feature.

What fixes it

Use the provider's private/temporary mode, disable account-level chat history, disable "memory", clear browser data on a schedule, and avoid exporting chats unless necessary.

Severity

Variable — depends on who else can touch the device.


04. What "Private Mode" Actually Toggles

THE MYTH

"I'm in temporary chat / private mode / incognito, so my data is safe."

In practice, these features turn off three small things: (a) visible chat history, (b) inclusion in the provider's memory/personalization, and (c) occasionally, eligibility for training. They do not turn off: file metadata transmission, file content transmission, prompt content transmission, server-side processing, server-side retention for abuse review, or output mirroring back to your device.

Private mode deals in local hygiene, not remote data retention or processing.

ChatGPT temporary chat, Grok ghost mode, and Perplexity incognito are useful, especially on shared devices and for one-off questions you don't want surfacing in your own history. But layering a UI history toggle on top of unredacted prompts does not change what was already transmitted, or what the provider accessed and stored.


05. The Fix Map

A one-screen reference for which layer is fixed by what.

Layer · What leaks · What plugs it
L1 · File metadata · CamoConvert — re-encodes locally, stripping EXIF and document metadata.
L2 · File / document content · CamoText — local PII redaction across PDF/DOCX/MD/XLSX.
L3 · Prompt content · CamoText — paste-redact-paste workflow, or the bundled CLI in scripts and agents.
L4 · Provider retention & logs · Cannot be fully fixed client-side. Mitigate by redacting L1–L3 so logs contain no identifiers; use API/enterprise tiers with shorter retention.
L5 · Training-data inclusion · Account-level training opt-out, API/enterprise tier, or temporary/private mode. Best handled at L1–L3 — redacted prompts are useless as training data anyway.
L6 · Local artifacts · Provider private/temporary mode, disable chat history, disable memory, clear browser data, encrypt the device.

The pattern is consistent: L1–L3 are fixed before sending, with local tools. L4–L5 are fixed by what you choose not to send in the first place. L6 is fixed by device hygiene, which is where private mode lives.


06. A Working Privacy Recipe

For a knowledge worker who wants to use frontier AI on real work without sending real names to a server:

  1. Strip metadata first. Run any file you plan to attach through a local converter like CamoConvert. Same-format re-encoding works fine if you don't need a different output type — the metadata still gets dropped.
  2. Redact the content. Use a local tool like CamoText to mask names, addresses, account numbers, and client identifiers in the document and the prompt body. Keep the redaction key locally if you'll need to de-anonymize the output later.
  3. Choose your tier. Paid / API / enterprise plans usually opt out of training by default. If you stay on a free tier, opt out manually under Data Controls.
  4. Use private/temporary mode for L6 hygiene. Worth doing, especially on shared devices — but understand that you've already done the heavy lifting in steps 1–3.
  5. Review the output before saving. The model's response can echo back identifiers. Treat AI output as untrusted text until you've read it.
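
Step 2's locally kept redaction key is what makes the round trip work. A minimal sketch of the de-anonymization half, assuming a tag-to-original dict like the one a local redactor would emit (the helper name is illustrative):

```python
def restore(output: str, key: dict[str, str]) -> str:
    """Map redaction tags in the model's reply back to the originals.

    `key` is the tag -> original dict produced during redaction. It never
    left your machine, so only you can reverse the tags.
    """
    for tag, original in key.items():
        output = output.replace(tag, original)
    return output
```

The provider only ever sees the tagged version; the restored text exists solely on your device.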

One sentence to take away.

Treat private mode as device hygiene, not network privacy. The actual privacy decisions are made before the prompt leaves your machine — and the tools for that work are local, fast, and don't require trusting a privacy policy.


07. FAQ

Is ChatGPT temporary chat actually private?

It's a Layer 6 control — it hides the conversation from your local chat history and OpenAI states it isn't used for training. Layers 1–5 are unchanged: the prompt and any attachments are still transmitted to OpenAI, processed on their servers, and retained for safety review (currently up to ~30 days, longer if flagged).

Does Grok's ghost icon / private chat actually protect my data?

Same shape as ChatGPT temporary chat. xAI states private chats aren't used for training and they don't appear in your visible history. The prompt itself still travels to xAI servers and is subject to their retention and any legal preservation orders.

Is Perplexity incognito mode private?

Perplexity incognito hides queries from your saved history (Layer 6). The query still routes through Perplexity and the upstream model providers it forwards to, so Layers 3–5 remain in play.

Does Claude have a "private mode"?

Not as a distinct toggle on consumer accounts. Anthropic states consumer chats aren't used for training by default, but content is still transmitted to and processed by Anthropic, with retention windows that extend if a conversation is flagged.

Are paid AI plans more private than free plans?

Mostly at Layer 5 (training): paid, API, and enterprise tiers commonly opt out of training by default. Layers 3 and 4 are unchanged — the provider still receives, processes, and logs the prompt.

Is using the API more private than the chat UI?

For Layer 6 yes (no UI history), and for Layer 5 typically yes (no training on API traffic by default for major providers). Layers 1–4 are unchanged: the prompt is still transmitted, processed, and retained for abuse monitoring under the provider's policy.

What actually makes a prompt private?

Defense in depth applied before the prompt leaves your device. Strip file metadata locally, redact PII from documents and prompt text locally, then send the cleaned prompt to whatever AI you prefer. Tools like CamoText and CamoConvert do this entirely on-device, which is why provider policy stops mattering: the provider never receives the sensitive parts.

What about local LLMs — do they fix all six layers?

A locally hosted model collapses Layers 4 and 5 (no provider) and reduces Layer 6 to local file hygiene. It does not, by itself, fix Layers 1–3: a local model still ingests whatever metadata and unredacted content you feed it, and any logs it writes live on disk. The recipe (strip → redact → send) still applies.