Private AI Options

April 22, 2025

LLMs and AI services often involve sending your personal data or conversations to big tech companies’ servers. Understandably, many users worry: what happens to my data? In response, leading AI providers have rolled out various types of “private” or incognito modes, and a wave of privacy-first alternatives has emerged. In this post, we’ll compare the privacy options in today’s leading AI services – how they handle your info and what controls you have – highlight some privacy-focused alternatives, discuss what it takes to run AI locally on your own hardware, and cover tools and tips to protect your data.


Private Modes in AI Services

Leading hosted AI services now offer some form of “private” or no-history mode to give users more control. Here’s how ChatGPT, Grok (xAI), Claude (Anthropic), Google’s Gemini, and a couple of smaller players stack up on privacy options:

1. ChatGPT – “Temporary Chat” Incognito Mode (OpenAI)


OpenAI’s ChatGPT allows you to turn off chat history and model training on your conversations – essentially an incognito mode. By default, ChatGPT saves your conversations to your account, and OpenAI can use them to further train its models.1 If you’re privacy-conscious, you’ll want to enable the Temporary Chat feature (also called Chat History & Training Off in settings). In Temporary Chat mode, your conversation will not be stored in your history or used for training. However, even in incognito mode, OpenAI still retains the chat on its servers for up to 30 days – the rationale being that they may need to review it for abuse or policy violations. Also keep in mind that OpenAI still collects some metadata (like general usage stats and possibly your IP address) for security and performance, even if the content of your chat isn’t used to improve the AI.

2. Grok – Private Mode (“Temporary Mode”) (xAI)


By default, X fed everything – your public tweets and your interactions with Grok – back into Grok’s training data.2 In response, X introduced settings to opt out: under Privacy & Safety > “Grok & AI”, users can disable the option that allows their posts and Grok chats to be used for training. Additionally, making your X account private (protected tweets) ensures your posts aren’t included in Grok’s learning pool.
xAI also built a Private Chat mode (labeled “Temporary Mode” in the interface), similar to ChatGPT’s incognito. On the Grok app or website, you’ll see a little ghost icon – clicking it activates Private Mode, and a message right on the chat screen confirms: “This chat is temporary and won’t appear in your history or be used to train models. We may securely retain it for up to 30 days for safety purposes.”

3. Claude (Anthropic)


Anthropic ostensibly doesn’t use your conversations with Claude to train or improve their model unless you explicitly give permission.3 Anthropic’s policy states that user prompts and outputs won’t be used for model training unless: (1) the data is flagged for abuse or misuse (in which case they might analyze it to improve safety systems), (2) you opt in by giving feedback, or (3) certain enterprise arrangements apply.
What about data storage and history, though? If you use Claude’s free website (claude.ai) or Claude Pro, your conversations are saved in your account so you can revisit them, but you can delete chats. As with ChatGPT and Grok, when you delete a conversation it’s “removed” from the Claude app immediately but only fully purged from Anthropic’s servers within 30 days. If your prompt trips a big red flag in their safety system (e.g. something that violates their usage policies), they may retain that data longer – up to 2 years – to improve their filters.4

4. Gemini – Activity Controls (Google AI)


Since Gemini is integrated with your Google account, it falls under Google’s broader privacy framework. By default, if you’re an adult user, Google logs and stores your conversations and uses them to improve its AI models.5 In fact, Google may employ human reviewers to read excerpts of conversations (with “identifying info removed”) to help refine the system. Google also collects related metadata – for example, your location (based on your IP or device), general usage patterns, and any feedback you give.
Google provides a limited Activity Control to turn off data collection and training. However, even with activity turned off, Google will still store your recent conversations for up to 72 hours to operate the service and allow follow-ups. This short-term storage is not part of your account history and supposedly isn’t used for improvements – but it does exist on their servers. If any of your past conversations were already reviewed by humans (during the time you had the setting on), those review annotations are kept for up to 3 years. It’s also worth noting that the Gemini interface does not (at the time of writing) have a one-chat incognito button like ChatGPT or Grok.

5. Perplexity AI – “Incognito Mode”


Perplexity is an AI search engine and chatbot known for citing its sources. It offers a true Incognito mode, in which your queries are not saved to your account or history, and they are deleted after 24 hours.

6. Venice AI


Venice is a browser-based AI prompt router6 that does not store or log your prompts or the AI’s responses on any central server, though the LLMs it queries still see the content itself. When you ask a question, it’s routed to a computing node to be answered, but not saved along the way. Your conversation history is kept only in your own browser’s local storage.

Local LLMs: Running AI on Your Device

For the ultimate in privacy, local LLMs – large language models that run entirely on your own hardware – have become viable, albeit mainly for those with better-than-average hardware and some technical know-how.

Researchers have open-sourced many language models (examples: Meta’s LLaMA 2, EleutherAI’s GPT-J and GPT-NeoX, MosaicML’s MPT, etc.). Developers have created optimized versions of these models (using quantization and other tricks to shrink them) so that they can run on ordinary CPUs. Keep in mind, the responses you get will depend on the model you choose – a smaller model might be a bit less coherent or factual.

If you have a gaming PC with a strong GPU, or a Mac with Apple silicon (M1/M2 or newer), you may be able to handle:

  1. GPT4All by Nomic: Free to download, and upon first launch it offers a list of models to download (for example, a 7 billion-parameter LLaMA 2 chat model, among others). Once loaded, all the computation happens on your machine, and the interface is clean and simple, with features like saving chat transcripts and copying answers (a scripted example follows this list).
  2. LM Studio: Another user-friendly GUI for running local models, with a slick interface. It uses the same underlying tech (llama.cpp) to run models on CPU. The advantage of LM Studio is a polished experience for people who don’t want to fiddle with settings.
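
If you’d rather script than click, GPT4All also ships Python bindings. Here is a minimal sketch, assuming pip install gpt4all; the model filename is only an example, so check GPT4All’s current model catalog for the files actually on offer:

```python
# Minimal local-only chat via the gpt4all Python bindings.
# The model file downloads on first use (several GB); after that,
# prompt, computation, and response all stay on your machine.
from gpt4all import GPT4All

# Example filename – substitute any model from GPT4All's catalog.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

with model.chat_session():
    reply = model.generate(
        "Summarize the privacy trade-offs of cloud-based chatbots.",
        max_tokens=200,
    )
    print(reply)
```

Nothing in this snippet talks to a remote server once the model file is on disk – the same privacy property the GUI apps above provide.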

Remember, open models might hallucinate or make mistakes more often than the much more powerful hosted alternatives like ChatGPT. So, manage your expectations: you may need to experiment with prompts or try different model variants to get the results you want. Local AI is all yours, but a bit more limited; you’re trading some performance for privacy.

Perhaps a hybrid approach could work for you: use local LLMs for straightforward tasks, and save the big cloud AIs for more complex tasks that require their extra “IQ” – but only after stripping out any private info. A minimal sketch of that routing idea:
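
Here, ask_local() and ask_cloud() are hypothetical placeholders (in practice, ask_local() might call a GPT4All model as in the earlier sketch, and ask_cloud() a hosted API), and the length check is a deliberately crude stand-in for a real complexity heuristic:

```python
# A hypothetical sketch of hybrid routing: simple prompts stay on-device,
# complex ones go to the cloud only after private info is scrubbed.
import re

def scrub_private_info(text: str) -> str:
    # Toy example: mask email addresses before anything leaves your machine.
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)

def ask_local(prompt: str) -> str:
    return f"(local model would answer: {prompt!r})"   # placeholder

def ask_cloud(prompt: str) -> str:
    return f"(cloud model would answer: {prompt!r})"   # placeholder

def route(prompt: str) -> str:
    if len(prompt) < 500:          # crude "is this task simple?" heuristic
        return ask_local(prompt)   # raw text never leaves the device
    return ask_cloud(scrub_private_info(prompt))  # cloud sees scrubbed text

print(route("Draft a reply to jane@example.com about the overdue invoice." * 20))
```

Speaking of stripping private info, let's look at some extra tools and practices to help you stay private while still benefiting from AI.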

AI Privacy Best Practices & Tools

Anonymize your text and data

Before sharing sensitive documents or text (say, a customer email, legal document, or personal journal entry) with a cloud AI, run them through an offline anonymizer like CamoText so that no personally identifiable information is ever exposed outside your computer.

The cleaned text lets the AI analyze or edit the substance of the text without ever seeing the actual private details. Afterward, you can map the AI’s output back to the real details (a sketch of this round trip appears below). You never upload real names, emails, or numbers to the cloud AI – you maintain control.
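
To make the round trip concrete, here’s a hypothetical sketch of the placeholder-mapping idea. It illustrates the concept only – it is not CamoText’s implementation, and real PII detection takes far more than a couple of regexes:

```python
# Anonymize-then-restore round trip: mask PII with numbered placeholders,
# send only the masked text to the cloud AI, then re-insert the real
# details into the AI's reply locally.
import re

PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "PHONE": r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b",
}

def anonymize(text: str) -> tuple[str, dict[str, str]]:
    """Replace matches with placeholders; return masked text plus mapping."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(re.findall(pattern, text), start=1):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Map the AI's output back to the real details."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

masked, mapping = anonymize("Reach Jane at jane@example.com or 555-867-5309.")
# masked == "Reach Jane at [EMAIL_1] or [PHONE_1]."
# Send the masked text to the cloud AI, then call restore() on its reply.
```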

Use a VPN and tracker-blocking browser

When using web-based AI services, a VPN can add a layer of privacy by hiding your IP address, and thus the origin of your traffic (assuming the data you submit doesn’t itself reveal your location – via image metadata, for example7). It also protects against eavesdropping if you’re on public Wi-Fi. Additionally, tools like Brave Browser or Firefox with privacy extensions (uBlock Origin, Privacy Badger, etc.) will block third-party scripts and cookies that could track your usage of the hosted AI page.

Regularly delete your chat history and account data

For the reasons above (and subject to the retention periods noted there), it’s a good idea to regularly delete your chat history and account data. Every so often, purge your chats, especially any that contained sensitive discussions.

Periodically check settings and updates

AI services are evolving rapidly. New privacy features pop up, and sometimes defaults change – keep an eye on the settings. For example, OpenAI introduced the ability to disable chat history only in 2023; before that, you had to email support to opt out. Periodically review the privacy and data-control settings for the tools you use and make sure they’re configured to your comfort level. Update your apps to the latest versions so you have any new features (like the limited “private modes” described above) available.

The safest way to protect sensitive data is to never put it into a cloud-based AI in the first place.

As a final note: do you trust these big tech companies to abide by their public policies, to process your deletion requests promptly and fully, and to refrain from clandestinely changing their terms on a whim?

Do you trust yourself to regularly check your settings, these policies, and manually delete or update settings when necessary, especially if you use several of the hosted models?

Even if the hosts truly do not retain or see your data, there is always the risk of exploits, man-in-the-middle interception, non-default privacy options being quietly reset, ancillary services or tools collecting data, and more. Anonymizing your text and removing image metadata locally first is an easy first step that substantially mitigates these risks.


Endnotes