Private AI Options
April 22, 2025
LLMs and AI services often involve sending your personal data or conversations to big tech companies’ servers. Understandably, many users worry: what happens to my data? In response, leading AI providers have rolled out various “private” or incognito modes, and a wave of privacy-first alternatives has emerged. In this post, we’ll compare the privacy options of current services and how they handle your info (and what controls you have), highlight some privacy-focused alternatives, discuss what it takes to run AI locally on your own hardware, and cover tools and tips to protect your data.

Avoid the eyes of AI.
Private Modes in AI Services
Several leading hosted AI services now offer some form of “private” or no-history mode to give users more control. Here’s how ChatGPT, Grok (xAI), Claude (Anthropic), Google’s Gemini, Perplexity, and Venice stack up on privacy options (with specific voice mode considerations for the first four):
1. ChatGPT – “Temporary Chat” Incognito Mode (OpenAI)
OpenAI’s ChatGPT allows you to turn off chat history and model training on your conversations – essentially an incognito mode. By default, ChatGPT does save your conversations to your account, and OpenAI can use them to further train its models.1 If you’re privacy-conscious, you’ll want to enable the Temporary Chat feature (also called Chat History & Training Off in settings). In Temporary Chat mode, your conversation will not be stored in your history or used for training. However, even in incognito mode, OpenAI will still retain the chat on their servers for up to 30 days (their rationale: in case they need to review it for abuse or policy violations). Also keep in mind that OpenAI still collects some metadata (like general usage stats and possibly your IP address) for security and performance, even if the content of your chat isn’t used to improve the AI.
Voice Mode Considerations: ChatGPT's voice mode (GPT-4o) is not available in Temporary Chat mode, meaning users seeking ephemeral conversations cannot use voice features.8 When voice mode is used, audio recordings are retained for 30 days by default, and transcripts may be retained longer for model improvement unless you opt out of training. Voice inputs may be sampled for quality assurance and safety review, potentially involving human reviewers. Opting out of training reduces but does not eliminate voice data retention and potential access.
2. Grok – Private Mode (“Temporary Mode”) (xAI)
By default, X fed everything – your public tweets and your interactions with Grok – back into Grok’s training data.2 In response, X introduced settings to opt out: under Privacy & Safety > “Grok & AI”, users can disable the option to allow their posts and Grok chats to be used for training. Additionally, making your X account private (protected tweets) will ensure your posts aren’t included in Grok’s learning pool.
xAI built a Private Chat mode (also labeled “Temporary Mode” in the interface) similar to ChatGPT’s incognito. On the Grok app or website, you’ll see a little ghost icon – clicking it activates Private Mode, and a message right on the chat screen confirms: “This chat is temporary and won’t appear in your history or be used to train models. We may securely retain it for up to 30 days for safety purposes.”
Voice Mode Considerations: Grok's voice mode interactions are linked to your X (Twitter) account, meaning voice data may be associated with your broader social media profile.8 Voice interactions contribute to Grok's training data by default, and the privacy policy indicates broad data usage rights. Voice features are not available in Private Mode, preventing truly private voice interactions. Retention periods for voice data are not precisely specified in xAI's documentation.
3. Claude (Anthropic)
Anthropic ostensibly doesn’t use your conversations with Claude to train or improve their model, unless you explicitly give permission.3 Anthropic’s policy states that user prompts and outputs won’t be used for model training unless: (1) the data is flagged for abuse/misuse (in which case they might analyze it to improve safety systems), (2) you explicitly opt in by giving feedback, or (3) certain enterprise arrangements apply.
However, what about data storage and history? If you use Claude’s free website (claude.ai) or Claude Pro, your conversations are saved in your account so you can revisit them, but you can delete chats. As with ChatGPT and Grok, when you delete a conversation it’s “removed” from the Claude app immediately but not fully purged from Anthropic’s servers for up to 30 days. If your prompt triggers a big red flag in their safety system (e.g. something that violates their usage policies), they might retain that data longer (up to 2 years) to improve their filters.4
Voice Mode Considerations: Claude's voice features involve server-side processing of audio through speech-to-text services, and voice interactions are not fully ephemeral.8 Audio is retained for service improvement purposes, though retention periods are not precisely specified in consumer documentation. Voice features have limitations in privacy-focused usage scenarios, and full voice conversation history is typically retained for context. Enterprise tiers may offer different retention terms and enhanced privacy controls compared to consumer versions.
4. Gemini – Activity Controls (Google AI)
Since Gemini is integrated with your Google account, it falls under Google’s broader privacy framework. By default, if you’re an adult user, Google logs and stores your conversations and uses them to improve its AI models.5 In fact, Google may employ human reviewers to read excerpts of conversations (with “identifying info removed”) to help refine the system. Google also collects related metadata – for example, your location (based on your IP or device), general usage patterns, and any feedback you give.
Google provides a limited Activity Control to turn off data collection and training. However, even with activity turned off, Google will still store your recent conversations for up to 72 hours to operate the service and allow follow-ups. This short-term storage is not part of your account history and supposedly not used for improvements – but it exists on their servers. If any of your past conversations were already reviewed by humans (during the time you had the setting on), those review annotations are kept for up to 3 years. It’s also worth noting that the Gemini interface does not (at the time of writing) have a one-chat incognito button like ChatGPT or Grok.
Voice Mode Considerations: Gemini's voice features (Gemini Live/Gemini Voice) require Gemini Apps Activity to be enabled, meaning you cannot use voice in a truly private mode.8 Voice data is retained for up to 18 months by default and can be managed through Google Activity Controls, though deletion may not remove data from all backup systems immediately. Voice recordings are linked to your Google account and may be connected to cross-service data (Search, Assistant history). Human reviewers may access voice recordings for quality improvement, and data may inform advertising profiles for free tier users.
5. Perplexity AI – “Incognito Mode”
Perplexity is an AI search engine and chatbot known for providing cited sources. Perplexity offers a true Incognito search mode, in which your queries are not saved to your account or history and disappear after 24 hours.
6. Venice AI
Venice is a browser-based AI prompt router6 that does not store or log your prompts or the AI’s responses on any central server, though the LLMs it queries still see the content itself. When you ask a question, it’s routed to a compute node for an answer but not saved along the way. Your conversation history is kept only in your own browser’s local storage.
Information Extractable from Voice Recordings
Voice can reveal far more than the spoken text.
- Identity: Voice biometrics can identify individuals across recordings.
- Demographics: Age, gender, ethnicity, and geographic origin can be inferred.
- Emotional State: Stress, anxiety, happiness, fatigue, and other emotions are detectable.
- Health Indicators: Certain medical conditions and neurological disorders manifest in voice patterns.
- Environment: Background sounds reveal location type, companions, and activities.
- Linguistic Profile: Education level, socioeconomic status, and cognitive patterns can be inferred from vocabulary and phrasing.
Source: AI Privacy Pro
Local LLMs: Running AI on Your Device
For the ultimate in privacy, local LLMs – large language models that run entirely on your own hardware – have become viable, albeit mainly for those with better-than-average hardware and some technical know-how.
Researchers have open-sourced many language models, and developers have created optimized versions (using quantization and other techniques to shrink them) so that they can run on ordinary CPUs or consumer GPUs. Keep in mind, the responses you get will depend on the model you choose – smaller models may be less coherent or factual than larger ones. If you have a gaming PC with a strong GPU or a Mac with a modern Apple Silicon chip, you may be able to run more capable models locally.
Various user-friendly applications are available that provide graphical interfaces for downloading and running local models. These tools typically offer features like model selection, chat interfaces, and the ability to save conversation transcripts. The main advantage is that all computation happens entirely on your machine, giving you complete control over your data and privacy.
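If you prefer a scriptable route over a graphical app, here’s a minimal sketch using the open-source llama-cpp-python library. The model filename is a placeholder – point it at whatever quantized GGUF model you’ve downloaded:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder path: any quantized GGUF model you've downloaded
# (e.g. from Hugging Face) will do.
llm = Llama(model_path="models/your-model.Q4_K_M.gguf", n_ctx=2048)

# Inference happens entirely on your hardware; nothing leaves the machine.
response = llm(
    "Summarize the trade-offs of running language models locally.",
    max_tokens=256,
)
print(response["choices"][0]["text"])
```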
Remember, open models might hallucinate or make mistakes more often than the much more powerful hosted alternatives like ChatGPT. So, manage your expectations: you may need to experiment with prompts or try different variants of models to get the results you want. Local AI is all yours, but a bit more limited; it’s a trade-off between privacy and performance.
Perhaps a hybrid approach could work for you: use local LLMs for straightforward tasks, and use the big cloud AIs for more complex tasks that require their extra “IQ”, but only after stripping out any private info. Speaking of stripping private info, let's look at some extra tools and practices to help you stay private while still benefiting from AI.
AI Privacy Best Practices & Tools
Anonymize your text and data
Before sending sensitive documents or text (say, a customer email, legal document, or personal journal entry) to a cloud AI, run them through an offline anonymizer like CamoText so that no personally identifiable information ever leaves your computer.
The cleaned text lets the AI do analysis or editing on the substance of the text without ever seeing the actual private details. Afterward, you can map the AI’s output back to the real details. It means you never upload real names, emails, or numbers to the cloud AI – you maintain control.
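To make the technique concrete, here’s a minimal sketch of placeholder substitution in Python. This illustrates the general idea, not how CamoText works internally – production anonymizers also use named-entity recognition to catch names, which simple regexes miss:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def anonymize(text: str):
    """Swap emails and phone numbers for numbered placeholders,
    returning the cleaned text plus a mapping to restore them later."""
    mapping = {}

    def substitute(pattern, label, s):
        def repl(match):
            token = f"[{label}_{len(mapping) + 1}]"
            mapping[token] = match.group(0)
            return token
        return pattern.sub(repl, s)

    text = substitute(EMAIL, "EMAIL", text)
    text = substitute(PHONE, "PHONE", text)
    return text, mapping

def restore(text: str, mapping: dict) -> str:
    """Map the AI's output back to the real details."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

clean, mapping = anonymize("Contact Jane at jane.doe@example.com or 555-123-4567.")
print(clean)  # Contact Jane at [EMAIL_1] or [PHONE_2].
# ...send `clean` to the cloud AI, then map its output back:
print(restore(clean, mapping))
```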
Use a VPN and tracker-blocking browser
When using web-based AI services, a VPN can add a layer of privacy by hiding your IP address and thus the origin of your traffic (assuming the submitted data itself doesn't reveal your location, such as image metadata7). It also protects against eavesdropping if you’re on public Wi-Fi. Additionally, tools like Brave Browser or Firefox with privacy extensions (uBlock Origin, Privacy Badger, etc.) will block third-party scripts or cookies that could track your usage of the hosted AI page.
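On the image-metadata point specifically, stripping EXIF locally before upload is straightforward. Here’s a minimal sketch using the Pillow library – an illustration of the technique, not a substitute for a dedicated tool like CamoPhoto:

```python
# pip install Pillow
from PIL import Image

def strip_metadata(src: str, dst: str) -> None:
    """Re-save an image from its raw pixels only, dropping EXIF
    (including GPS coordinates) and other embedded metadata."""
    img = Image.open(src)
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst)

strip_metadata("photo.jpg", "photo_clean.jpg")
```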
Dictate with a dedicated app
Instead of using an AI service's voice mode, consider using a dedicated speech-to-text desktop app like CamoVoice to dictate your conversations privately and offline, then paste the dictated text into the text input field for a private or incognito chat. This provides the speed and convenience of spoken prompting without the privacy risks.
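For a scriptable alternative, the open-source Whisper model transcribes speech entirely offline once downloaded. A minimal sketch, assuming the openai-whisper package and ffmpeg are installed and you’ve recorded an audio file locally:

```python
# pip install openai-whisper  (also requires ffmpeg on your PATH)
import whisper

# The model weights download once; transcription then runs fully offline.
model = whisper.load_model("base")
result = model.transcribe("dictation.wav")

# Paste this text into a private/incognito chat rather than speaking
# directly to a hosted service's voice mode.
print(result["text"])
```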
Regularly delete your chat history and account data
For the reasons covered above (and subject to the retention periods noted there), it's a good idea to regularly delete your chat history and account data. Every so often, purge your chats, especially any that contained sensitive discussions.
Periodically check settings and updates
AI services are evolving rapidly. New privacy features pop up, and sometimes defaults change – keep an eye on the settings. For example, OpenAI introduced the ability to disable chat history only in mid-2023 – before that, you had to email support to opt out. Periodically review the privacy/data control settings for the tools you use and ensure they’re configured to your comfort level. Update your apps to the latest versions so you have any new features (like the aforementioned limited “private mode”) available.
The safest way to protect sensitive data is to never put it into a cloud-based AI in the first place.
As a final note: do you trust these big tech companies to abide by their public policies, to process your deletion requests promptly and fully, and to refrain from clandestinely changing their terms on a whim?
Do you trust yourself to regularly check your settings, these policies, and manually delete or update settings when necessary, especially if you use several of the hosted models?
Even if the hosts truly do not retain or see your data, there is always the risk of exploits, man-in-the-middle attacks, updates quietly resetting non-default privacy options, other ancillary services or tools collecting data, and more. Anonymizing your text and removing image metadata locally first is an easy first step that substantially mitigates these risks.
See Use Any AI Privately for details on how handling privacy at the very first step (before any data transmission) substantially mitigates downstream exposure risks and enables you to use any AI service more securely.
Endnotes
1. ZDNet – ChatGPT Privacy Tips
2. X Help – About Grok
3. Anthropic – Is My Data Used for Model Training?
4. Anthropic – How Long Do You Store Personal Data?
5. Google Support – Gemini Activity Controls
6. Venice AI
7. CamoPhoto – Metadata Remover
8. AI Privacy Pro – Voice Mode Privacy Concerns: What LLM Providers Collect When You Speak
