Using artificial intelligence tools has become an everyday thing. We summarize texts, generate images, write code, and even ask ChatGPT for personal advice. But most users don’t stop to think about something fundamental: what happens to the data we share with these AIs? In this article I’ll explain how to protect your privacy when using AI apps without giving up their benefits.
What data do AI apps collect
Before protecting yourself, you need to know what you’re protecting against. AI apps collect more data than you think:
What’s normally collected:
- Your conversations (everything you type and the responses you get)
- Usage data: frequency, duration, types of queries
- Device information: model, operating system, approximate location
- Files you upload: PDFs, images, documents
- Account data: email, name, phone number
What many companies do with that data:
- Use it to train and improve their AI models
- Store it indefinitely on their servers
- Share it with partners and service providers
- Use it to personalize ads
This doesn’t mean all AI apps are malicious. But it means the information you share can be stored and used in ways you didn’t expect. I’ve seen people write sensitive financial information, personal medical data, and even passwords directly into ChatGPT without thinking twice.
Pro-tip: The golden rule: don't write anything into an AI that you wouldn't be comfortable sharing with a stranger. Your conversations may be reviewed by humans to improve the service, and that's in the terms of use almost nobody reads.
How to limit the data you share with AI
Now that you know what’s collected, let’s look at how to minimize the information you share:
Disable training with your data
Most AI platforms have an option to exclude your conversations from model training:
ChatGPT (OpenAI):
- Go to chat.openai.com > Settings.
- Enter Data Controls.
- Turn off “Improve the model for everyone.”
- You can also disable chat history so conversations aren’t saved to your account.
Google Gemini:
- Go to myactivity.google.com.
- Look for “Gemini Apps Activity.”
- Turn off activity saving.
- You can delete existing history from there.
Microsoft Copilot:
- Go to your Microsoft account settings.
- Look for Privacy > Copilot Data.
- Turn off data usage for service improvement.
Don’t upload sensitive information
This seems obvious, but the temptation is real. When an AI offers to analyze a document, it’s easy to upload it without thinking.
Never upload:
- Documents with social security numbers or ID numbers
- Bank statements with full account details
- Medical reports with your full name
- Passwords or access keys
- Confidential legal documents
- Photos of your ID or passport
If you need to analyze sensitive documents:
- Delete or redact personal information before uploading
- Use tools that process locally (like PDFgear)
- Anonymize data: change names and replace account numbers with placeholders
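The redaction step above can even be automated. Here's a minimal sketch in Python using only the standard library; the regex patterns are purely illustrative (they assume US-style ID formats and won't catch every variant), so always review the output by hand before uploading anything:

```python
import re

# Illustrative patterns only -- real PII detection needs more robust tooling.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    # US-style SSN (123-45-6789); adapt to your country's ID format.
    "id_number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # Long digit runs that look like account or card numbers.
    "account": re.compile(r"\b\d{10,19}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with [REDACTED-<type>] placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

print(redact("Contact john.doe@example.com, SSN 123-45-6789."))
```

Running the redacted text through the AI instead of the original means the service never sees the real identifiers, even if the conversation is stored.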
Use separate accounts
If you use AI for work and personal use, consider having separate accounts. This prevents personal conversations from mixing with professional data, and if one is compromised, the other isn’t affected.
Essential privacy settings
Within each AI app, there are settings you should review immediately:
Conversation history: Most AIs save all your conversations by default. This is useful for returning to old queries, but it means your data is stored indefinitely. If you don’t need history, disable it.
Data-based personalization: ChatGPT has a memory feature that remembers your preferences and data across conversations. It’s useful, but it means OpenAI stores a profile of you. Review what it has memorized and delete what you don’t want.
Integration with other apps: Many AIs can connect to your Gmail, calendar, Google Drive, etc. Each integration is an additional door to your data. Only activate the ones you really need.
Sharing conversations: Some AIs allow sharing conversations publicly. Make sure this feature is disabled if you don’t use it, and never share conversations with personal data.
| Setting | ChatGPT | Gemini | Copilot |
|---|---|---|---|
| Disable history | Yes | Yes | Yes |
| Exclude from training | Yes | Yes | Yes |
| Delete existing data | Yes | Yes | Yes |
| Disable memory | Yes | Limited | No |
| Delete full account | Yes | Yes | Yes |
Pro-tip: Do an audit of your AI apps once a month. Review what data they have stored, delete old conversations, and verify that privacy options are still set the way you left them. Apps update their policies constantly.
More private alternatives
If privacy is your absolute priority, there are alternatives to the big platforms:
On-device AI: As I explained in another article, on-device AI processes everything locally. Apple Intelligence, Galaxy AI features, and tools like LM Studio on laptops don’t send your data to any server.
Open-source tools: Applications like Ollama let you run AI models locally on your computer. No external server, no data collection. The trade-off is you need a powerful computer.
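To make the "no external server" point concrete: Ollama serves models over a local HTTP API (by default at localhost:11434). Here's a minimal sketch, assuming you've already pulled a model — the name `llama3.2` is just an example — showing that the query never needs to leave your machine:

```python
import json
from urllib import request

# Ollama's default local endpoint -- traffic stays on your own machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "llama3.2") -> bytes:
    """Build the JSON body for a non-streaming generate request."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_local(prompt: str) -> str:
    """Send the prompt to the local Ollama server and return its response."""
    req = request.Request(
        OLLAMA_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Only works if an Ollama server is actually running locally, e.g.:
# print(ask_local("Summarize this paragraph: ..."))
```

Because the endpoint is localhost, you can analyze exactly the kind of sensitive documents you'd never upload to a cloud AI.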
Browser privacy: If you use a web-based AI, do it in an incognito window or in a browser like Brave that blocks trackers. Don’t log in if it’s not necessary for your task.
VPN + AI: Using a VPN when accessing AI services hides your IP address and location, though it doesn’t prevent the AI from collecting what you write.
Privacy checklist for AI apps
Before continuing to use any AI app, ask yourself these questions:
- Have I disabled using my data to train models?
- Have I reviewed and deleted conversations with sensitive information?
- Have I disabled memory/personalization if I don’t need it?
- Do I have two-factor authentication enabled on my account?
- Have I reviewed which other app integrations are active?
- Do I know how to delete all my data if I want to stop using the service?
- Have I avoided uploading documents with identifiable personal data?
- Have I reviewed the service’s data retention policy?
FAQ: Frequently asked questions
Do AI companies read my conversations?
Yes, it’s possible. OpenAI, Google, and Microsoft admit that employees may review conversations to improve the service, especially if there are reports of inappropriate content. Disabling training reduces this, but doesn’t eliminate it completely.
Can I delete all my data from an AI app?
Most platforms let you delete your history and account. But data already used to train models generally can't be removed from those models afterwards. In the EU, the GDPR gives you the right to request complete deletion of your stored data.
Does using a VPN protect my privacy with AI?
Partially. A VPN hides your location and IP, but the AI still sees everything you write. The VPN protects data transport, not the content of your conversations.
Is on-device AI better than cloud AI for privacy?
Yes, absolutely. If privacy is your priority, on-device AI is the most secure option because your data never leaves your device. The downside is that local models are less powerful.
Conclusion
Artificial intelligence is an incredibly useful tool, but it’s not free in terms of privacy. Every conversation, every uploaded document, every personal question you ask is information a company stores. The good news is you have control. Disable training with your data, don’t upload sensitive information, review your settings regularly, and consider on-device alternatives if privacy is critical for you. It’s not about stopping using AI, but about using it intelligently and consciously.
TecnoOrange