GDPR and AI Chatbots: What You Need to Know in 2026
The General Data Protection Regulation (GDPR) is the world's strongest data protection law. But AI chatbots are testing its limits. Every major AI assistant collects conversational data, and most use it for model training — often without meaningful consent. Here is what GDPR says, what companies are doing, and how to protect yourself.
What GDPR requires
GDPR establishes clear rules for how personal data must be handled:
- Lawful basis: Companies need a valid legal reason to process your data — usually consent or legitimate interest
- Purpose limitation: Data collected for one purpose cannot be used for another without consent
- Data minimisation: Companies should collect only the data they need
- Right to access: You can request a copy of all data a company holds about you
- Right to erasure: You can request deletion of your data (the "right to be forgotten")
- Right to object: You can object to automated processing, including profiling
- Data portability: You can request your data in a machine-readable format
- Breach notification: Companies must report data breaches to the supervisory authority within 72 hours (Article 33). Data subjects must be informed without undue delay when the breach poses a high risk (Article 34)
Violations carry two tiers of fines. The upper tier (Art. 83(5)) covers violations of core principles and data subject rights: up to 4% of global annual turnover or €20 million, whichever is higher. The lower tier (Art. 83(4)) covers administrative and technical obligations: up to 2% of global annual turnover or €10 million, again whichever is higher.
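The two-tier cap is simply the larger of a fixed ceiling and a percentage of global annual turnover. As an illustrative sketch (the function name and turnover figure are hypothetical, not from any official tool):

```python
def gdpr_fine_cap(global_annual_turnover_eur: float, upper_tier: bool) -> float:
    """Maximum fine under GDPR Art. 83(4)/(5).

    Both tiers take whichever is higher: the fixed ceiling or the
    percentage of global annual turnover.
    """
    if upper_tier:  # Art. 83(5): core principles, data subject rights
        return max(20_000_000, 0.04 * global_annual_turnover_eur)
    # Art. 83(4): administrative and technical obligations
    return max(10_000_000, 0.02 * global_annual_turnover_eur)

# Example: a company with €100 billion in global annual turnover
print(gdpr_fine_cap(100e9, upper_tier=True))   # 4% of turnover: €4 billion
print(gdpr_fine_cap(100e9, upper_tier=False))  # 2% of turnover: €2 billion
```

For a small company, the fixed ceilings dominate: at €1 million turnover, the upper-tier cap is still €20 million.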
How AI chatbots violate GDPR
Despite these protections, AI chatbot companies have repeatedly violated GDPR:
Training on user data without valid consent
ChatGPT, Claude, Gemini, Grok, Copilot, Meta AI, Perplexity, Mistral, and Character.AI all train on user conversations by default. Under GDPR, using personal data for a new purpose (AI training) requires separate, informed, freely given consent. Burying an opt-out in settings does not meet this standard.
Dark patterns in opt-out interfaces
Anthropic (Claude) was criticised for introducing an opt-out interface that privacy advocates described as a "dark pattern" — designed to discourage users from opting out. Under GDPR, withdrawing consent must be as easy as giving it.
Gating privacy behind paywalls
Mistral initially made the training opt-out available only on paid plans. In February 2025, French lawyer Jeremy Roche filed a GDPR complaint with CNIL, arguing that Article 12 requires privacy controls to be freely accessible regardless of payment tier.
Notable GDPR enforcement actions against AI companies
| Company | Fine/Action | Year |
|---|---|---|
| Meta | ~€2.5 billion in cumulative GDPR fines (including €1.2B for data transfers, €390M for behavioural advertising, €251M in 2024, €225M for WhatsApp) | 2021-2025 |
| OpenAI (ChatGPT) | €15 million fine (Italy) | 2024 |
| DeepSeek | Banned in Italy (January 30, 2025), under investigation across EU | 2025 |
| Grok/X | NOYB complaints filed across 9 EU countries | August 2024 |
| Mistral | GDPR complaint for paywall-gated opt-out (filed with CNIL) | February 2025 |
| Microsoft | Paused Copilot training in EEA to avoid enforcement | August 2024 |
The EU AI Act adds a new layer
The EU AI Act, which entered into force in August 2024 and is being implemented in stages through 2027, complements GDPR with AI-specific rules. Key dates: prohibited practices took effect February 2025, general-purpose AI (GPAI) rules apply from August 2025, high-risk system requirements from August 2026, and full applicability from August 2027.
- Risk classification: AI systems are classified as minimal, limited, high, or unacceptable risk
- Transparency requirements: AI systems must disclose when content is AI-generated
- Prohibited practices: Social scoring, manipulative AI techniques, and real-time remote biometric identification in publicly accessible spaces are banned, with narrow law-enforcement exceptions for terrorism threats, missing persons searches, and serious crime investigations
- Foundation model obligations: Providers of general-purpose AI models must document training data (a mandatory training data summary template was published in July 2025), comply with copyright law, and publish energy consumption data
Together, GDPR and the AI Act create the most comprehensive AI governance framework in the world.
What to look for in a privacy-respecting AI assistant
When choosing an AI chatbot, ask these questions:
- Where is my data stored? Look for EU-only data centres. Avoid services that route data through US or Chinese servers.
- Is my data used for training? Read the privacy policy. If you are enrolled in training by default and must actively opt out, the company profits from your data.
- Who funds the company? Big Tech investors (Microsoft, Amazon, Google) create conflicts of interest. An AI company funded by the same companies that profit from your data cannot be truly independent.
- Is there a meaningful opt-out? An opt-out buried in settings, gated behind payment, or not applied retroactively does not amount to meaningful consent.
- Does the company have EU presence? Companies without European offices or data centres are harder to hold accountable under GDPR.
eustella: GDPR compliant by design
eustella is designed from the ground up to comply with GDPR and the EU AI Act:
- All data stored in EU data centres — no transfers to the US, China, or any third country
- No training on user data — your conversations are never used to improve models
- No data selling or sharing — your data is yours
- European and open-source models only — transparent, auditable, independent from Big Tech
- Built by an independent European company — AI Newsrooms Technology GmbH, Vienna, Austria
eustella does not need an opt-out because it never opts you in.
Sign up for early access to eustella →
Sources
- GDPR Articles 33 & 34 (breach notification): gdpr-info.eu
- GDPR Article 83 (fine tiers): gdpr-info.eu
- OpenAI €15M fine (Italy, 2024): Euronews
- DeepSeek banned in Italy (January 2025): Euronews
- Grok/X NOYB complaints (August 2024): noyb.eu
- Mistral GDPR complaint (February 2025): Sifted
- Microsoft Copilot training pause (August 2024): Microsoft Blog
- Anthropic dark pattern criticism: The Decoder
- EU AI Act implementation timeline: artificialintelligenceact.eu
- EU AI Act Article 5 (prohibited practices): artificialintelligenceact.eu
- AI training data summary template (July 2025): WilmerHale
- GPAI energy consumption requirements: White & Case