OpenAI Notified: ChatGPT Violates GDPR

Image created using DALL·E 3 with this prompt: Create an image that captures the metaphorical fight between the EU and OpenAI. It’s a boxing match between the Italian Data Protection Authority (DPA) and OpenAI over the use of ChatGPT. Aspect ratio: 16×9.

 

The Italian Data Protection Authority (DPA) has informed OpenAI that it suspects ChatGPT of violating European Union privacy laws. The Italian DPA’s investigation, which has been ongoing for several months, has led to preliminary conclusions that ChatGPT is in breach of EU law. The concerns are primarily related to the mass collection of data used to train the AI models and the lack of a suitable legal basis for such collection and processing of personal data. Additionally, the DPA has highlighted issues with the AI tool’s potential to produce inaccurate information about individuals (“hallucinations”) and raised concerns about child safety due to the absence of an age verification mechanism.

Under the EU’s General Data Protection Regulation (GDPR), companies found to have violated data protection rules can face fines of up to €20 million or 4% of global annual turnover, whichever is higher. OpenAI has been given 30 days to respond to the allegations, and the company may face significant fines if the violations are confirmed.

The Italian DPA’s actions are part of a broader effort coordinated by the European Data Protection Board to oversee ChatGPT, although individual authorities remain independent in their decision-making. The investigation by the Italian DPA follows a temporary ban on ChatGPT in Italy in March 2023 due to privacy concerns; this ban was lifted approximately four weeks later after OpenAI addressed the issues raised.

OpenAI has stated, “We believe our practices align with GDPR and other privacy laws, and we take additional steps to protect people’s data and privacy. We want our AI to learn about the world, not about private individuals. We actively work to reduce personal data in training our systems like ChatGPT, which also rejects requests for private or sensitive information about people. We plan to continue to work constructively with the Garante.”

Once again, we see the EU demonstrating its overarching tech strategy: sue, then settle with big tech. This strategy, which has worked for the past 20 years or so, is going to fail spectacularly; GDPR is the wrong tool for regulating AI and, maybe more importantly, the EU needs big AI far more than big AI needs the EU. Addio grande tecnologia (goodbye, big tech).

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it. This work was created with the assistance of various generative AI models.

About Shelly Palmer

Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN and writes a popular daily business blog. He's a bestselling author, and the creator of the popular, free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.

