ChatGPT Faces GDPR Heat Over AI Hallucinations in Europe
ChatGPT’s false claims spark a new GDPR complaint in Norway, raising questions about AI accuracy and privacy rights in Europe.
Imagine asking a question about yourself online, only to discover a chilling tale of fiction masquerading as fact—a story so dark it could unravel your life. For Arve Hjalmar Holmen, a Norwegian man, this nightmare became reality when ChatGPT, the wildly popular AI chatbot from OpenAI, spun a tale claiming he’d been convicted of murdering two of his children and attempting to kill a third. The truth? None of it happened. Yet, this disturbing falsehood has ignited a fresh privacy battle in Europe, thrusting OpenAI into the crosshairs of the continent’s stringent data protection laws.
On March 20, 2025, the privacy advocacy group Noyb threw its weight behind Holmen, filing a complaint with Norway’s data protection authority. The accusation is stark: ChatGPT’s tendency to “hallucinate”—a term for when AI generates fabricated or nonsensical outputs—violates the European Union’s General Data Protection Regulation (GDPR). This isn’t the first time OpenAI has faced scrutiny over its chatbot’s loose grip on reality, but this case might just be the wake-up call regulators can’t ignore.
A Tale of Fiction and Real Consequences
Holmen’s ordeal began with a simple query: “Who is Arve Hjalmar Holmen?” ChatGPT’s response was a gut punch. It claimed he’d been sentenced to 21 years in prison for a gruesome crime against his own family. While the chatbot correctly noted his hometown and the genders of his three children, it veered into a realm of pure invention with the murder conviction. “The case shocked the local community,” the AI added, piling on details that made the lie sound eerily plausible.
For Holmen, the stakes couldn’t be higher. Falsehoods like these don’t just sting—they can shatter reputations, derail careers, and sow chaos in personal lives. Noyb argues that this mix of truth and fiction is precisely what makes ChatGPT’s hallucinations so dangerous. “It’s not just problematic—it’s unlawful,” says Joakim Söderberg, a data protection lawyer at Noyb. “The GDPR demands accuracy in personal data, and individuals have a right to correct it when it’s wrong.”
Under GDPR, companies like OpenAI, as data controllers, must ensure the personal information they process is accurate. If it’s not, users can demand rectification—a right Holmen says OpenAI denied him. Instead of fixing the error, the company reportedly offered to block responses to prompts about him, a workaround Noyb calls inadequate. “A tiny disclaimer saying ‘ChatGPT can make mistakes’ doesn’t cut it,” Söderberg told reporters. “You can’t spread lies and then shrug it off with a footnote.”
The GDPR’s Teeth: Fines and Forced Fixes
The stakes for OpenAI are steep. GDPR violations can carry fines of up to €20 million or 4% of a company’s global annual turnover, whichever is higher. Those caps are pegged to revenue rather than valuation, but for a company whose valuation soared past $157 billion in 2024, according to Forbes, the exposure is still enormous. And money is only part of it: European regulators can also mandate operational changes, forcing AI developers to rethink how their tools handle personal data.
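For a rough sense of scale, here is a back-of-the-envelope sketch of that fine ceiling. The turnover figure is a hypothetical placeholder chosen purely for illustration, not OpenAI’s reported revenue, and the result is arithmetic, not a prediction of any actual penalty.

```python
# Back-of-the-envelope GDPR fine ceiling (Article 83(5)): the higher of
# EUR 20 million or 4% of worldwide annual turnover for the preceding year.
# The turnover below is a hypothetical placeholder, not OpenAI's actual revenue.
ASSUMED_TURNOVER_EUR = 3_500_000_000   # hypothetical: EUR 3.5 billion
STATUTORY_FLOOR_EUR = 20_000_000       # the fixed EUR 20 million alternative cap

max_fine = max(0.04 * ASSUMED_TURNOVER_EUR, STATUTORY_FLOOR_EUR)
print(f"Theoretical maximum fine: EUR {max_fine:,.0f}")  # EUR 140,000,000
```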
Italy’s data protection authority set a precedent in 2023, briefly banning ChatGPT over privacy concerns. OpenAI scrambled to comply, tweaking its user disclosures, and the regulator later imposed a €15 million fine for processing personal data without a proper legal basis. That episode showed the GDPR’s power to bend even the most innovative tech to its will. Yet, since then, Europe’s privacy watchdogs have trodden cautiously, grappling with how to apply a law written for traditional data systems to the unpredictable world of generative AI.
Ireland’s Data Protection Commission (DPC), a key player given OpenAI’s European base in Dublin, has urged patience. Two years ago, it warned against hasty bans on AI tools, advocating for a measured approach to enforcement. But patience is wearing thin. A separate ChatGPT complaint filed in Poland in September 2023 still lingers unresolved, while an Austrian case from April 2024 sits stalled at the DPC’s desk. “It’s ongoing,” Risteard Byrne, a DPC spokesperson, told TechCrunch this week, offering no timeline for a decision.
Why Does ChatGPT Hallucinate?
To understand the mess, you have to peek under the hood. ChatGPT, built on a large language model, predicts the next word in a sequence based on patterns in its training data—a sprawling, often messy stew of internet text. When faced with gaps or ambiguity, it doesn’t say “I don’t know.” It fills the void with what seems plausible, even if it’s dead wrong.
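A toy sketch of that mechanic is below. It is not OpenAI’s code, and the candidate words and scores are invented; the point is simply that a language model always samples some continuation from a probability distribution rather than declining to answer.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(candidates, logits, temperature=1.0):
    """Pick the next word by sampling; there is no built-in 'I don't know'."""
    probs = softmax([score / temperature for score in logits])
    return random.choices(candidates, weights=probs, k=1)[0]

# Hypothetical continuation of the prompt "Arve Hjalmar Holmen was ..."
candidates = ["convicted", "born", "known", "a"]
logits = [1.2, 1.0, 0.9, 0.6]  # invented scores; real models derive these from billions of parameters
print(sample_next_token(candidates, logits))
```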
In Holmen’s case, Noyb couldn’t pinpoint why ChatGPT conjured a tale of filicide. “We scoured archives and found no similar cases it might’ve confused him with,” a Noyb spokesperson said. One theory: the AI’s training data might be rife with crime stories, nudging it toward dramatic fiction when asked about an obscure name. Whatever the cause, the result was a defamation bomb—and a legal headache for OpenAI.
Interestingly, the chatbot’s behavior has since shifted. Recent tests show it now searches the web for real-time info rather than spinning tales from its memory alone. When I asked ChatGPT about Holmen today, it hedged, saying it “couldn’t find information” before pivoting to call him a “Norwegian musician” with an album titled Honky Tonk Inferno. The murderous fiction seems gone—for now. But Noyb and Holmen worry that false data could still lurk in the model’s depths, ready to resurface.
A Pattern of Trouble
Holmen’s story isn’t unique. ChatGPT has a rap sheet of hallucinations with legal bite. In Australia, a mayor was falsely tied to a bribery scandal. In Germany, a journalist was branded a child abuser. In the U.S., a radio host sued OpenAI after ChatGPT accused him of embezzlement—a case still winding through courts. These incidents underscore a systemic issue: an AI tool used by millions can’t reliably separate fact from fiction when it comes to people’s lives.
A 2024 study by Stanford University found that large language models hallucinate between 3% and 27% of the time, depending on the task. For OpenAI, that’s a dice roll with every query—one that’s landed them in hot water repeatedly. “AI companies act like GDPR doesn’t apply to them,” says Kleanthi Sardeli, another Noyb lawyer. “But it does. If they can’t stop hallucinations, the reputational damage to individuals could be catastrophic.”
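To make that dice-roll image concrete, here is a small, hedged calculation: if each answer about a person carries some chance of containing a hallucination, the odds of at least one bad answer climb quickly with volume. The rates reuse the study’s reported range, and the assumption that queries are independent trials is a simplification.

```python
# How a per-query hallucination rate compounds over many queries,
# assuming (simplistically) that each query is an independent trial.
def prob_at_least_one_error(rate_per_query, num_queries):
    return 1 - (1 - rate_per_query) ** num_queries

for rate in (0.03, 0.27):            # low and high ends of the reported range
    for n in (10, 100):
        p = prob_at_least_one_error(rate, n)
        print(f"rate={rate:.0%}, queries={n}: P(>=1 hallucination) = {p:.1%}")
```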
Norway vs. Ireland: A Regulatory Tug-of-War
Noyb’s latest move is strategic. By filing in Norway and targeting OpenAI’s U.S. entity, it hopes to sidestep Ireland’s DPC, which has become a bottleneck for GDPR cases against Big Tech. OpenAI shifted its European operations to Dublin in 2024, leveraging the GDPR’s “one-stop-shop” rule, where a single regulator oversees cross-border data issues. Critics, including privacy advocates, call Ireland a soft touch—slow to act and light on penalties compared to peers like Italy.
The DPC’s track record fuels that skepticism. A 2023 Reuters report noted it took an average of 18 months to resolve major cases, far longer than the EU average. Meanwhile, Italy’s watchdog moved fast to curb ChatGPT, proving agility matters. Noyb’s Norwegian gambit tests whether local authorities can still flex muscle when AI missteps hit close to home.
What’s Next for OpenAI and AI Privacy?
This complaint could be a tipping point. If Norway’s authority takes it up and rules against OpenAI, it might force a reckoning on how AI handles personal data, not just in Europe but globally. The U.S., where privacy laws lag behind the GDPR, could take note as lawsuits pile up. Meanwhile, the EU’s AI Act, whose main obligations phase in through 2026, looms as another layer of oversight, promising tougher rules on transparency and risk.
For everyday users, the stakes are personal. How do you trust a tool that might smear your name with a keystroke? OpenAI has stayed mum on this latest complaint, but its past responses lean on disclaimers and technical tweaks. That might not suffice this time. As Söderberg puts it, “The technology has to bend to the law—not the other way around.”
A Call to Action
The Holmen case isn’t just about one man’s fight—it’s a litmus test for AI’s place in our lives. For readers, it’s a reminder to double-check what chatbots churn out, especially about people. For regulators, it’s a nudge to act before hallucinations spiral further. And for OpenAI, it’s a chance to prove it can tame its creation—or face the GDPR’s full wrath.
As AI races ahead, blending marvel and mischief, one thing’s clear: the truth still matters. Holmen’s story, twisted by code into something unrecognizable, shows the human cost when it doesn’t. Will Europe’s privacy guardians rise to the challenge? Only time—and perhaps a hefty fine—will tell.
(Disclaimer: This article is based on publicly available information, and reflects the latest developments in the ChatGPT GDPR complaint filed in Norway. It is intended for informational purposes only and does not constitute legal advice. For the most current updates, consult official statements from OpenAI, Noyb, or relevant data protection authorities.)