ChatGPT accused of claiming an innocent man murdered his children


A privacy complaint has been filed against OpenAI by a Norwegian man who claims that ChatGPT described him as a convicted murderer who killed two of his own children and attempted to kill a third.

Arve Hjalmar Holmen says that he wanted to find out what ChatGPT would say about him, but was presented with the false claim that he had been convicted of both murder and attempted murder, and was serving 21 years in a Norwegian prison. Alarmingly, the ChatGPT output mixes fictitious details with facts, including his hometown and the number and gender of his children.

Austrian advocacy group Noyb filed a complaint with the Norwegian Datatilsynet on behalf of Holmen, accusing OpenAI of violating the data privacy requirements of the European Union’s General Data Protection Regulation (GDPR). It is asking for the company to be fined and ordered to remove the defamatory output and improve its model to avoid similar errors.

“The GDPR is clear. Personal data has to be accurate. And if it’s not, users have the right to have it changed to reflect the truth,” says Joakim Söderberg, data protection lawyer at Noyb. “Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may not be true.”

Noyb and Holmen haven’t publicly revealed when the initial ChatGPT query was made (the detail is included in the official complaint, but redacted for its public release), but say that it was before ChatGPT was updated to include web searches in its results. Enter the same query now, and the results all relate to Noyb’s complaint instead.

This is Noyb’s second official complaint regarding ChatGPT, though the first had lower stakes: in April 2024 it filed one on behalf of a public figure whose date of birth was being inaccurately reported by the AI tool. At the time it took issue with OpenAI’s claim that erroneous data could not be corrected, only blocked in relation to specific queries, which Noyb says violates the GDPR’s requirement for inaccurate data to be “erased or rectified without delay.”
