Anthropic has responded to allegations that it used an AI-fabricated source in its legal battle against music publishers, saying its Claude chatbot made an “honest citation mistake.”
In a response filed on Thursday, Anthropic defense attorney Ivana Dukanovic said that the scrutinized source was genuine and that Claude had indeed been used to format legal citations in the document. While incorrect volume and page numbers generated by the chatbot were caught and corrected by a “manual citation check,” Anthropic admits that wording errors went undetected.
Dukanovic said, “unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors,” and that the error wasn’t a “fabrication of authority.” The company apologized for the inaccuracy and confusion caused by the citation error, calling it “an embarrassing and unintentional mistake.”
This is one of a growing number of examples of how using AI tools for legal citations has caused problems in courtrooms. Last week, a California judge chastised two law firms for failing to disclose that AI was used to create a supplemental brief rife with “bogus” materials that “didn’t exist.” A misinformation expert admitted in December that ChatGPT had hallucinated citations in a legal filing he’d submitted.