Anthropic’s lawyer forced to apologize after Claude hallucinated a legal citation



A lawyer representing Anthropic admitted to using an erroneous citation created by the company’s Claude AI chatbot in its ongoing legal battle with music publishers, according to a filing made in a Northern California court on Thursday.

Claude hallucinated the citation with “an inaccurate title and inaccurate authors,” Anthropic says in the filing, first reported by Bloomberg. Anthropic’s lawyers explain that their “manual citation check” did not catch it, nor several other errors caused by Claude’s hallucinations.

Anthropic apologized for the error, calling it “an honest citation mistake and not a fabrication of authority.”

Earlier this week, lawyers representing Universal Music Group and other music publishers accused Anthropic’s expert witness of using Claude to cite fake articles in her testimony. Federal judge Susan van Keulen then ordered Anthropic to respond to these allegations.

The music publishers’ lawsuit is one of several disputes between copyright owners and technology companies over the alleged misuse of their work to create generative AI tools.

This is the latest instance of lawyers using AI in court and coming to regret it. Earlier this week, a California judge sanctioned a pair of law firms for submitting “bogus AI-generated research” in his court. In January, an Australian lawyer was caught using ChatGPT to prepare court documents, and the chatbot produced faulty citations.

Still, these mistakes aren’t stopping startups from raising huge rounds to automate legal work. Harvey, which uses generative AI models to assist lawyers, is reportedly in talks to raise more than $250 million at a $5 billion valuation.
