Tuesday, April 16, 2024

Lawyer says sorry for fake court citations created by ChatGPT

There’s been much talk in recent months about how the new wave of AI-powered chatbots, ChatGPT among them, could upend numerous industries, including the legal profession.

However, judging by what recently happened in a case in New York City, it seems like it could be a while before highly trained lawyers are swept aside by the technology.

The bizarre episode began when Roberto Mata sued a Colombian airline after claiming that he suffered an injury on a flight to New York City.

The airline, Avianca, asked the judge to dismiss the case, so Mata’s legal team put together a brief citing half a dozen similar cases in an effort to persuade the judge to let their client’s case proceed, The New York Times reported.

The problem was that the airline’s lawyers and the judge were unable to find any evidence of the cases mentioned in the brief. Why? Because ChatGPT had made them all up.

The brief’s creator, Steven A. Schwartz, a highly experienced lawyer at the firm Levidow, Levidow & Oberman, admitted in an affidavit that he’d used OpenAI’s much-celebrated ChatGPT chatbot to search for similar cases, but said that it had “revealed itself to be unreliable.”

Schwartz told the judge he had not used ChatGPT before and “therefore was unaware of the possibility that its content could be false.”

When creating the brief, Schwartz even asked ChatGPT to confirm that the cases really happened. The ever-helpful chatbot replied in the affirmative, saying that information about them could be found on “reputable legal databases.”

The lawyer at the center of the storm said he “greatly regrets” using ChatGPT to create the brief and insisted he would “never do so in the future without absolute verification of its authenticity.”

Judge P. Kevin Castel described the legal submission as full of “bogus judicial decisions, with bogus quotes and bogus internal citations.” Calling the situation unprecedented, he ordered a hearing for early next month to consider possible penalties.

While impressive in the way they produce flowing, high-quality text, ChatGPT and chatbots like it are also known to make things up and present them as fact, something Schwartz has learned to his cost. The phenomenon is known as “hallucinating,” and it remains one of the biggest challenges facing the developers behind these chatbots as they seek to iron out this very problematic crease.

In another recent example of a generative-AI tool hallucinating, an Australian mayor accused ChatGPT of creating lies about him, including a claim that he had been jailed for bribery while working for a bank more than a decade ago.

The mayor, Brian Hood, was actually a whistleblower in the case and was never charged with a crime, so he was rather upset when people began informing him about the chatbot’s rewriting of history.
