UPDATED 18:46 EDT / MAY 28 2023

Lawyer’s reliance on ChatGPT leads to false case citations in airline lawsuit

A New York lawyer has found himself in trouble in a lawsuit between a man and the airline Avianca Holdings S.A. after presenting nonexistent case citations generated by ChatGPT.

The case involved a man named Roberto Mata suing Avianca, claiming he was injured when a metal service cart struck his knee during a flight. Injury claims are typically unremarkable, broader cultural questions about American litigiousness aside, but the case took an interesting twist after the airline moved to have it dismissed.

The New York Times reported Saturday that in response to the filing, lawyers representing Mata submitted a 10-page brief citing more than a half-dozen relevant court cases, arguing that the cases show the “tolling effect of the automatic stay on a statute of limitations.”

There was one huge problem, however: none of the cases was genuine. The lawyer who created the brief, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, had used OpenAI LP’s ChatGPT to write it.

Schwartz, who is said to have practiced law for three decades, defended himself, claiming that he wasn’t aware of the AI’s potential to generate false content. Schwartz told Judge P. Kevin Castel that he had no intent to deceive the court or the airline and vowed not to use ChatGPT again without thorough verification. The unusual situation prompted the judge to call a hearing on potential sanctions against Schwartz, describing the incident as an “unprecedented circumstance” filled with “bogus judicial decisions.”

The incident has sparked discussion in the legal community about the value and risks of AI. Stephen Gillers, a legal ethics professor at New York University School of Law, told the Times that the case highlights that legal professionals can’t simply take the output from an AI and incorporate it into court filings. “The discussion now among the bar is how to avoid exactly what this case describes,” Gillers added.

The case sets a precedent for the role AI plays in legal research and argument construction, and it has raised serious concerns about the reliability of AI tools in the legal profession. It also underscores the hazards of trusting AI outputs without independent verification, not only in court filings but in general use.

Photo: Pixabay
