US Attorney Embarrassed by AI-Generated Court Rulings
When a U.S. attorney used ChatGPT to help draft a court filing, the outcome proved embarrassing: the artificial intelligence program fabricated cases and verdicts, leaving the lawyer rather red-faced.
The Blunder
New York-based lawyer Steven Schwartz apologized to a judge this week for submitting a brief full of falsehoods generated by the OpenAI chatbot.
The Case
The blunder occurred in a civil case being heard in Manhattan federal court, in which a man is suing the Colombian airline Avianca.
The Fabrication
After the airline’s lawyers asked the court to dismiss the case, Schwartz filed a response that claimed to cite more than half a dozen decisions supporting his argument that the litigation should proceed.
They included Petersen v. Iran Air, Varghese v. China Southern Airlines and Shaboon v. Egyptair. The Varghese case even included dated internal citations and quotes.
There was one major problem, however: neither Avianca’s attorneys nor the presiding judge, P. Kevin Castel, could find the cases.
The Admission
Schwartz was forced to admit that ChatGPT had invented all of it.
The Apology
In a filing on Tuesday, ahead of the hearing, Schwartz said that he wanted to “deeply apologize” to the court for his “deeply regrettable mistake.”
He said his college-educated children had introduced him to ChatGPT, and it was the first time he had ever used it in his professional work.
The Fallout
The judge ordered Schwartz and his law partner to appear before him to face possible sanctions.
Schwartz said he and his firm, Levidow, Levidow & Oberman, had been “publicly ridiculed” in the media coverage.
The Lesson
Schwartz added: “This matter has been an eye-opening experience for me, and I can assure the court that I will never commit an error like this again.”