Two Lawyers Sanctioned for Citing Fake Cases Created by AI Chatbot

Jun 25th

In a landmark decision, a New York judge has sanctioned two lawyers for citing fake cases invented by the AI chatbot ChatGPT. The lawyers, Peter LoDuca and Steven Schwartz, had asked ChatGPT for examples of past cases to support their legal filings, and the bot fabricated the precedents outright. The lawyers slotted those made-up cases into their briefs and submitted them to the court.

The judge, P. Kevin Castel, found that the lawyers had “abandoned their responsibilities” and had “acted in bad faith” by submitting the fake cases. He ordered them to pay a $5,000 fine and to notify the real judges who were falsely identified as the authors of the fabricated opinions. He also dismissed the plaintiff’s underlying injury claim against Avianca because more than two years had passed between the injury and the lawsuit.

The lawyers’ actions have highlighted the risks of using AI tools in the legal profession. As Stephen Wu, a shareholder in Silicon Valley Law Group and chair of the American Bar Association’s Artificial Intelligence and Robotics National Institute, put it, “you can’t delegate to a machine the things for which a lawyer is responsible.”

How Did This Happen?

LoDuca and Schwartz were representing a passenger who was injured on an Avianca flight, arguing that his injuries were the result of the airline’s negligence. To support that argument, they wanted to cite past cases in which courts had ruled in favor of passengers in similar situations.

They turned to ChatGPT to find these cases. ChatGPT is a chatbot built on a large language model trained on a massive dataset of text and code. It can generate fluent prose, translate languages, and answer questions in a confident, informative tone. However, ChatGPT is also known to hallucinate – to state things that are not true, in that same confident tone.
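
To make the failure mode concrete, here is a minimal sketch of this kind of request made programmatically. The model name and prompt are illustrative assumptions – the lawyers used ChatGPT’s web interface, not the API – but the risk is identical either way:

```python
# A sketch of the kind of request at issue, made through the OpenAI Python
# SDK. The model name and prompt are illustrative assumptions; the lawyers
# used ChatGPT's web interface, not the API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model, for illustration only
    messages=[{
        "role": "user",
        "content": (
            "List federal cases holding that a bankruptcy stay tolls "
            "the Montreal Convention's two-year limitation period."
        ),
    }],
)

# The reply is fluent and confident -- but nothing in it is verified.
# Case names, citations, and quotes can all be invented ("hallucinated").
print(response.choices[0].message.content)
```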

In this case, ChatGPT generated a list of cases that did not exist. When Schwartz asked the bot whether the cases were real, it assured him that they were. The lawyers did no further checking and simply slotted the fake cases into their filings.

The Sanctions

Judge Castel was not amused. In his sanctions order, he noted that there is nothing inherently improper about lawyers using AI for assistance, but that these lawyers had continued to stand by the fake opinions even after the court questioned their existence. That persistence, as much as the original fabrication, is what turned a bad citation into bad faith.

The Implications of This Case

This case has important implications for the use of AI tools in the legal profession. It shows that lawyers cannot simply delegate their responsibilities to a machine: they must still do their own due diligence and confirm that the information they rely on is accurate.
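
What that due diligence might look like in practice: before citing a case, check that it actually exists in a real legal database. The sketch below is one hedged example; the CourtListener endpoint and response fields shown are assumptions based on its documented REST API, and any citation service would serve the same role:

```python
# A sketch of the missing due-diligence step: checking whether a cited case
# actually exists in a public case-law database before filing it. The
# CourtListener endpoint and response fields here are assumptions based on
# its documented REST API; Westlaw or LexisNexis would serve the same role.
import requests

def case_exists(query: str) -> bool:
    """Return True if the search turns up at least one real opinion."""
    resp = requests.get(
        "https://www.courtlistener.com/api/rest/v4/search/",
        params={"q": query, "type": "o"},  # type "o" = judicial opinions
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

# "Varghese v. China Southern Airlines" was one of the fabricated citations;
# a lookup like this would have come back empty.
print(case_exists('"Varghese v. China Southern Airlines"'))
```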

This case also highlights a common misunderstanding about what these tools are. ChatGPT is not a legal research database; it generates plausible-sounding text, not verified facts, and a hallucinated citation reads exactly like a real one. Lawyers need to understand the limitations of AI tools and use them with caution.

The Future of AI in the Legal Profession

This case is a setback for the use of AI in the legal profession, but it is not a death knell. AI tools have the potential to revolutionize the legal profession, but they need to be used carefully. Lawyers need to be aware of the risks and limitations of AI tools, and they need to use them in a responsible way.

As AI tools continue to develop, the legal profession will need to adapt. Lawyers will need to learn how to use these tools effectively, and they will need to be aware of the ethical implications of using AI. The future of AI in the legal profession is uncertain, but it is clear that AI will play an increasingly important role in the years to come.
