Just about every week, someone asks one of us how our work is being affected by artificial intelligence (“AI”), and especially by the large language models (“LLMs”), such as ChatGPT, that have become prevalent in everyday life. As commercial litigators, we find that AI has limited application to the work that we do. Certainly, it can be helpful and important in some mechanical tasks, such as document review. However, the current technology is far from being able to assist us, let alone replace us, in most of what we do. In fact, relying on LLMs to do much of our work, or even to help with it, is dangerous and can be unethical. There have been numerous recent examples that demonstrate the danger and show how far the technology must advance before commercial litigation lawyers can safely rely on LLMs.
Two years ago, an incident in the American justice system garnered a lot of well-deserved attention. It started as a routine personal injury claim. A man sued the airline Avianca after being hit in the knee with a cart during a flight to New York. The airline brought a motion to dismiss the claim on the basis that “the statute of limitations had expired.”
In opposing the motion, the plaintiff’s lawyer submitted a comprehensive brief that cited multiple decisions involving airlines. The decisions were on the money and were very supportive of his client’s case, including “Martinez v. Delta Air Lines” and “Zicherman v. Korean Air Lines.” One case that was particularly helpful to the plaintiff was “Varghese v. China Southern Airlines,” which contained good information on “the tolling effect of the automatic stay on a statute of limitations.”
There was only one problem – the airline’s lawyer could not find any of those decisions. Neither could the judge.
When asked to explain himself, the lawyer candidly admitted that he didn’t write the brief. And neither did anyone else, for that matter. The culprit was AI. The lawyer admitted that he used ChatGPT to do his legal research for him. To be safe, he asked the program to confirm that the cases were, indeed, real. According to ChatGPT, they were all genuine and were good law. But that was not true. The cases were all fabricated, an error known in AI as a “hallucination.”
This incident was very unfortunate for a number of reasons. First and foremost, it was sad to hear that a lawyer who had been practicing for 30 years had a machine do his work for him without even checking it himself. In doing so, he did a disservice to his client, the court, and the legal profession. The subsequent fines, disciplinary hearings, and fallout were felt throughout the legal community. That particular case (Mata v. Avianca) garnered international headlines and served as a strong cautionary tale for lawyers (and all professionals, for that matter) on the risks of having AI do our work for us. Unfortunately, the lesson was short-lived.
Last year, a similar incident happened in British Columbia in a family law case. Zhang v. Chen involved a child custody dispute between two divorced parents. The children’s father brought an application to allow him to travel with his children to China. In support of the application, the father’s lawyer, Chong Ke, filed materials that relied on two cases. The cases involved instances where parents were permitted to travel overseas with their children in similar circumstances. Both decisions were very helpful to the applicant’s case.
After the record was served and filed, the mother’s lawyer, Fraser MacLean, advised Ms. Ke that the cases could not be found. In response, Ms. Ke’s office sent a letter stating that she apologized for the “incorrectness of the case laws referenced” in the application record and that she intended to rely on other cases instead. Mr. MacLean objected and continued to request copies of the cases that were cited, as he could not locate them.
Before the matter was heard, Ms. Ke wrote an email to Mr. MacLean and the court stating that she had made a “serious mistake” in preparing her court materials. Namely, she had asked ChatGPT to suggest cases for her application and then cited them “without verifying the source of information.” The email was highly apologetic and remorseful, and it stated that she was not aware of the risks of using AI to prepare court materials.
After the application was determined, Justice Masuhara held that a portion of the adverse costs were to be paid by Ms. Ke personally. In the decision, the judge noted that “citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court. Unchecked, it can lead to a miscarriage of justice.”
However, Justice Masuhara also accepted that Ms. Ke did not intend to mislead or deceive the court. He also recognized that she was genuinely apologetic and accepted her explanation that she was naïve about the dangers of using AI.
In the end, he declined to order “special costs” against Ms. Ke, as her conduct was not found to be reprehensible or an abuse of process. He also noted the “significant negative publicity to which she has been subjected” as a mitigating factor. However, some costs were awarded against her as a result of the extra steps that Mr. MacLean had to take to remedy the confusion that the fake cases presented.
Ms. Ke was also ordered to review all of her files before the court and advise as to whether any of them contained cases obtained by AI tools. She is also facing an investigation by the Law Society. Nonetheless, the consequences to Ms. Ke were likely substantially mitigated by her admission to the court and counsel prior to the hearing that the case law did not exist.
Unfortunately, some lawyers are still not getting the message, and a similar but worse situation happened recently before the Ontario courts.
In Ko v. Li, a widowed spouse brought a court application to invalidate her divorce from her deceased husband and sought claims against his estate. Justice Myers heard the application. In his written decision, before addressing the substantive issues, Justice Myers wrote that he first needed “to deal with a serious issue that arose at the hearing.”
The applicant’s factum cited two cases that dealt with setting aside a divorce. The applicant’s lawyer, Jisuh Lee, a lawyer with 30 years of experience, relied on those cases in her oral argument during the hearing. The cases were cited with hyperlinks, but the links did not lead to the cited decisions, and one of them led nowhere at all. When he was unable to find the cases, Justice Myers asked Ms. Lee whether she had used an AI tool to prepare her factum. Ms. Lee stated that she did not know and would have to check with her clerk. This response, which implied that the lawyer had a clerk prepare her factum, likely did not help Ms. Lee.
After the hearing, Justice Myers reviewed the factum again and found even more issues with the cases cited. One of the cases linked to a different decision that had nothing to do with the applicable subject matter. For another decision, the factum stated that it stood for a particular proposition when, in fact, the case itself said the opposite.
Justice Myers concluded that the factum was likely prepared by an AI tool and that the applicant’s lawyer did not check whether the cases were accurate, or even whether they existed, before filing the factum with the court. In response to this issue, the court noted that:
- All lawyers have duties to the court, to their clients, and to the administration of justice.
- It is the lawyer’s duty to faithfully represent the law to the court.
- It is the lawyer’s duty not to fabricate case precedents and not to mis-cite cases for propositions that they do not support.
- It is the lawyer’s duty to use technology, conduct legal research, and prepare court documents competently.
- It is the lawyer’s duty to supervise staff and review material prepared for her signature.
- It is the lawyer’s duty to ensure human review of materials prepared by non-human technology such as generative artificial intelligence.
- It should go without saying that it is the lawyer’s duty to read cases before submitting them to a court as precedential authorities. At its barest minimum, it is the lawyer’s duty not to submit case authorities that do not exist or that stand for the opposite of the lawyer’s submission.
- It is the litigation lawyer’s most fundamental duty not to mislead the court.
- The court must quickly and firmly make clear that, regardless of technology, lawyers cannot rely on non-existent authorities or cases that say the opposite of what is submitted.
- With the sudden advent of AI, this has quickly become a very important issue.
Justice Myers also wrote that this situation was worse than Zhang v. Chen because, in that case, the lawyer caught her mistake before the hearing, apologized, and withdrew the cases. This situation was different. Ms. Lee relied on fake cases in open court. After the hearing ended, she did not reach out to the court to correct the mistake or even acknowledge it.
Justice Myers ordered Ms. Lee to attend before him and show cause why she should not be cited for contempt of court. Prior to the contempt hearing, Ms. Lee sent a letter to the court in which she admitted that her factum had been prepared by her staff using ChatGPT and that the cases were hallucinated. Ms. Lee apologized and undertook to complete professional development training in legal ethics and technology, with a specific focus on the professional use and risks of AI tools in legal practice. She also filed a revised factum with the fake cases removed.
Justice Myers determined that Ms. Lee should not be found in contempt, given that she fully disclosed what happened, took responsibility, apologized, and committed to educating herself on the risks of AI in legal practice. He ordered that she not bill her client for the time spent on the research, the factum writing, and the attendance at the hearing. Justice Myers also noted that the negative publicity Ms. Lee received from this ordeal would be punishment enough.
But the matter may not be over yet. It is possible that the Law Society will open an investigation based on what transpired.
In these cases, all of the experienced lawyers involved opted to have AI create documents for them, which were used in court proceedings. They did not verify the information themselves. As a result, they tried to make arguments in court based on fake case law. These issues, as noted by Justice Myers, go to the fundamental professional and ethical obligations that lawyers have, especially to the court. As a practical matter, these issues negatively impact the lawyer’s client and the lawyer’s professional reputation.
Using a tool that is known to make mistakes and “hallucinate” case law is akin to having another person do our work for us with full knowledge that they are prone to making mistakes and making things up. Worse still, despite knowing that such errors are likely, we would not bother to check the work ourselves before putting our names on the document and signing it to verify its accuracy.
This is not to suggest that AI will not, and does not, have a place in the legal world. Lawyers should take the time to acquaint themselves with AI tools and find ways to integrate them into practice. See Daniel Waldman’s column on how to thrive in the age of legal AI for advice on that topic.
But lawyers need to remember that they are obligated to perform their jobs honestly and with integrity. Pawning our work off on unreliable tools is a disservice both to our clients and to the profession at large. Lawyers need to understand what current AI technology can do and, most importantly, what it cannot do. We are a long way from AI replacing commercial litigation lawyers or doing most of their work. Lawyers who remain ignorant of this fact will continue to get themselves into trouble.
In both the Zhang v. Chen and Ko v. Li decisions, the offending lawyers were able to escape serious punishment because they took responsibility after the fact and suffered reputational damage as a result of their actions. However, the next time something like this happens, it is likely that judges will stop showing leniency, and lawyers will no longer get off with a warning.
Related Services:
Artificial Intelligence | Commercial & Business Litigation
About the Authors:
Daniel Waldman is Of Counsel in the firm’s Toronto office. He has a broad commercial litigation practice with an emphasis on real property litigation, including commercial leasing, commercial real estate, construction law, and debt collection. Daniel can be reached at 416-644-2838 or [email protected]. To read his full bio, please click here.
Brian Radnoff is Chair of the Canadian Litigation practice in Dickinson Wright’s Toronto office. He handles a broad range of commercial litigation at both the trial and appellate levels. Brian has extensive experience in securities and shareholder disputes, defamation cases, professional liability matters, estate litigation, class actions, administrative law, competition litigation, employment disputes, as well as insolvency and receiverships. He can be reached at [email protected]. His full bio can be viewed here.