The artificial intelligence (AI) tool ChatGPT has garnered significant attention since its introduction. Known for its ability to complete a wide range of tasks, such as generating work emails with a specific tone, style, and set of instructions, the tool has recently caused a stir in the legal field. A New York lawyer from the firm Levidow, Levidow & Oberman found himself facing a court hearing after using the AI tool for legal research. This article examines the incident, its repercussions, and the attorney's defense.
1. Unearthing an “Unprecedented Circumstance”
The incident came to light when a court filing was found to cite legal cases that did not exist, drawing the attention of the presiding judge. Noting that an AI tool had been used for legal research, the judge remarked that this presented the court with an unprecedented circumstance. The lawyer involved claimed he had been unaware that the AI tool's output could be misleading or false.
2. The Initial Case and Its Unraveling
Originally, the case revolved around an individual who had filed a personal injury lawsuit against an airline. The plaintiff’s legal team submitted a brief that cited multiple previous court cases to establish a precedent, thereby justifying the continuation of their lawsuit. However, the airline’s legal representatives responded with a letter to the judge, stating that they were unable to locate some of the cited examples in the brief.
3. Judge’s Investigation Uncovers Bogus Cases
Prompted by the airline’s lawyers, Judge Castel requested an explanation from the plaintiff’s legal team. In his order, he noted that six of the submitted cases appeared to be fabricated, complete with false quotes and internal citations. It later emerged that the research had not been conducted by the lawyer of record, Peter LoDuca, but by one of his colleagues at the firm. Steven A. Schwartz, a lawyer with over three decades of experience, had used ChatGPT to find cases similar to the one at hand.
4. Lawyer’s Regret and Pledge for Authenticity
Steven A. Schwartz released a statement confirming that Mr. LoDuca had played no part in the research and had no knowledge of how it was conducted. Expressing deep remorse, he admitted to using ChatGPT for legal research without fully appreciating that it could produce false information. Schwartz vowed never again to rely on AI to supplement his legal research without absolutely verifying its authenticity.
5. Twitter Thread Exposes AI-Generated Findings
Amid the controversy, a Twitter thread went viral showing a conversation between Mr. Schwartz and ChatGPT. The attorney had asked about the authenticity of a specific case, “Varghese v. China Southern Airlines Co Ltd.” ChatGPT responded that the case was “real” and supplied the citation “925 F.3d 1339 (11th Cir. 2019).” When Mr. Schwartz asked about the source, ChatGPT, after “double checking,” assured him that the case could be found on reputable legal research platforms such as LexisNexis and Westlaw.
The controversial use of the AI tool ChatGPT for legal research has thrust the legal community into a state of introspection. The incident underscores the importance of verifying the authenticity and reliability of information obtained through AI platforms. While the lawyer involved expressed regret and pledged caution in the future, the case serves as a reminder of the risks and challenges that come with integrating AI into legal practice, and it prompts us to consider the safeguards needed to prevent the dissemination of false or fabricated information.