A straightforward wrongful termination claim resulted in a $2,000 sanction against a lawyer representing a former employee. Why? Inartful use of artificial intelligence (AI).
Reading the Rules
The federal judges covering the Eastern District of Texas (the eastern part of our state, roughly from Beaumont to Texarkana to Plano) issued a rule on the use of AI by lawyers in their courts. Here it is:
If a lawyer, in the exercise of his or her professional judgment, believes that the client is served by the use of technologies (e.g., ChatGPT, Google Bard, Bing AI Chat, or generative artificial intelligence services), then the lawyer is cautioned that certain technologies may produce factually or legally inaccurate content and should never replace the lawyer’s most important asset—the exercise of independent legal judgment. If a lawyer chooses to employ technology in representing a client, the lawyer continues to be bound by Federal Rule of Civil Procedure 11 . . . and must review and verify any computer-generated content or ensure that it complies with all such standards.
Oh, and Federal Rule 11? It states that a lawyer’s signature on a court-filed document certifies that “the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or establishing new law.”
All well and good until these rules collide with AI “hallucinations”—that is, stuff that generative AI just makes up out of thin air, like a mirage in the desert! Using AI is fine, but a lawyer must check to see if—like a bad dream—hallucinations creep into a document they file with the court.
Oops!
Well, the lawyer representing the employee didn’t check, and two types of hallucinations ended up in his briefing. First, cases that never existed. Here they are: Roca v. King’s Creek Plantation, LLC, 500 F. App’x 273, 276 (5th Cir. 2012) (unpublished) and Beets v. Texas Instruments Inc., No. 94-10034, 1994 WL 714026, at *3 (5th Cir. Dec. 16, 1994) (unpublished). Seem real, don’t they? 100% made up.
The second type of AI hallucination consisted of made-up quotes from real cases. One case hit the Daily Double with both a fake citation and a hallucinated quote.
Consequences!
The company’s lawyers looked over the brief very carefully and discovered the hallucinations. They told the employee’s lawyer. His response? Crickets. Per the court: “This silence is deafening.” The court imposed these sanctions:
- The lawyer was required to pay $2,000 into the registry of the court.
- He was ordered to attend a continuing legal education course on AI.
- He was ordered to provide a copy of the court’s sanctions opinion to his client.
Gauthier v. Goodyear Tire & Rubber Co., No. 1:23-CV-281 (E.D. Tex. Nov. 24, 2024).
Bottom Line
As the saying goes, attributed to Irish-American writer Finley Peter Dunne through his character Mr. Dooley, “Thrust ivrybody—but cut th’ ca-ards.” Check all citations and quotes in the opposing lawyer’s brief. You never know. And the loss to the lawyer isn’t so much that he must fork over money, but that he loses critical credibility with the judge.
Michael P. Maslanka is a professor at the UNT-Dallas College of Law. You can reach him at michael.maslanka@untdallas.edu.