Law clerk fired over ChatGPT use after firm’s filing used AI hallucinations

https://arstechnica.com/tech-policy/2025/06/law-clerk-fired-over-chatgpt-use-after-firms-filing-used-ai-hallucinations/

Ashley Belanger Jun 02, 2025 · 4 mins read
College students who reportedly grew too dependent on ChatGPT are starting to face consequences for placing too much trust in chatbots after graduating and joining the workforce.

Last month, a recent law school graduate lost his job after using ChatGPT to help draft a court filing that ended up being riddled with errors.

The consequences arrived after a court in Utah ordered sanctions when the filing was found to contain the first AI-hallucinated fake citation ever discovered in the state.

Also problematic, the Utah court found that the filing included "multiple" mis-cited cases, in addition to "at least one case that does not appear to exist in any legal database (and could only be found in ChatGPT)."

Douglas Durbano, a lawyer involved in the filing, and Richard Bednar, the attorney who signed and submitted the filing, should have verified the accuracy before any court time was wasted assessing the fake citation, Judge Mark Kouris wrote in his opinion.

"We emphasize that every attorney has an ongoing duty to review and ensure the accuracy of their court filings," Kouris wrote, noting that the lawyers "fell short of their gatekeeping responsibilities as members of the Utah State Bar when they submitted a petition that contained fake precedent generated by ChatGPT."

The fake citation might have been easily caught had a proper review process been in place. When Ars prompted ChatGPT to summarize the fake case, "Royer v. Nelson, 2007 UT App 74, 156 P.3d 789," the chatbot provided no details other than claiming that "this case involves a dispute between two individuals, Royer and Nelson, in the Utah Court of Appeals," which raises red flags.

Apologizing and promising to "make amends," the law firm told the court that the law school grad had been working as an unlicensed law clerk and had not notified anyone of his ChatGPT use. At the time, the firm had no AI policy that might have kept the fake legal precedent out of the filing. After the discovery, the lawyers reassured the court that a new policy had been established, and Bednar's lawyer, Matthew C. Barneck, told ABC4 that the law clerk was fired, even though no "formal or informal" policy had discouraged the improper AI use.

Fake citations can cause significant harms, Kouris noted, including spiking costs to opposing attorneys and the court, as well as depriving clients of the best defense possible. But Kouris pointed out that other lawyers who have been caught using AI to cite fake legal precedent in court have wasted even more resources by misleading the court and denying the AI use or claiming fake citations were simply made in error.

Unlike those lawyers, Bednar and Durbano accepted responsibility, Kouris said, so while sanctions were "warranted," he remained "mindful" that the lawyers had moved to resolve the error quickly. Ultimately, Bednar was ordered to pay the opposition's attorneys' fees, as well as donate $1,000 to "And Justice for All," a legal aid group providing low-cost services to the state's most vulnerable citizens.

College students rely too much on ChatGPT

Barneck told ABC4 that it's common for law clerks to be unlicensed, but little explanation was given for why an unlicensed clerk's filing wouldn't be reviewed.

Kouris warned that "the legal profession must be cautious of AI due to its tendency to hallucinate information," and the growing pains of adjusting to increasingly common AI use in the courtroom will likely include law firms educating recent graduates on AI's well-known flaws.

And it seems law firms may have their work cut out for them there.

College teachers recently told 404 Media that their students put too much trust in AI. According to one, Kate Conroy, even the "smartest kids insist that ChatGPT is good 'when used correctly,'" but they "can’t answer the question [when asked] 'How does one use it correctly then?'"

"My kids don’t think anymore," Conroy said. "They try to show me 'information' ChatGPT gave them. I ask them, 'How do you know this is true?' They move their phone closer to me for emphasis, exclaiming, 'Look, it says it right here!' They cannot understand what I am asking them. It breaks my heart for them and honestly it makes it hard to continue teaching."

Ars could not immediately reach Bednar or And Justice for All for comment.