
When AI Expertise Meets AI Embarrassment: A Stanford Professor's Costly Citation Affair

By Abhivardhan


In a development that underscores the perils of AI in legal proceedings, a Stanford University professor's expert testimony was recently excluded by a Minnesota federal court after it was discovered that his declaration contained fake citations generated by AI.


The case, Kohls v. Ellison, which challenges Minnesota's deepfake law, has become a cautionary tale about the intersection of artificial intelligence and legal practice.


Professor Jeff Hancock, Director of Stanford's Social Media Lab and an expert on AI and misinformation, inadvertently included AI-hallucinated citations in his expert declaration. The irony was not lost on Judge Laura M. Provinzino, who noted that an AI misinformation expert had "fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less."


The incident has sparked broader discussions about evidence reliability, professional responsibility, and the need for robust verification protocols in an era where AI tools are increasingly common in legal practice.


This legal-policy analysis therefore examines the incident and, since it is one of many similar episodes, considers what lessons it offers for how we should approach AI-related questions in the law of evidence.


The Ironic Incident


Figure 1: An excerpt from the Order in Kohls v. Ellison.

The deepfake-related lawsuit in Minnesota took an unexpected turn with the filing of two expert declarations—one from Professor Jevin West and another from Professor Jeff Hancock—submitted on behalf of Attorney General Keith Ellison in opposition to a motion for a preliminary injunction. As the court noted, “[t]he declarations generally offer background about artificial intelligence (“AI”), deepfakes, and the dangers of deepfakes to free speech and democracy.”