
In a development that underscores the perils of AI in legal proceedings, a Stanford University professor's expert testimony was recently excluded by a Minnesota federal court after it was discovered that his declaration contained fake citations generated by AI.
The case, Kohls v. Ellison, which challenges Minnesota's deepfake law, has become a cautionary tale about the intersection of artificial intelligence and legal practice.
Professor Jeff Hancock, Director of Stanford's Social Media Lab and an expert on AI and misinformation, inadvertently included AI-hallucinated citations in his expert declaration. The irony was not lost on Judge Laura M. Provinzino, who noted that an AI misinformation expert had "fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less."
The incident has sparked broader discussions about evidence reliability, professional responsibility, and the need for robust verification protocols in an era where AI tools are increasingly common in legal practice.
This legal-policy analysis examines the incident and, because it is one of a growing number of similar episodes, considers what it teaches about how courts and practitioners should approach AI-related questions in evidence law.
The Ironic Incident

The deepfake-related lawsuit in Minnesota took an unexpected turn with the filing of two expert declarations—one from Professor Jevin West and another from Professor Jeff Hancock—on behalf of Attorney General Keith Ellison in opposition to a motion for a preliminary injunction. As the court noted, "[t]he declarations generally offer background about artificial intelligence ('AI'), deepfakes, and the dangers of deepfakes to free speech and democracy."