
© Indic Pacific Legal Research LLP. 

The works published on this website are licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International.

For articles published in TechinData.in, you may refer to the editorial guidelines for more information.

When AI Expertise Meets AI Embarrassment: A Stanford Professor's Costly Citation Affair

Updated: Nov 16

In a development that underscores the perils of AI in legal proceedings, a Stanford University professor's expert testimony was recently excluded by a Minnesota federal court after it was discovered that his declaration contained fake citations generated by AI.


The case, Kohls v. Ellison, which challenges Minnesota's deepfake law, has become a cautionary tale about the intersection of artificial intelligence and legal practice.


Professor Jeff Hancock, Director of Stanford's Social Media Lab and an expert on AI and misinformation, inadvertently included AI-hallucinated citations in his expert declaration. The irony was not lost on Judge Laura M. Provinzino, who noted that an AI misinformation expert had "fallen victim to the siren call of relying too heavily on AI—in a case that revolves around the dangers of AI, no less."


The incident has sparked broader discussions about evidence reliability, professional responsibility, and the need for robust verification protocols in an era where AI tools are increasingly common in legal practice.


This legal-policy analysis examines the incident and, treating it as one of a growing number of similar episodes, considers what it teaches us about how AI-related considerations in evidence law should be approached.

