Stanford 'misinformation' expert hit by humiliating AI blunder
A Stanford expert on misinformation has admitted to using AI that fabricated evidence in a federal court case.
Professor Jeff Hancock, a leading expert on AI-driven misinformation and the founder of the Stanford Social Media Lab, was brought in by Minnesota Attorney General Keith Ellison to defend a state law that criminalizes election-related deepfakes. However, Hancock's own expert declaration, which was partially generated by ChatGPT, contained fabricated information.
Plaintiffs, including conservative content creator Christopher Kohls ("Mr Reagan") and Republican Minnesota Rep. Mary Franson, are challenging the 2023 Minnesota law, amended in 2024, as an unconstitutional restriction on speech. Kohls, known for parody videos that use voice cloning to mimic Vice President Kamala Harris, filed the lawsuit.
The plaintiffs' lawyers pointed out that Hancock's declaration cited a nonexistent study attributed to authors Huang, Zhang, and Wang.
Suspecting he had relied on AI to write the 12-page document, they asked the court to throw out the entire declaration, arguing it could be riddled with further inaccuracies.
Hancock later confirmed two more AI-generated 'hallucinations' in the declaration, the term for fabricated output that shows up as misinformation in text and as absurd details in generated images.
The fabrications went beyond that single citation: the AI also invented a 2023 article by De keersmaecker & Roets and attributed four nonexistent authors to other works.
In an attempt to establish his expertise, Hancock highlighted his co-authorship of a foundational paper on AI-mediated communication and his extensive research on the psychological impact of misinformation.
Professor Jeff Hancock, a leading expert on AI-driven misinformation, admitted to using AI that fabricated evidence in a federal court case
Conservative content creator Christopher Kohls ("Mr Reagan") and Republican Minnesota Rep. Mary Franson are challenging the 2023 Minnesota law criminalizing election-related deepfakes
'I have published extensively on misinformation in particular, including the psychological dynamics of misinformation, its prevalence, and possible solutions and interventions,' Hancock wrote.
Hancock used ChatGPT 4.0 to assist with his research, instructing the AI to generate academic citations for specific points. However, the tool inadvertently produced false citations and invented four 'incorrect' authors.
'The response from GPT-4.0, then, was to generate a citation, which is where I believe the hallucinated citations came from,' he wrote.
The plaintiffs accused Hancock of perjury, since he had sworn under oath to the accuracy of his cited sources. Still, Hancock insisted that his errors '[did] not impact any of the scientific evidence or opinions.'
A hearing is scheduled for December 17 to determine the fate of Hancock's declaration, while Stanford University remains silent on potential disciplinary action.
Hancock's case is not an isolated incident. In February, New York lawyer Jae Lee faced disciplinary action after citing a fabricated, ChatGPT-generated case in a medical malpractice lawsuit.
Hancock's expert declaration cited nonexistent studies and authors generated by ChatGPT
Lee was referred to the grievance panel of the 2nd US Circuit Court of Appeals after she cited a fabricated case about a Queens doctor botching an abortion in an appeal to revive her client's lawsuit.
The case was subsequently dismissed.