Courts are facing a growing challenge as AI-generated images, audio, and video—often described as deepfakes—become easier to create and harder to verify. Judges and legal experts are warning that the justice system is not yet equipped to handle a surge of disputes over whether digital evidence is authentic or manipulated.
In the U.S., the issue is surfacing in several ways: experts say even everyday evidence such as phone recordings, security footage, and photos can now be questioned, and judges are penalizing lawyers whose filings contained fabricated, AI-generated citations. At the same time, federal rulemakers are working on proposals that could shape how AI-generated or machine-made evidence is treated in court.
Judges sanction AI mistakes in filings
A Kansas federal judge fined several attorneys a combined $12,000 after court filings in a patent case included fabricated quotations and case citations generated by artificial intelligence. The ruling said that although one lawyer used AI and inserted the inaccurate material, the other lawyers who signed the filings were also responsible for failing to review them properly.
The sanctioned attorneys represented Lexos Media IP in a patent infringement case against online retailer Overstock.com, according to the ruling. The judge, U.S. District Judge Julie Robinson in Kansas City, wrote that lawyers should understand the risks of using unverified generative AI for legal research and the ethical duty to ensure filings are accurate.
The decision described AI-related errors in legal filings as a growing problem, pointing to the risk that generative AI can produce false information, often called “hallucinations.” It also noted that the court had ordered the attorneys in the case to explain why sanctions should not be imposed after nonexistent citations and quotations were found in multiple documents.
Federal courts weigh new evidence rules
A federal judicial committee agreed to move forward with developing a rule to address evidence produced by machine learning and to begin work on guidance for claims that audio or video evidence might be a deepfake. The decision came during a New York meeting of the U.S. Judicial Conference’s Advisory Committee on Evidence Rules, Reuters reported.
The committee’s discussion reflected a broader push by courts to respond to generative AI tools capable of producing text, images, and video. Reuters reported that Chief Justice John Roberts, in an annual report issued Dec. 31, said AI could bring benefits to litigants and judges while emphasizing the need for the judiciary to consider appropriate uses of the technology.
The proposed approach discussed by the committee would focus on the reliability of methods used by computer technologies to produce predictions or inferences from existing data. Reuters reported that one idea under consideration would require some machine-generated evidence to meet standards similar to the reliability rules that govern expert testimony under Rule 702 of the Federal Rules of Evidence.
Committee members also discussed whether courts could face a wave of claims that audio or video evidence is fake, with U.S. Circuit Judge Richard Sullivan expressing doubt that a “tsunami” of such challenges is coming. Still, Reuters reported that members agreed it could be useful to consider drafting a possible rule so the judiciary is not caught unprepared if deepfakes become a major courtroom issue.
Why deepfakes worry investigators and juries
Digital forensics experts told Axios that as deepfakes become more common, traditional media evidence—photos, videos, and audio—can no longer be assumed reliable in the way many people expect. Axios reported that the court system is struggling to adapt, in part because there are not enough forensic analysts available to authenticate evidence that may have been altered by AI.
Axios described how AI-generated evidence could surface in different types of disputes, such as a divorce case in which a photo’s background is altered to make a situation look dangerous, or a homicide case in which a deepfake video appears to place someone at a crime scene. Axios also reported an example of a fabricated audio clip that led to a wrongful termination claim in Baltimore County.
Experts told Axios that as AI improves, it may become increasingly difficult—possibly even impossible—to prove that a piece of media was altered. Axios reported that courts may need to rely more on digital forensics professionals to identify signs of manipulation, but even expert explanations may not always persuade a jury beyond a reasonable doubt.
What lawyers and the public can do now
One concern raised to Axios is that tools for detecting AI manipulation can function like “black boxes,” making it hard to explain how a conclusion was reached and leaving room for disagreement about what the tools actually show. Axios reported that investigators are trying different methods to assess authenticity, including analyzing supporting details and artifacts tied to the original material.
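As a rough illustration of what those artifact checks can involve, the sketch below uses Python with the Pillow library to read a photo’s embedded EXIF metadata (capture time, camera model, editing software) and to compute a file hash that can later show whether a copy matches the original. The filename and workflow are hypothetical and not drawn from the Axios reporting.

```python
# A minimal sketch (assumed workflow, not from the reporting): fingerprint an
# original photo and list its embedded EXIF metadata using the Pillow library.
import hashlib

from PIL import Image
from PIL.ExifTags import TAGS


def inspect_original(path: str) -> None:
    # Hash the raw bytes so a later copy can be checked against the original file.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    print(f"SHA-256: {digest}")

    # Read EXIF tags such as capture time, camera model, and editing software.
    # Re-saving, screenshotting, or sending a file through a messaging app often
    # strips or rewrites this data, which is one reason originals matter.
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")


if __name__ == "__main__":
    inspect_original("disputed_photo.jpg")  # hypothetical filename
```

None of this proves a file is authentic on its own, but preserved hashes and intact metadata give forensic analysts something concrete to compare against when a court asks whether a copy has been altered.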
Axios also reported advice to keep original files—such as voicemails, texts, and photos—because metadata and other supporting artifacts can be important when authenticity is questioned. As judges, rulemakers, and investigators confront the same core problem—trust in digital evidence—the debate is likely to intensify over how courts should screen AI-generated evidence and how lawyers should use AI tools responsibly.
