Your Next Expert Could Be A Robot:  Artificial Intelligence and Proposed Federal Rule of Evidence 707

Recently, the Advisory Committee on Evidence Rules proposed a new rule to address the admissibility of AI-generated evidence.  This has become an increasingly pressing issue for courts nationwide, as parties seek to introduce AI-generated evidence such as:

- machine output analyzing stock trading patterns to establish causation;

- analysis of digital data to determine whether two works are substantially similar in copyright litigation; and

- machine learning that assesses the complexity of software programs to determine the likelihood that code was misappropriated.

These AI outputs are often generated without significant human involvement and presented without substantiating expert testimony.  This can cause various problems, including use of the process for unintended purposes (function creep); analytical error or incompleteness; inaccuracy or bias built into the underlying data or formulas; and lack of interpretability of the machine’s process.

Courts have struggled to determine the admissibility of such evidence, particularly because it often lacks indicia of reliability.  As the Committee notes:  “As to machine learning, the concern is that it might be unreliable, and yet the unreliability will be buried in the program and difficult to detect.”  Indeed, even the developers of generative AI platforms often cannot explain how their models reach particular results.

To address this problem, the Committee has proposed Rule 707.  That rule provides:

When machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702(a)-(d). This rule does not apply to the output of basic scientific instruments.

Rule 702 requires expert testimony to be based on “sufficient facts or data” and result from “reliable principles and methods.”  It also requires the expert’s opinion to “reflect[] a reliable application of the principles and methods to the facts of the case.”

This solves part of the problem.  Holding AI-generated evidence to the Rule 702 standard is better than no standard at all.  And proponents of AI-generated evidence may be able to document their inputs to the AI platform well enough to demonstrate that the output is based on “sufficient facts or data.”

However, if no one can explain how the AI produced the evidence, it is hard to see how proponents can show that the output results from “reliable principles and methods.”  And how could anyone show that the output “reflects a reliable application of the principles and methods to the facts of the case”?

Despite these concerns, the Committee voted 8–1 to publish Rule 707 for public comment.  (The Department of Justice opposed publication.)  The Committee emphasized that publication is exploratory, not a presumption of adoption.

The public comment period for Proposed Rule 707 is now open.  It closes February 16, 2026.
