We’re holding AI to a standard to which we’ve never held humans
By Brandon Epstein
This article is part of AI in Digital Forensics, a blog series exploring the impact AI is having in the world of digital investigations.
Key insights
- Focusing on AI error rates ignores the reality that there’s always been a margin of error in investigations
- AI improves evidence collection, shifting investigative time to higher-value analysis
- Designed right, AI could become a trusted tool in the digital forensics toolbox
Digital forensic investigators face extreme pressure to deliver accurate results. The stakes in the field are especially high; an error could mean overlooking potential suspects or missing exculpatory evidence. That’s why the use of artificial intelligence in the field isn’t without controversy. Broader societal concerns around the “hallucinations” and inconsistent results produced by generative AI have raised questions about the use of AI in digital forensic investigations.
AI isn’t a threat to digital forensic investigations. Instead, it presents a significant opportunity to improve upon current practices and to better understand error rates, while keeping human beings focused on what matters most: evaluating evidence to ensure justice is served.
There’s a double standard at the center of the debate
While concerns about the current shortcomings of AI are valid, they highlight a double standard in how we view error rates, inconsistencies, and training in AI versus humans.
We know intuitively that humans won’t always uncover 100% of the case-relevant data. But there is simply no research or testing on how accurate or comprehensive humans are at surfacing all pertinent data, and efforts to empirically assess human error rates in the field aren’t really feasible.
We know that the sheer volume of complex digital data, organizational factors, cognitive bias, burnout, and other circumstances can lead human investigators to make mistakes. Nonetheless, their opinions are (rightfully) used and trusted in legal proceedings every day.
In a complex investigation, we don’t expect two humans looking at the same dataset to return completely identical results. What’s more, when looking at the same dataset with fresh eyes, each of those individuals may come to the same investigative decisions, but surface completely different artifacts to get there. Inconsistencies among investigators aren’t necessarily a reason to disqualify evidence.
And finally, when the testimony of a human investigator is introduced in court, they’re invariably asked about their education, training, and expertise—but only at a high level. Meanwhile, AI functions are subjected to far more granular scrutiny around source code and specific training data. In practice, we demand more transparency about how AI systems were trained than about how human investigators were.
AI isn’t the biggest risk in digital forensics, but not using AI could be
Given the double standard between AI and humans on matters of error, inconsistency and training, it’s worth highlighting the benefits of incorporating AI in digital forensic investigations. Properly designed, AI can significantly enhance human judgment, even if it should never replace it.
The ability of AI to drive investigative efficiencies at a time when the volume of digital evidence is skyrocketing is clear:
- AI enables investigators to go through much more data much more quickly, and to spot patterns that would otherwise go unnoticed, while pointing the investigator to the source(s) of those patterns.
- Reducing drudgery and time spent hunting for evidence alleviates burnout.
- Freed from routine collection tasks, skilled examiners can spend more time expertly evaluating evidence through deep-dive examinations.
The conversation around AI in digital investigations has moved well beyond speculation. In fact, the 2026 State of Enterprise DFIR report found that more than two-thirds of DFIR professionals are already using AI in their day-to-day investigative work—more than triple the adoption rate from two years earlier. That level of uptake makes one thing clear: AI is becoming a standard investigative tool. The focus now is on responsible use, explainability, and validation by experienced investigators.
At a broader level, AI actually presents an opportunity to measure, improve, and standardize evidence collection beyond what has been possible in the past. That will involve:
- designing AI systems on known datasets to establish error rates
- building systems that prioritize transparency and reliability and incorporate bias mitigation
- continuously testing and refining those systems
We should build trust in AI… while clarifying its role in investigations
As with any profession, digital forensic investigators gain the trust and confidence of their colleagues over time. Most have worked very hard to establish a good reputation. We propose approaching the use of AI in digital forensics in the same way. By consistently demonstrating that these technologies surface the correct evidence, concerns around training data, bias, and explainability should subside over time, and the technology is likely to become an accepted tool in the human investigator’s toolbox.
What’s also key: emphasizing that humans will remain at the center of digital forensic investigations.
We’ll explore how AI can make their job easier in another article.
Brandon Epstein, Technical Forensics Specialist at Magnet Forensics, is a former police detective and co-founder of Medex Forensics, which Magnet acquired in 2024. Brandon specializes in AI and media authentication and is active in many digital forensic community organizations.
Watch Brandon Epstein’s AI Unpacked webinar series on how responsible AI is shaping the future of digital forensics.