Proving what’s real in a synthetic world
Authored by Jeff Rutherford
Originally published in the February 2026 issue of Magnet Unlocked.
I was at a conference a couple of weeks ago, and one thing became impossible to ignore. Five or six presentations, different speakers, different research, all orbiting the same core problem: deepfakes, synthetic media, and altered images.
So the question remains: how do you know whether an image has been modified?
We’re good at suspicion, bad at certainty
Most of us are reasonably confident when something looks wrong. We’re far less confident when we’re asked to explain, in a defensible way, why something should be trusted.
The usual techniques come up quickly. Lighting inconsistencies. Shadow direction. Pixel artifacts. Perspective issues. When those feel inconclusive, attention shifts to metadata: device model, timestamps, GPS coordinates, application tags. Metadata feels authoritative because it reads like a narrative. And it isn't just humans struggling: recent studies have suggested that AI-based deepfake detectors are often no better at spotting manipulated content than a human guessing at random.
The problem is that every one of those signals can be easily altered. AI can rewrite metadata just as cleanly as it can fabricate an image. At that point, relying on metadata alone starts to feel like trusting a witness who knows exactly what you want to hear.
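The point is easy to demonstrate. The sketch below is a toy illustration, not a real EXIF editor; the byte layout is invented for the example. But it shows what metadata is at the file level: bytes that can be swapped without touching anything else.

```python
# Toy illustration: metadata fields in a file are ultimately just bytes,
# and bytes can be rewritten without touching the image content.
# The layout below is invented for this example; real EXIF is far more
# complex, but the principle is the same.

fake_file = (
    b"\xff\xd8"              # JPEG start-of-image marker
    b"MAKE=Apple;"           # pretend device-model tag
    b"DATE=2026:02:01;"      # pretend timestamp tag
    b"<pixel data...>"       # pretend image payload
)

# "Editing" the provenance story is a one-line byte substitution.
tampered = fake_file.replace(b"MAKE=Apple;", b"MAKE=Nikon;")
tampered = tampered.replace(b"DATE=2026:02:01;", b"DATE=2019:07:04;")

print(tampered)
```

The "pixel data" is untouched, the image would render identically, and yet the file now tells a completely different story about where it came from.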
Stop judging images, start examining files
That’s where my thinking has changed. Instead of asking whether an image looks authentic, I’m far more interested in what the file itself can tell us.
This is the space Magnet Verify operates in. At its core, Verify is about provenance through structure. It doesn’t attempt to assess visual realism. It examines how a file is constructed internally.
Every device, operating system, and piece of software assembles media in slightly different ways. Those differences don’t live in the pixels. They live in the structure of the file.
Structure tells a story metadata can’t
If I take a photo with my iPhone, that file has a structure consistent with that device family and operating system. If I then open that same image in Adobe and re‑save it, the structure changes.
This happens even if:
- no pixels are modified
- the image looks identical
- the metadata still claims “iPhone”
The file has been reprocessed, and that process leaves indicators. From a forensic standpoint, the file is no longer in its original state.
This is often the most counterintuitive realization for people. Obvious manipulation is not required for forensic change to occur. Simply opening and re‑saving a file is enough to alter its structure.
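To give a rough sense of what "structure" means here, the sketch below walks the segment markers of two synthetic JPEG-like byte streams. This is a simplified illustration, not how Verify works internally, and real forensic parsing goes much deeper. Both streams could carry identical metadata claims, yet the order and set of segments differ, and that is exactly the kind of signal structural analysis reads.

```python
import struct

def marker_sequence(data: bytes) -> list:
    """Walk JPEG segment markers and return their order.

    Simplified: stops at SOS (0xFFDA), where entropy-coded
    image data begins.
    """
    assert data[:2] == b"\xff\xd8"          # SOI: start of image
    names = {0xE0: "APP0", 0xE1: "APP1", 0xDB: "DQT",
             0xC4: "DHT", 0xDA: "SOS", 0xFE: "COM"}
    out = ["SOI"]
    i = 2
    while i < len(data):
        assert data[i] == 0xFF              # every segment starts 0xFF
        kind = data[i + 1]
        out.append(names.get(kind, f"FF{kind:02X}"))
        if kind == 0xDA:                    # start of scan: stop here
            break
        # Segment length is big-endian and includes its own two bytes.
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        i += 2 + length
    return out

def seg(marker: int, payload: bytes) -> bytes:
    """Build one marker segment: 0xFF, marker, length, payload."""
    return b"\xff" + bytes([marker]) + struct.pack(">H", len(payload) + 2) + payload

# Synthetic "camera original": Exif APP1 right after SOI.
camera_style = (b"\xff\xd8"
                + seg(0xE1, b"Exif\x00\x00")
                + seg(0xDB, b"\x00" * 4)
                + b"\xff\xda")

# Synthetic "opened and re-saved by an editor": a JFIF APP0 appears
# and the segment order changes, even though the Exif block survives.
resaved_style = (b"\xff\xd8"
                 + seg(0xE0, b"JFIF\x00")
                 + seg(0xDB, b"\x00" * 4)
                 + seg(0xE1, b"Exif\x00\x00")
                 + b"\xff\xda")

print(marker_sequence(camera_style))
print(marker_sequence(resaved_style))
```

Same metadata claim, same notional pixels, different internal construction. That difference is what survives even when every human-readable field has been made to lie.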
Why this isn’t a “deepfake detector”
This is also why I push back on describing Verify as a deepfake detector. That framing is too narrow and, frankly, misleading.
In many cases, the critical question isn't whether something was generated by AI. It's whether the file originated from a camera at all. AI-generated and heavily processed images frequently carry toolchain indicators, such as FFmpeg signatures, showing the last thing that touched them. That doesn't answer questions of intent or context, but it clearly establishes that the file is not a camera original.
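One simple form this kind of indicator takes, sketched below with an illustrative helper and signature list that are my own examples rather than Verify's actual method: many encoding tools leave recognizable strings in their output. FFmpeg's libraries, for instance, embed version tags like "Lavf" (libavformat) in container metadata unless explicitly told not to. Even a naive scan for such signatures separates "touched by an encoding toolchain" from "plausibly a camera original."

```python
# Illustrative sketch, not a forensic-grade check: scan a file's raw
# bytes for well-known encoder signatures. FFmpeg embeds strings such
# as b"Lavf..." (libavformat) and b"Lavc..." (libavcodec) in output
# metadata by default. The signature list here is a small example.

KNOWN_TOOLCHAIN_SIGNATURES = {
    b"Lavf": "FFmpeg libavformat",
    b"Lavc": "FFmpeg libavcodec",
}

def toolchain_hits(data: bytes) -> list:
    """Return the names of known toolchain signatures found in the bytes."""
    return [name for sig, name in KNOWN_TOOLCHAIN_SIGNATURES.items()
            if sig in data]

# A synthetic stand-in for the metadata region of a re-encoded video:
sample = b"...moov...hdlr...Lavf61.1.100..."
print(toolchain_hits(sample))
```

A hit here proves nothing about intent, but it does prove the file passed through an encoding pipeline, which is often the decisive fact.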
That distinction alone can be decisive.
Provenance cuts both ways
Structural analysis isn’t just about disproving authenticity. It can also support it.
Sometimes the structure is consistent with a camera original from a specific device family or operating system. Other times it points to software involvement or reprocessing. In some cases, it helps identify additional leads rather than closing questions.
The value is not in forcing a binary answer, but in narrowing uncertainty.
What this changes in court and beyond
In court, this matters immediately. Consider a body-worn camera video where the allegation is that footage was altered. Being able to show that the last process applied to the file was the official extraction workflow, and nothing else, fundamentally changes the discussion. You move from competing testimony to demonstrable file history.
The same logic applies in the private sector. Expense receipts. Accident photos. Insurance claims. Increasingly, decisions are made based on submitted images with no physical verification. Provenance analysis introduces a layer of scrutiny that many of these workflows simply never had.
Where we're heading
The realistic objective is to get better at explaining provenance. To say, clearly and conservatively, how a piece of media came into existence and what touched it along the way.
Tools like Magnet Verify belong in that future. Not as arbiters of truth, but as instruments of consistency. They help examiners ground conclusions in repeatable analysis and explain those conclusions without overreaching.
I think we’re moving toward a world where this kind of analysis becomes expected. Whether in court, corporate investigations, or even on public platforms, the ability to distinguish a camera original from a reprocessed file will carry real weight. We’re not losing the ability to tell truth from fiction. We’re being forced to be more precise about how we establish truth. For those of us in digital forensics, that’s not a crisis. It’s simply the work, evolving to meet the moment.