Asking the right questions: How to get the most out of Intelligent Insights
By Brandon Epstein
AI is already helping digital investigators surface overlooked data in record time. But its effectiveness still depends on the human behind the keyboard, and rightly so. Inside Magnet Review’s Intelligent Insights, powered by Magnet AI, every investigation begins with a simple step: the investigator or attorney asking the system a question.
That question will determine whether the answer you get back is useful. As your investigative partner, our goal is to help you ask better ones.
What is a prompt?
In Intelligent Insights, a prompt is more than just a question. It’s a written instruction (or set of instructions) that guides how the system analyzes data and generates a response. A strong prompt clearly defines the task, provides relevant context, and sets appropriate expectations for the output.
Clear, thoughtful prompts help align AI outputs with real investigative goals, reduce ambiguity, and produce results you can trust.
What can I prompt in Intelligent Insights?
You can prompt Intelligent Insights to generate case summaries with key findings, surface relationships between people, build timelines around specific events, and uncover patterns across the evidence. It works across hundreds of artifact categories, including:
- contacts
- call logs
- messages
- media
- any data source available in Magnet Review
AI doesn’t think like a human. It can’t infer intent or fill in gaps based on what an investigator is thinking. The quality of the result is therefore directly tied to the quality of the prompt.
Intelligent Insights can identify common communication patterns between two contacts, for example, but can’t determine whether a user intentionally clicked on a specific link to download an image. That analysis requires an experienced examiner’s detailed review.
What good AI prompting looks like
If you’ve spent time in a courtroom, crafting a good prompt will feel familiar. It’s a lot like asking a question on direct examination: a clear, focused question, free of leading language, draws out a useful response. Vague or biased questions, on the other hand, can produce unreliable answers or draw objections.
The same principle applies here. Effective prompts use clear, direct instructions that define exactly what the system should analyze and return.
Examples of strong prompts
- Summarize all device activity on April 20, 2026.
- Tell me the three most communicated-with contacts on this device.
- Who did the user call on January 1, 2026?
- Is there any indication of drug trafficking on the device?
- Is there any talk of pawn shops or selling goods on the device?
- Identify any cryptocurrency use.
- Who is Clay Mickle and what is his relationship to Sue Stevens?
- What are the most used apps on this device?
- Compare the conversations across all devices in this case and identify any common themes.
Avoid unnecessary detail, assumptions, or language that suggests a desired outcome. Don’t tell Intelligent Insights what you expect to find. Let the data speak for itself.
What happens when an AI prompt falls short
As a purpose-built tool for digital investigations, Intelligent Insights includes a built-in prompt checker designed to prevent inaccurate or misleading results. The checker flags prompts that are overly vague, leading, biased, or outside the tool’s intended scope, since any of these could produce confusing or suboptimal outputs.
When a problematic prompt is identified, Intelligent Insights will not return a result. Instead, it offers suggested alternatives so you can refine your question and decide on the best path forward with confidence.
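Magnet doesn’t publish the checker’s internals, but the general technique of screening prompts before analysis can be sketched with simple rules. The patterns, issue categories, and suggestions below are illustrative assumptions only, not Magnet’s implementation:

```python
import re

# Hypothetical rule-based prompt screener, loosely inspired by the behavior
# described above. Every rule and suggestion here is made up for illustration.

LEADING_PATTERNS = [r"\bfind evidence that\b", r"\bprove that\b"]
LEGAL_OPINION_PATTERNS = [r"\bcriminal activity\b", r"\bguilty\b"]
VAGUE_PROMPTS = {"find all evidence.", "show me everything."}

def check_prompt(prompt: str) -> dict:
    """Return a verdict and, for flagged prompts, a suggested refinement."""
    p = prompt.strip().lower()
    if p in VAGUE_PROMPTS:
        return {"ok": False, "issue": "overly broad",
                "suggestion": "Name a date range, person, or topic to focus the analysis."}
    for pat in LEADING_PATTERNS:
        if re.search(pat, p):
            return {"ok": False, "issue": "leading language",
                    "suggestion": "Ask an objective question about the underlying activity."}
    for pat in LEGAL_OPINION_PATTERNS:
        if re.search(pat, p):
            return {"ok": False, "issue": "requests a legal opinion",
                    "suggestion": "Ask about specific artifacts, people, or time frames."}
    return {"ok": True, "issue": None, "suggestion": None}

print(check_prompt("Find evidence that Ruth killed Mary."))
print(check_prompt("Summarize all device activity on April 20, 2026."))
```

A production checker would be far more sophisticated, but the design choice is the same: refuse to answer a flawed prompt and hand the investigator a better question instead.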
Examples of AI prompts that fall short, and how to fix them
The table below shows common pitfalls and how to reframe each prompt to get a usable result.
| Prompt | Why it falls short | Ask instead |
| --- | --- | --- |
| Is there any sign of criminal activity? | Asks the system to render a legal opinion that should be reserved for a qualified human. | Is there any information about a weapon purchase? What messages or files are associated with John Smith during January 2026? |
| Find evidence that Ruth killed Mary. | Contains leading or biasing language instead of asking for an objective response. | Tell me about Ruth’s activity during the morning of March 12, 2026. Are there any indications that Ruth did not like Mary? |
| Find all evidence. | Overly broad and lacks context about the case or what the investigator is looking for. | Summarize device activity around February 3, 2026. Is there any indication of drug trafficking on this device? |
| What was the weather in New York City on the date of the crime? | Asks for information outside of the case data. | Is there any discussion of the weather on March 1, 2026 on the device? Was anything happening in New York around March 1, 2026 on the device? |
| Change the device owner’s name from John Smith to defendant. | Asks the system to alter data. | Not applicable. Intelligent Insights will surface relevant information from case data, but built-in guardrails prevent the system from modifying or deleting it. |
The bottom line
Intelligent Insights is a force multiplier, not a substitute for your investigative judgment and expertise. The better your prompts, the more valuable the insights, and the faster you can get to what matters in your case.
Start modernizing your investigations
With Review Lite, you can experience Magnet Review’s evidence review capabilities, including Intelligent Insights, during early access at no cost. You’ll be able to store up to three cases and share them with an unlimited number of reviewers.
Create your free account to get started.
Brandon Epstein, Technical Forensics Specialist at Magnet Forensics, is a former police detective and co-founder of Medex Forensics, which Magnet acquired in 2024. Brandon specializes in AI and media authentication and is active in many digital forensic community organizations.
Watch Brandon Epstein’s AI Unpacked webinar series on how responsible AI is shaping the future of digital forensics.