Forensic examiner, researcher, and blogger Alexis Brignoni has been hard at work. Over the past two years, he’s founded the Initialization Vectors blog and a GitHub account where you can find repositories for SQLite queries and DFIR process walkthroughs, as well as individual scripts. He’s also a frequent contributor to DFIR Twitter.
Last month, Alexis contributed three of his scripts to the Magnet Artifact Exchange. (Update: since this blog was posted, we’ve approved 10 additional scripts!) We sat down with him to talk about how the process went, why he focuses on app artifacts, and how he recommends making the time and room for sharing in the DFIR community:
Magnet Forensics: Since starting your blog last year, most of your reviews have focused on apps versus tools or books (though some of each make an appearance). Apps could seem like a Sisyphean task given how many there are – what made you focus on them, vs. another forensic exploration?
Alexis Brignoni: It is a fact that the present and the foreseeable future will be app-based. I know many people that do not own a single computer but do own and use multiple mobile devices. This is a space that all digital forensics-minded folks will have to deal with at some point.
It is true that there are millions of apps available for both iOS and Android, and each app might have its own unique way of storing data: SQLite databases, JSON files, flat files, or some combination of these, even all at once. Is it a challenge? Certainly, but I believe the community is up to the task.
As I research these apps and how they store data, I think of ways I can translate the analysis into discrete steps that can be used by others, disseminated, and automated. I hope to focus more and more on automation and re-usability in the coming year. Also, if something doesn’t challenge you, why do it?
MF: In addition to your blog, you offer two GitHub repositories for the entire community to share – both SQL queries, and “Choose Your Own DFIR Adventure” Twines (which I thought was a fun and informative walkthrough!). What gap or need did you see in the community that these resources fill, and how has the community responded?
AB: Digital forensic analysis is an ever-evolving field. If we don’t strive to evolve with it, we will be left behind. It is clear there will always be a need for educational resources and places where such timely information can be gathered and shared. If we blindly depend on pushing buttons on tools, without a broader and ever-growing understanding of the current digital forensic landscape, our analysis will be poor and lack the depth our consumers expect from us.
To do better we have to constantly grow more, learn more, share more. I have been inspired by Phill Moore, Jessica Hyde, Brett Shavers, and others, who as leaders in the digital forensics community have made it a priority to aggregate community knowledge and make it accessible to others.
Personally, I find great pleasure in having folks contribute to the SQL repository and in hearing from others who have benefited from it. The response from the community has been positive from day one. Sarah Edwards kindly contributed a large number of queries she had developed, and not too long ago I heard from someone who was able to recover GPS data from a marathon run by using one of the queries in the repository.
Still, even if no one ever thanks us for our efforts, we should continue learning and sharing. In the past, I have described sharing as “its own good and virtue, a moral imperative. The ripple effects of what was shared might never reach back to us, but it is important that we continue throwing those rocks in the water.” Create those ripples. A big rock is not needed; a little one will more than suffice. The farther these ripples expand away from us, the larger they will become.
MF: Tell me a little about what was involved with adapting your scripts as custom artifacts for the Artifact Exchange. What was the process like?
AB: The process was really straightforward. After identifying the database and the queries needed to extract the information, it is as simple as putting them into the required XML format. As long as the artifact names are unique and the calls to the database are successful, the artifact will work as intended. In some instances, one can go from query to custom artifact in less than 15 minutes. For analysis that goes beyond SQL queries, one can incorporate Python scripts.
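The query-to-script workflow Alexis describes can be sketched in Python with the standard-library sqlite3 module. Everything here is illustrative: the database file, table, and column names are hypothetical stand-ins, since every real app has its own schema.

```python
import sqlite3

# Hypothetical schema: a messaging app storing chats in "messages.db"
# with a "messages" table (sender, body, timestamp). Real apps differ,
# so the first research step is always identifying the actual schema.
QUERY = """
SELECT sender,
       body,
       datetime(timestamp, 'unixepoch') AS sent_at
FROM messages
ORDER BY timestamp;
"""

def extract_messages(db_path):
    """Run the query against an app database and return rows as dicts."""
    con = sqlite3.connect(db_path)
    con.row_factory = sqlite3.Row  # lets us address columns by name
    try:
        return [dict(row) for row in con.execute(QUERY)]
    finally:
        con.close()
```

Once a query like this is validated against test data, the same SELECT statement can be dropped into a custom artifact definition, or the whole function reused in a stand-alone script.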
MF: Your contributions to the Magnet Artifact Exchange include Slack for iOS, and VLC and MX Player for Android. Why did you choose to write these as custom artifacts after sharing them already on GitHub, both as stand-alone scripts and as queries in the DFIR SQL Query Repository?
AB: I believe that knowledge that is not shared is knowledge that is lost. To this end, I try to share what little I learn through my personal research and testing with the broader community in as many places and in as many formats as possible.
Recently, I have been reading Harlan Carvey’s Investigating Windows Systems. He explains how “it is a good idea to take a break and look back over the work that you just completed, and see what you can ‘bake back into’ your tools and processes.”
I agree completely. If our tools provide a way to automate and bring back to our attention previously gained knowledge, we should definitely use it. Custom artifacts are one of those methods: they not only save us from reinventing the wheel at every new analysis, but also let us share with others at a practical “take it and use it” level.
MF: In a recent blog you talked about how much time research and writing can take. What keeps you coming back to it, and what’s your advice to community members worried about time commitments?
AB: Researching this space is a hobby and something that I love doing, hence I devote a lot of time to it. Still, if you look at my blog, repository, and Twitter feed, they don’t contain anything special or out of the ordinary. They are just a stream of little things I’ve learned in my research that, when put together, are more than I had hoped for.
Everybody has something to contribute. Many of the things I have written about would never have existed had someone else not done the groundwork first. To be part of the community and to add to that shared knowledge pool, the only thing required is to make what you know available to others. It doesn’t have to take hours a week; just a few minutes to share what you know so others can build on your work. Even if it is something you believe others know already, share it still. Your way of presenting that knowledge might make a world of difference to someone who has never seen it the way you do.
Alexis, thanks for your time and all that you contribute to the community! We look forward to future custom artifacts as well as anything else you’d like to share.
Have questions, or want to submit a guest blog or interview? Email me: email@example.com.