2015 Conference Program


Main Track


Autopsy: Wait, there are still more features?  (slides)

Brian Carrier
Basis Technology
The annual review of all things Autopsy. Learn about Autopsy 4.0, what we’ve added since the last OSDFCon, and get a brief introduction if you haven’t seen Autopsy before. Autopsy is an open source digital forensics tool with all of the standard features that you need for an investigation. Each release has tens of thousands of downloads, and the tool is used by law enforcement and corporate investigators around the world.

In this talk, we’ll cover new features, like case collaboration, carving, file alerting, timelines, the image gallery, and Python scripting. We’ll also touch on the basics, like keyword searching, hash analysis, web artifacts, and EXIF. The idea for Autopsy 3 came from the first OSDFCon, and this presentation is the annual update to the community to show progress, raise awareness, and get feedback.


Feasting off the Hunt  (slides)

The Volatility Development Team
The Volatility Foundation

Proactive threat hunting is a critical component of a mature information security strategy. Unfortunately, the hunting process does not stop once the adversary has been detected and expelled from your infrastructure. Your ultimate success will be measured by how quickly you can reap the spoils of your labor by extending your forensics arsenal with capabilities to detect the attacker’s evolving tools and tactics.

This presentation will start by discussing several high-profile incidents involving new PlugX variants. In order to stay hidden, these variants find and alter specific data in RAM in ways that fall outside the normal range of malware techniques (not the typical DKOM, etc.).

We will walk through how we determined that these PlugX variants were different from all the others that have been surfacing over the years. As a finishing touch, attendees will see how we reverse engineered the malware and built new Volatility plugins to detect the memory modifications – which can now be used to identify similar behavior in future malware.

We will then move on to a separate malware sample that exploited a local privilege escalation bug. At the time, none of the memory forensics frameworks (or live analysis tools, for that matter) were able to detect this specific memory modification. While antivirus/YARA signatures may exist to identify the exploit being used, the aftermath (i.e., evidence left in RAM) is generic and can be used to detect any privilege escalation – so we developed a new Volatility plugin for this as well.

The talk will conclude by discussing the results of the 2015 Volatility Plugin Contest. The results of this contest have consistently pushed the boundaries of memory forensics and provided powerful new techniques for extending your forensics-hunting arsenal.


Python Autopsy: A Quick Intro to Scripting Autopsy  (slides)

Brian Carrier
Basis Technology

This talk is about writing Python modules for Autopsy. It provides an overview of the basic ideas to give you a starting point for writing a module as soon as you get home from the conference (or during it). Writing Python modules for Autopsy allows you to focus on interesting analytics and finding relevant evidence rather than worrying about file systems, carving, ZIP files, or UIs.

This talk will cover the module types in Autopsy, the basics of writing a Python module, and common issues that other developers have found over the past year. It is assumed that audience members have some basic Python skills.

Autopsy is an open source platform that is focused on ease of use and automation. It has several extension points for which plug-in modules can be developed.


New generation timeline tools: A case study and Plaso Parser Workshop  (slides)

Daniel White

A moderately sized institution of higher learning receives an ominous threat from a shadowy hacker group. A plucky band of misfits, armed only with open source forensic tools, is the college’s only hope. What happens next? Will our brave band of heroes be able to stop the cyber terrorists in time?

This talk will give you a good understanding of the new features in the Plaso and Timesketch forensic tools, as well as an insight into some of the analysis processes these tools enable. Rather than just talking about these features, you’ll see how they’re actually deployed in an investigative context.


Track 1


Collaborative Autopsy: Enterprise Open Source Forensics  (slides)

Richard Cordovano
Basis Technology

As case and device sizes increase, collaboration becomes critical; however, many labs do not have the resources for such an environment. The open source tool Autopsy now has collaborative features that allow multiple examiners to work on the same case at the same time. This allows cases to be completed more quickly and efficiently.

This talk covers the new collaboration features of Autopsy. It outlines the basic infrastructure required for databases and central storage, how to configure it, and the user experience while using it.

Autopsy is an open source digital forensics platform that is used by thousands of users around the world.


Inferring Past Activity from Partial Digital Artifacts  (slides)

Jim Jones
George Mason University

We will present the research results, experiments, and implementation of an Autopsy plugin which reasons over partial artifacts (sectors in unallocated space) to assess past application usage when the application in question has been uninstalled. Current approaches rely on residual Registry entries for Windows systems and residual allocated files for Windows and other platforms. Such approaches will fail to detect past application activity if the Windows Registry is sufficiently cleaned and residual allocated files are not identified or do not exist.

Uninstalling an application typically creates a number of deleted files, which are partially and possibly fully overwritten. We identify residual partial artifacts by matching sectors from unallocated space to a catalog of sectors known to be associated with specific application activity. Not all matching sectors are inferentially equal, so we apply weights to matching sectors based on their frequency in the application activity catalog. We are currently investigating additional weighting techniques, including sector entropy and relative partial artifact location on the media. We use the weighted matching sectors to compute a measure of how likely each full artifact is to have been previously present on the media of interest, and these full artifact likelihoods are then rolled up into specific application or activity likelihoods. For a given data source, the Autopsy plugin returns the likelihood of past application activity for each application in the activity catalog. The current catalog contains 16 common benign applications, but the catalog can be easily expanded and could include malware as well.
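The frequency-based weighting described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the plugin’s actual code: the inverse-frequency weight and the names `score_artifacts` and `catalog` are assumptions made for this sketch.

```python
from collections import Counter

def score_artifacts(unallocated_sectors, catalog):
    """Score each cataloged artifact by weighted sector matches.

    unallocated_sectors: iterable of sector hashes read from unallocated space.
    catalog: dict mapping artifact name -> list of sector hashes known to
             belong to that artifact's application activity.
    """
    # Weight each sector by the inverse of how often it appears across the
    # whole catalog: a sector unique to one artifact is strong evidence,
    # while a sector shared by many artifacts is weak evidence.
    frequency = Counter(s for sectors in catalog.values() for s in sectors)
    observed = set(unallocated_sectors)
    scores = {}
    for artifact, sectors in catalog.items():
        matched_weight = sum(1.0 / frequency[s] for s in sectors if s in observed)
        total_weight = sum(1.0 / frequency[s] for s in sectors)
        # Normalized score in [0, 1]: fraction of the artifact's total
        # evidential weight that was actually found on the media.
        scores[artifact] = matched_weight / total_weight if total_weight else 0.0
    return scores
```

In practice the likelihoods for full artifacts would then be rolled up per application, as the abstract describes.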

This work has been conducted in collaboration with NIST, whose DiskPrinting effort was instrumental in building the application activity catalog. This presentation results from research supported by the Naval Postgraduate School Assistance Grant/Agreement No. N00244-13-1-0034 awarded by the NAVSUP Fleet Logistics Center San Diego (NAVSUP FLC San Diego). The views expressed in written materials or publications, and/or made by speakers, moderators, and presenters, do not necessarily reflect the official policies of the Naval Postgraduate School, nor does mention of trade names, commercial practices, or organizations imply endorsement by the U.S. Government.


Rapid Recognition of Blacklisted Files and Fragments on Secondary Storage Media  (slides)

Michael McCarrin and Bruce Allen
The Naval Postgraduate School

Comparing hashes of files is an easy way to find matches quickly. It also has many limitations: it will never find files that are improperly carved, deliberately modified, or partially overwritten – in short, anything with even one bit flipped. We designed the Autopsy plugin SectorScope to solve this problem. Using SectorScope, an examiner can keep a local database of millions of files of interest and rapidly scan a disk image against it to find full or partial matches in allocated or unallocated space. An interactive visualization makes it easy to understand and explore the results. Databases can be easily shared and merged to facilitate collaboration.

Attendees will learn how to get started using the plugin and walk through a number of common applications.
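The core idea behind block-level matching can be sketched independently of the plugin itself. The following is a minimal illustration only, not SectorScope’s actual code; the 512-byte block size, the use of MD5, and all function names are assumptions for this sketch.

```python
import hashlib

BLOCK_SIZE = 512  # assumed sector size; matching is block-aligned

def block_hashes(data, block_size=BLOCK_SIZE):
    """Yield (offset, MD5 hex digest) for each full block in a byte buffer."""
    for off in range(0, len(data) - block_size + 1, block_size):
        yield off, hashlib.md5(data[off:off + block_size]).hexdigest()

def build_blacklist(files_of_interest):
    """Hash every block of every file of interest into one lookup set."""
    return {h for data in files_of_interest for _, h in block_hashes(data)}

def scan_image(image_bytes, blacklist):
    """Return offsets in the image whose block hash is blacklisted.

    A hit on even a single block flags a partial match that whole-file
    hashing would miss (e.g. a carved fragment or partially overwritten file).
    """
    return [off for off, h in block_hashes(image_bytes) if h in blacklist]
```

A real implementation would stream the image rather than hold it in memory and store the blacklist in an on-disk database.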


Introducing SQUID: A tool to ‘fuzzy match’ SQLite databases; don’t miss evidence because the app updated!  (slides)

Ryan Benson
Stroz Friedberg

The number of applications, on desktop and especially mobile devices, that use SQLite to store data valuable to investigators is increasing rapidly. The frequency at which these programs are updated is also increasing, and these updates often change how data is stored and can break forensic tools. SQLite Unknown Identifier (or SQUID) is an open source utility that locates both exact and near matches between a catalog of ‘known’ SQLite databases and unknown ones. SQUID uses a ranking system to quantify how similar one SQLite DB is to another and shows the user the top matches. Users can also ‘teach’ SQUID new databases from new applications or different versions of existing ones, then search for them.

Since SQLite stores all the relevant information inside one file, and that file doesn’t change across operating systems or file systems, SQUID can be used to identify SQLite databases from popular applications on a variety of desktop and mobile operating systems. It can be used to:
• Scan file system backups from phones
• Quickly triage database files carved from disk for interesting artifacts
• Perform a quality-assurance check to ensure that a ‘do-it-all’ mobile forensics solution isn’t missing information from a newer version of an application
• Scan a file system to see what SQLite databases are present and may have interesting information

Attendees will learn how SQUID works, how to use it in a variety of scenarios (on both desktops and mobile devices), how to ‘teach’ SQUID new applications, and how to integrate it into their existing DFIR workflow.
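One simple way to “fuzzy match” SQLite databases is to compare schema fingerprints. The sketch below is in the spirit of SQUID but is not its actual algorithm; the (table, column) fingerprint and the Jaccard ranking are assumptions chosen for illustration.

```python
import sqlite3

def schema_fingerprint(path):
    """Collect the set of (table, column) pairs from a SQLite database file."""
    con = sqlite3.connect(path)
    try:
        tables = [row[0] for row in con.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
        pairs = set()
        for table in tables:
            # PRAGMA table_info rows are (cid, name, type, notnull, dflt, pk)
            for col in con.execute(f'PRAGMA table_info("{table}")'):
                pairs.add((table, col[1]))
        return pairs
    finally:
        con.close()

def similarity(known, unknown):
    """Jaccard similarity of two fingerprints: 1.0 means identical schemas.

    An app update that adds a column lowers the score slightly instead of
    breaking the match outright - the point of fuzzy matching.
    """
    if not known and not unknown:
        return 1.0
    return len(known & unknown) / len(known | unknown)
```

Ranking an unknown database against a catalog is then just sorting the catalog by this score.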


Forensic Artifact Correlation via Elastic  (slides)

Matthew Seyer & David Cowen
G-C Partners, LLC

How do you start correlating forensic artifacts from multiple sources when no one tool will give you everything you need? In this talk you will see what can be done when you start centralizing reports from all your favorite tools to help you correlate artifacts via indexing with Elastic and some Python scripting. Quickly identify things like what was exfiltrated via external devices, the history of a given file, recent activity, and more.
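The kind of glue scripting described above can be sketched as converting a tool’s CSV report into an Elasticsearch bulk-API payload. This is an assumed illustration, not the speakers’ code: the index name, field names, and `source_tool` tag are all hypothetical.

```python
import csv
import io
import json

def csv_to_bulk(csv_text, index, source_tool):
    """Convert a tool's CSV report into Elasticsearch bulk-API NDJSON.

    Each row becomes one document, tagged with the tool that produced it so
    artifacts from different tools can be correlated in a single index.
    """
    lines = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        doc = dict(row)
        doc["source_tool"] = source_tool  # lets queries span multiple tools
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    # The bulk endpoint requires a trailing newline.
    return "\n".join(lines) + "\n"
```

The resulting payload would be POSTed to the cluster’s `_bulk` endpoint; once several tools feed the same index, correlation becomes a matter of querying on shared fields such as paths and timestamps.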


Track 2


NTFS Unstuck in Time  (slides)

Jon Stewart, Zack Weger
Stroz Friedberg

The $UsnJrnl, the $Logfile, and Volume Shadow Snapshots all record structured information about changes to an NTFS volume over time. Together, these data sources come tantalizingly close to being the holy grail of filesystem forensics. We will first discuss our approach to relating these artifacts together to illustrate how an NTFS volume has changed over time. We will then demonstrate our Python solution for generating this data, which we are open sourcing.


Turbinia: Cloud-scale forensics

Cory Altheide & Johan Berggren

Most discussions of cloud forensics focus on the challenges faced by traditional practitioners in this new environment. Comparatively little time has been spent discussing the forensic analysis *opportunities* raised by cloud computing. These opportunities include:
* Much less disruptive forensic acquisition
* Simpler, faster, and more effective post-incident remediation/restoration
* “Real Forensics” performed in “triage” timeframes (seconds/minutes vs. hours/days)

We will explore this last opportunity by demonstrating our flexible, automated forensic infrastructure and powerful workflows – join us as we show you the possibilities of forensics at scale.


FIDO: Automated Security Incident Response  (slides)

Rob Fry

Fully Integrated Defense Operation (FIDO) plays an important role in the defense of the Netflix corporate network. The premise of FIDO is simple: each year, companies receive an ever-increasing number of security-related alerts. Instead of hiring more analysts to comb through the endless stream of alerts, we automate the analysis to combat the barrage of information. Simply put, we integrate and then automate the manual human processes by codifying the logic and process used by threat analysts to provide consistent and reliable results. And because the code is configurable, you can customize the categorization, scoring, and results of FIDO to accommodate a company’s needs.


Live Response Collection Overview  (slides)

Brian Moran
BriMor Labs

The Live Response Collection toolkit is an open source, freely available tool compiled and streamlined to assist in efficiently gathering data from Windows, OS X, and Linux based operating systems. Additional functionality is built into the Windows version to gather volatile data, extract a memory dump, and create forensic images of mounted drives. The tool automates and simplifies the data collection process so more time can be spent performing analysis of the collected data.


Short Updates from Previous Speakers:

  • Mari DeGrazia – What’s Cooking with Google Analytics
  • Jon Stewart – Lightgrep 2015
  • Willi Ballenthin – Updates on Open Source Python Tools
  • Harlan Carvey – What’s Coming in Registry Analysis
  • Doug Koster – Automating the Computer Forensic Triage Process With MantaRay