What’s New or Under Appreciated in Autopsy (slides)
The annual talk about what is new in Autopsy, and this year we’re going to spend some time going over some of its underappreciated features. Did you know you could automate your investigation checklist? Did you know you could get an alert each time a file type was found? Did you know there is an inbox that receives messages when hash hits are found? This talk will focus on what’s new and on the non-standard features of Autopsy, but we’ll also spend a few minutes covering the basics for those who don’t know them.
Autopsy is an open source digital forensics platform that is widely used around the world and focuses on ease of use. Each release sees tens of thousands of downloads.
Clearing the Fog on Cloud Forensics (slides)
Vassil Roussev & Shane McCulley
University of New Orleans
Most work on web app (SaaS) forensics focuses on fiddling with leftovers in the client cache. As our talk will show, this is forensically unsound (incomplete) as it misses rich historical data, such as file revisions. In the case of Google Docs, the entire editing history of the document can be retrieved and played back.
We present a suite of tools that allow:
a) complete, single-tool API-based acquisition from four major cloud drive providers: Dropbox, Box, Google Drive, and OneDrive;
b) acquisition, storage, and playback of Google Docs documents, including historical comments and suggestions, and deleted embedded images;
c) FUSE-based remote mounting and selective acquisition of a cloud drive allowing local tools to be applied directly on cloud data; further, it allows users to create virtual folders based on queries of the rich file metadata (100+ attributes) provided by services and to time travel to the state of the drive as of a particular time;
d) the development of generic cloud forensic tools based on a new API that is similar to POSIX, but provides higher level abstractions to accommodate cloud data; the apps will work with all services for which a service-specific driver is provided (currently: Dropbox, Box, Google Drive, OneDrive).
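The virtual-folder and time-travel ideas in (c) can be sketched with toy in-memory metadata. This is only an illustration of the concept; the field names below are hypothetical, not the actual attributes exposed by the cloud services, and a real implementation would operate on per-revision records fetched over each provider's API.

```python
from datetime import datetime, timezone

# Hypothetical cloud-file metadata records; real services expose 100+
# attributes per file, but a handful is enough to sketch the idea.
files = [
    {"name": "report.docx", "owner": "alice", "size": 48213,
     "modified": datetime(2016, 3, 1, tzinfo=timezone.utc)},
    {"name": "budget.xlsx", "owner": "bob", "size": 10240,
     "modified": datetime(2016, 5, 20, tzinfo=timezone.utc)},
    {"name": "notes.txt", "owner": "alice", "size": 512,
     "modified": datetime(2016, 6, 2, tzinfo=timezone.utc)},
]

def virtual_folder(records, predicate):
    """A 'virtual folder' is just the subset of files matching a metadata query."""
    return [r["name"] for r in records if predicate(r)]

def as_of(records, when):
    """'Time travel' (simplified): show only files already modified by the
    given time; a real tool would select the latest revision at that time."""
    return [r["name"] for r in records if r["modified"] <= when]

alice_files = virtual_folder(files, lambda r: r["owner"] == "alice")
march_view = as_of(files, datetime(2016, 4, 1, tzinfo=timezone.utc))
print(alice_files)  # ['report.docx', 'notes.txt']
print(march_view)   # ['report.docx']
```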
Autopsy as a Service – Distributed Forensic Compute that Combines Evidence Acquisition and Analysis (slides)
Dan Gonzales & Zev Winkelman
To shrink growing law enforcement digital evidence processing backlogs, we have added parallel ingestion and processing capabilities to Autopsy and Bulk Extractor and have named the result the Digital Forensics Compute Cluster (DIGIFORC2). This briefing will describe the design and performance of DIGIFORC2, which will reduce processing time for large and complex forensics investigations. More recently, a collaborative branch of Autopsy has been developed that enables more than one forensics investigator to work with evidence from a subject hard drive and to share processed results over a network using a Postgres database, a Solr core, and an ActiveMQ message broker. We extend the capabilities of Autopsy Collaborative to compute clusters and cloud-native architectures, so key digital forensics tasks can be accomplished simultaneously by a scalable array of cluster compute nodes. This is done by making use of open-source stream processing applications and other state-of-the-art software tools, such as Apache Spark, Docker, and Kubernetes, that can run in a distributed computing environment. The briefing will conclude with a presentation of test results for evidentiary hard drives of different sizes. DIGIFORC2 processing times will be compared with those of a single computer running Autopsy Collaborative.
Rekall – We can remember it for you wholesale… (slides)
Rekall is the exciting new memory forensic framework used daily on thousands of machines in an enterprise. In this talk we introduce some of the new features of the framework, and discuss some of the applications in enterprise wide incident response, rapid triaging and automation.
Show me all of your artifact! (slides)
Matthew Seyer & David Cowen
G-C Partners, LLC
Tools show you a lot of the artifact, but what you really said was “Show me all of the artifact”. Often, representing a forensic artifact as a flat structure (such as CSV or XLSX) causes data to be lost. Using SQLite, you can store all of the artifact in tool output via JSON objects. With more and more tools utilizing SQLite to store artifact output, this is a handy technique for developers to use when parsing artifacts that have nested data structures. In this session we will look at how to store and query forensic artifacts as JSON objects in SQLite from Python. We will also look at an MFT parser that stores ALL of the artifact using these JSON objects.
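A minimal sketch of the idea from Python, assuming SQLite's json1 extension is available (it is bundled with most modern SQLite builds). The record fields below are illustrative, not the output of any particular parser:

```python
import json
import sqlite3

# A parsed artifact with nested structure (e.g., one MFT record) that a
# flat CSV row would truncate; field names here are illustrative.
record = {
    "filename": "evil.exe",
    "standard_info": {"created": "2016-01-02T03:04:05Z",
                      "modified": "2016-01-05T00:00:00Z"},
    "data_runs": [{"offset": 1024, "length": 8},
                  {"offset": 9216, "length": 4}],
}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE artifacts (id INTEGER PRIMARY KEY, record TEXT)")
# Store the whole nested artifact as one JSON document.
conn.execute("INSERT INTO artifacts (record) VALUES (?)",
             (json.dumps(record),))

# With json1, nested fields remain queryable with json_extract().
row = conn.execute(
    "SELECT json_extract(record, '$.filename'), "
    "json_extract(record, '$.standard_info.created') FROM artifacts"
).fetchone()
print(row)  # ('evil.exe', '2016-01-02T03:04:05Z')
```

Because the full JSON document is stored, nothing is flattened away: the nested data runs can still be recovered later with `json.loads()` or queried with `json_each()`.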
Timeline Analysis in Autopsy (slides)
Timeline analysis can be an important digital forensics analysis technique to figure out what happened on a system. The Autopsy Timeline module was introduced at OSDFCon 2014, providing a native timeline generation and visualization capability to Autopsy. Since then, there have been several improvements and new features added to help integrate timeline analysis into the Autopsy workflow. This talk will provide an overview of the Timeline module emphasizing new and improved features including: event pinning capabilities, the new list view, and a more streamlined visualization in the details view. The Timeline module uses timestamp information from files and other blackboard artifacts that have time stamps, such as web activity.
AFF4: The New Standard in Forensic Image Format, and Why You Should Care (slides)
AFF4 Working Group
This seminar will outline why a new forensic container standard is needed and outline recent efforts to standardize the Advanced Forensic Format 4 forensic container (AFF4). Originally proposed in 2009 by Michael Cohen, Simson Garfinkel, and Bradley Schatz, the AFF4 forensic container supports a range of next generation forensic image features such as storage virtualisation, arbitrary metadata, and partial, non-linear and discontinuous images. Current AFF4 implementations include Rekall, The Pmem suite of Memory Acquisition tools, Evimetry Wirespeed, and Google Rapid Response.
The seminar will present an introduction to the format, outline the current state of adoption within the forensic ecosystem, and announce the availability of open source implementations.
Mozilla InvestiGator: Investigate 1,000 Endpoints in 10s (slides)
A few years ago, the number of systems Mozilla operates outgrew the capabilities of existing forensic and endpoint-security tools. The MIG project was started to meet the need to investigate the entire infrastructure in real time. MIG is a distributed platform composed of agents deployed across Mozilla’s servers. The agents provide remote access to the file system, network, and memory of endpoints. MIG is massively parallelized. It can run targeted searches on thousands of endpoints in as little as ten seconds, while allowing for larger scans that take hours to complete. The architecture of MIG is cross-platform and modular. Entirely written in Go, agents can run on Windows, MacOS, and Linux. Capabilities can be added via modules that are compiled and shipped with the agents. During the talk, we will discuss how the use of Go simplifies the architecture of MIG and helps build security tools with a minimal CPU and memory footprint.
Efficient Whole Disk Storage and Search (slides)
University of Washington
We describe a software system for systematic whole-disk acquisition, storage, and search. The system is designed to handle many disks being imaged many times, perhaps once a week or even daily. The acquisition and storage phases use both sector-sequence deltas and compression to optimize both time and space requirements. Using trusted boot media, disk contents can be imaged without reliance on potentially infected system software. Use of FUSE makes stored disks fully searchable, allowing traversal by tools such as Sleuthkit for accurate volume system and file system layout. A search feature organized around an AMQP message bus permits efficient responses to Indicator-of-Compromise search queries, in STIX or other formats.
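A toy illustration of the sector-delta idea described above, not the system's actual on-disk format: when the same disk is imaged repeatedly, only the sectors that changed since the previous image need to be stored, and those can be compressed.

```python
import zlib

SECTOR = 512

def sector_delta(old_image, new_image):
    """Return {sector_index: new_bytes} for sectors that differ between
    two same-sized disk images; unchanged sectors are not stored."""
    delta = {}
    for i in range(0, len(new_image), SECTOR):
        if old_image[i:i + SECTOR] != new_image[i:i + SECTOR]:
            delta[i // SECTOR] = new_image[i:i + SECTOR]
    return delta

# Two toy "images": the second differs in exactly one sector.
base = bytes(4 * SECTOR)
changed = bytearray(base)
changed[2 * SECTOR:2 * SECTOR + 4] = b"ABCD"

delta = sector_delta(base, bytes(changed))
print(sorted(delta))  # [2] -- only sector 2 changed

# Compress what remains; re-imaging a mostly static disk then costs
# only a small compressed delta per run.
stored = {idx: zlib.compress(data) for idx, data in delta.items()}

# Reconstruction: apply the decompressed delta over the prior image.
restored = bytearray(base)
for idx, blob in stored.items():
    restored[idx * SECTOR:(idx + 1) * SECTOR] = zlib.decompress(blob)
assert bytes(restored) == bytes(changed)
```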
Triage for Windows Systems with Autopsy (slides)
Luca Taennler & Mathias Vetsch
HSR University of Applied Science Rapperswil
We are currently implementing some modules for Autopsy to improve forensic triage for Windows computers.
The main goal is to automatically mark as many files as possible as known-good.
By verifying signatures under Microsoft’s Authenticode code-signing standard and comparing files against a golden image, we were able to exclude a considerable amount of data.
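The golden-image comparison can be sketched as a hash-set exclusion. This is a simplification of the module (Authenticode verification itself requires parsing PE signature structures and is omitted), and the function names are ours, not the module's:

```python
import hashlib
from pathlib import Path

def sha256_file(path):
    """Hash a file incrementally so large evidence files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_golden_set(golden_root):
    """Hash every file in a clean reference (golden) installation."""
    return {sha256_file(p) for p in Path(golden_root).rglob("*") if p.is_file()}

def triage(evidence_root, golden_hashes):
    """Split evidence files into known-good (hash present in the golden
    image) and unknown (still in need of examiner review)."""
    known, unknown = [], []
    for p in Path(evidence_root).rglob("*"):
        if p.is_file():
            (known if sha256_file(p) in golden_hashes else unknown).append(p)
    return known, unknown
```

Every file excluded this way shrinks the set an examiner must look at by hand, which is the point of triage.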
We developed another module that checks files against online malware databases, providing a preliminary assessment of the files that have not yet been marked.
Additionally, we added support for BitLocker-encrypted volumes to the Linux version of Autopsy.
Within the limited time frame of our bachelor thesis we were able to create several useful modules for the Autopsy platform. We would be happy to show our work at #OSDFCon 2016.
C#’ening Your Forensic Tools (slides)
Kroll Cyber Security
This talk and workshop will discuss various free, open source forensic tools, including parsers for lnk files, jump lists, Registry hives, amcache, prefetch, shellbags, and more. It will include discussions of the importance of each artifact, the binary layout of each artifact, and examples of using tools to parse the artifacts. The lab will allow for hands-on usage of the tools and a deeper exploration of the layout of the artifacts as they exist on disk. Many of the tools can export data to a variety of formats, which allows for integration into a larger tool chain.
Additionally, we will discuss examples of integrating the core parsers, which lets users embed them in other applications without relying on any existing command line or GUI tool.
Osquery: Cross-platform, Lightweight, and Performant Host Visibility (slides)
Osquery was released as an open source product by Facebook in October 2014. It is a host instrumentation tool for Ubuntu, CentOS, and OS X (and now Windows!). osquery makes low-level operating system analytics and monitoring both performant and intuitive.
This talk will walk through why we created osquery, how we use osquery at Facebook to improve our security, and how you can too! We deploy osquery on every laptop and to our entire production server fleet. Come learn how to: use SQL to search for specific artifacts, log differential changes of common persistence methods, collaborate and share detection techniques, configure file integrity monitoring, and enable process auditing. If you want to play along or get started early, check out: https://osquery.io
Who Watches the Smart Watches (slides)
Wearable technology use has accelerated over the past 18 months, to the point where even most fitness devices now have notification capabilities that let users treat the device as an extension of their mobile phone. This talk will explore open source parsers for two mobile-operating-system-independent devices: a Pebble Time and a Microsoft Band 2. The talk will highlight the functionality and data stored on the device, as well as the data stored on the mobile phone with which the wearable is associated.
Bringing it all together: Semiautomatic assault analysis (slides)
Things were looking great at Greendale Polytechnique, a moderately sized institution of higher learning, until they received a mysterious email alerting them to some malicious network traffic… Greendale reached out to a small team of dedicated cyber operatives to help them understand and contain the threat. The scope is huge, and both time and budget are limited. Will the cyber experts and their open source tools be enough to save Greendale, or will it be Cybergeddon?
This talk will introduce a new tool that brings together the reach of GRR, the processing depth of Plaso and the analytic and collaborative capabilities of Timesketch. You’ll learn how to rapidly triage large numbers of hosts, track malware across an enterprise, and spin your CPU fans really quickly.