Forensics.im Microsoft Teams Parser & Autopsy Plugin 🕵️‍♂️


Forensics.im is an Autopsy plugin for parsing the LevelDB stores of modern Electron-based instant messenger applications such as Microsoft Teams. Unlike the existing LevelDB plugin, Forensics.im also parses the binary .ldb files, which contain the majority of the entries. It identifies individual entities, such as messages and contacts, and presents these in Autopsy's blackboard view.

This parser has been tested using:

  • Microsoft Teams 1.4.00.11161 (Windows 10) with a free business organisation
  • Microsoft "Teams 2.0" (Windows 11) 48/21062133356 with a personal organisation

This plugin is an artefact of the Master's thesis Digital Forensic Acquisition and Analysis of Artefacts Generated by Microsoft Teams at Abertay University, Dundee, United Kingdom.


Microsoft Teams From a Forensic Perspective

If you are curious about the artefacts that are generated by Microsoft Teams, I would like to refer you to my in-depth blog post on my personal website. It discusses in great detail which files are created by Microsoft Teams and how these could be utilised in a forensic investigation.

Demo

Autopsy Module


Quickstart

Autopsy Module Installation

This module requires Autopsy v4.18 or above on a Windows-based system.

To install the Microsoft Teams parser for Autopsy, please follow these steps:

  • Download the forensicsim.zip archive of the latest available release.
  • Extract the .zip archive on your computer.
  • Open the Windows File Explorer and navigate to your Autopsy Python plugin directory. By default, it is located under %AppData%\autopsy\python_modules.
  • Create a new forensicsim folder within the python_modules folder.
  • Copy ms_teams_parser.exe and Forensicsim_Parser.py into the forensicsim directory.
  • Restart Autopsy to activate the module.
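The copy steps above can also be scripted. A minimal sketch (assuming the default python_modules location mentioned above and that the two release files sit in an extracted release folder):

```python
import os
import shutil

# Default Autopsy plugin directory on Windows (resolved at run time).
DEFAULT_MODULES_DIR = r"%AppData%\autopsy\python_modules"

def install_plugin(release_dir: str, python_modules: str) -> str:
    """Copy the two plugin files into a 'forensicsim' module folder
    and return the created folder's path."""
    target = os.path.join(python_modules, "forensicsim")
    os.makedirs(target, exist_ok=True)
    for name in ("ms_teams_parser.exe", "Forensicsim_Parser.py"):
        shutil.copy2(os.path.join(release_dir, name), target)
    return target

# Example: install_plugin(r".\forensicsim", os.path.expandvars(DEFAULT_MODULES_DIR))
```

Remember to restart Autopsy afterwards so the module is picked up.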

You can verify that the module has been installed successfully by performing the following steps:

  • Start Autopsy.
  • Open/Create a case and add a source.
  • You will find the added modules under the menu Tools -> Run Ingest Modules -> Name of the Data Source.

Standalone Parser Usage

The standalone parser script writes all the processed and identified records into a structured JSON file, which can be processed either by the Autopsy plugin or by another application.

The main parser script can be used like this:

```shell
.\dist\ms_teams_parser.exe -f ".\forensicsim-data\john_doe_old_teams\IndexedDB\https_teams.microsoft.com_0.indexeddb.leveldb" -o "john_doe.json"
```

Feel free to use the LevelDB files provided in this repository.

The parser has the following options:

```
Options:
  -f, --filepath PATH    File path to the .leveldb folder of the IndexedDB.
                         [required]
  -o, --outputpath PATH  File path to the processed output.  [required]
  -b, --blobpath PATH    File path to the .blob folder of the IndexedDB.
  --help                 Show this message and exit.
```
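Once the parser has produced its JSON file, the records can be consumed from any language. A minimal Python sketch for summarising the output (the record_type field name is an assumption for illustration; inspect the actual output for the real field names):

```python
import json
from collections import Counter

def summarise_records(json_path: str) -> Counter:
    """Count how often each record type occurs in the parser output."""
    with open(json_path, encoding="utf-8") as fh:
        records = json.load(fh)
    # 'record_type' is a placeholder key; check the real output first.
    return Counter(r.get("record_type", "unknown") for r in records)
```

For example, summarise_records("john_doe.json") would give a quick overview of how many messages, contacts, and other entities were identified.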

Development

Compiling utils\main.py to an executable:

```shell
pyinstaller "main.spec"
```

Utility Scripts for handling LevelDB databases:

dump_leveldb.py

This script dumps a Microsoft Teams LevelDB to a JSON file without processing it further. Simply specify the path to the database and where you want to output the JSON file:

```
usage: dump_leveldb.py [-h] -f FILEPATH -o OUTPUTPATH
dump_leveldb.py: error: the following arguments are required: -f/--filepath, -o/--outputpath
```
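Conceptually, the dump step just serialises every decoded key/value pair as-is. An illustrative sketch of that idea (the real script additionally handles binary values and the .blob folder):

```python
import json

def dump_records(records, output_path: str) -> None:
    """Serialise decoded LevelDB key/value pairs to a JSON file without
    further interpretation. Illustrative only -- not the script's actual code."""
    rows = [{"key": key, "value": value} for key, value in records]
    with open(output_path, "w", encoding="utf-8") as fh:
        json.dump(rows, fh, indent=2, ensure_ascii=False)
```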

Utility Scripts for populating Microsoft Skype and Microsoft Teams

populate_skype.py

A wee script for populating Skype for Desktop in a lab environment. The script can be used like this:

```shell
tools\populate_skype.py -a 0 -f conversation.json
```

populate_teams.py

A wee script for populating Microsoft Teams in a lab environment. The script can be used like this:

```shell
tools\populate_teams.py -a 0 -f conversation.json
```
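The conversation.json schema is defined by the populator scripts themselves. Purely as a hypothetical illustration (all field names here are invented for the example; consult the scripts for the real schema), a population file could be a list of timed messages:

```python
import json

# Hypothetical structure -- field names are illustrative, not the real schema.
conversation = [
    {"timestamp": "2025-01-01T10:00:00", "sender": 0, "message": "Hello!"},
    {"timestamp": "2025-01-01T10:05:00", "sender": 1, "message": "Hi there."},
]

def validate(entries) -> bool:
    """Check that every entry carries the fields this hypothetical schema expects."""
    return all({"timestamp", "sender", "message"} <= set(entry) for entry in entries)
```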

Datasets

This repository comes with two datasets that allow reproducing the findings of this work. The testdata folder contains the LevelDB databases that have been extracted from two test clients. These can be used for benchmarking without having to perform a (lengthy) data population.

The populationdata folder contains JSON files of the communication that has been populated into the testing environment. These can be used to reproduce the experiment from scratch. However, for a rerun, it will be essential to adjust the dates to future dates, as the populator script relies on sufficient breaks between the individual messages.
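Shifting the recorded dates forward for a rerun can be scripted. A sketch assuming each JSON entry carries an ISO-8601 timestamp field (the field name is an assumption; match it to the actual population files), which preserves the breaks between messages because every entry moves by the same offset:

```python
import json
from datetime import datetime, timedelta

def shift_timestamps(entries, days: int):
    """Return a copy of the entries with every timestamp moved the given
    number of days into the future; the gaps between messages are preserved."""
    shifted = []
    for entry in entries:
        ts = datetime.fromisoformat(entry["timestamp"])  # 'timestamp' is assumed
        shifted.append(dict(entry, timestamp=(ts + timedelta(days=days)).isoformat()))
    return shifted
```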


Acknowledgements & Thanks

  • ccl_chrome_indexeddb Python module for enumerating the LevelDB artefacts without external dependencies.
  • Project Gutenberg Parts of Arthur Conan Doyle's book The Adventures of Sherlock Holmes have been used for creating a natural conversation between the two demo accounts.