
bellingcat/smart-image-sorter


Smart Image Sorter 🖼️📁


This repository provides a Python script and a graphical user interface for zero-shot image classification using open-source models from Hugging Face.

The script organises images into labelled folders based on the classification results. It can be used as a command-line tool or through a Jupyter Notebook interface.

You can test this tool with a set of 32 images extracted by Bellingcat from Telegram groups. The images are available in the imgs/ folder.
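Zero-shot classification scores each image against your list of candidate labels and picks the best match. A minimal sketch of the idea, using a hypothetical helper and the Hugging Face transformers pipeline (the checkpoint name is illustrative, not necessarily the script's default):

```python
# Hypothetical helper: pick the top-scoring label from pipeline output.
def top_label(results):
    """results: list of {"label": ..., "score": ...} dicts, as returned
    by a zero-shot image classification pipeline."""
    return max(results, key=lambda r: r["score"])["label"]

# Example usage, commented out because it downloads a model:
# from transformers import pipeline
# classifier = pipeline("zero-shot-image-classification",
#                       model="openai/clip-vit-large-patch14")
# results = classifier("imgs/example.jpg", candidate_labels=["cat", "object"])
# print(top_label(results))
```

The image is never trained on your labels; the model simply compares the image embedding against each label's text embedding, which is why any comma-separated label list works.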

Features

  • Zero-shot image classification using Hugging Face's models.

  • Supports batch processing of images.

  • Organises images into folders based on their labels.

  • Option to copy or move images after classification.

  • Generates a CSV file with classification results.
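The sorting and reporting steps above can be sketched as follows. This is an illustration of the behaviour, not the script's actual implementation; the function and CSV column names are made up:

```python
import csv
import shutil
from pathlib import Path

def sort_images(predictions, destination, operation="copy"):
    """Copy or move each image into a folder named after its label.

    predictions: list of (image_path, label) pairs.
    Returns rows for the CSV report.
    """
    rows = []
    for image_path, label in predictions:
        target_dir = Path(destination) / label
        target_dir.mkdir(parents=True, exist_ok=True)
        if operation == "copy":
            shutil.copy2(image_path, target_dir)
        else:
            shutil.move(str(image_path), str(target_dir))
        rows.append({"image": str(image_path), "label": label})
    return rows

def write_report(rows, output_file="output.csv"):
    """Write classification results to a CSV file."""
    with open(output_file, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["image", "label"])
        writer.writeheader()
        writer.writerows(rows)
```

Choosing "copy" keeps the source directory intact, which is useful when you want to re-run the classifier with different labels.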

Instructions

Requires Python 3.10.

  1. Clone the repository.

  2. Follow the instructions to install PyTorch: https://pytorch.org/get-started/locally/

  3. Install the remaining packages with pip install -r requirements.txt, or with Poetry: poetry install.

  4. Run the script, replacing the arguments as needed:

python classifier.py --source="imgs/" --destination="labelled/" --labels="cat,object" --operation="copy" --output_file="output.csv" --batch_size=32 --verbose=True

Arguments

--source: Path to the source directory containing the images. Default is imgs/.

--destination: Path to the destination directory for classified images. Default is labelled/.

--labels: Comma-separated list of labels for classification.

--operation: Operation to perform on images after classification: copy or move. Default is copy.

--model: Model name for zero-shot classification. If not provided, the most downloaded model for zero-shot image classification on Hugging Face will be used.

--output_file: Path to the CSV file for saving classification results. Default is output.csv.

--batch_size: Number of images to process in a batch. Default is 32.

--verbose: Show detailed output, including progress and the model used. Default is True.
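The flags documented above correspond to an argument parser along these lines (a sketch of the documented interface, not the script's exact source; whether --labels is required is an assumption):

```python
import argparse

def build_parser():
    """Build a parser mirroring the documented command-line flags."""
    parser = argparse.ArgumentParser(
        description="Zero-shot image sorter (illustrative sketch)")
    parser.add_argument("--source", default="imgs/")
    parser.add_argument("--destination", default="labelled/")
    parser.add_argument("--labels", required=True,
                        help="Comma-separated labels, e.g. 'cat,object'")
    parser.add_argument("--operation", choices=["copy", "move"],
                        default="copy")
    parser.add_argument("--model", default=None,
                        help="Defaults to the most downloaded model")
    parser.add_argument("--output_file", default="output.csv")
    parser.add_argument("--batch_size", type=int, default=32)
    parser.add_argument("--verbose", default=True)
    return parser

# Example: parse a minimal invocation.
args = build_parser().parse_args(["--labels", "cat,object"])
```

Splitting args.labels on commas then yields the candidate label list passed to the classifier.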

Graphical User Interface (GUI)

You can run the tool entirely from the command line, but if you want to use the GUI locally, make sure to enable the Jupyter Notebook extension for widgets:

jupyter nbextension enable --py widgetsnbextension --sys-prefix
jupyter nbextension install --py widgetsnbextension --user
jupyter nbextension enable widgetsnbextension --user --py

Alternatively, you can run the tool in your browser using Google Colab, which handles the GUI automatically. You can refer to our guide here on how to do this.