
Luigi Pipeline for Decollaging and Uploading FlowCam Images

Overview

This Luigi pipeline processes the large .tif collage images generated by a FlowCam device. It breaks each collage into smaller "vignette" images, embeds metadata (e.g. latitude, longitude, date, and depth) in the resulting images, and uploads the processed images to a specified destination (e.g. an S3 bucket or an external API).

The pipeline is structured as a series of Luigi tasks, each handling a specific step in the workflow:

  1. Reading Metadata: Parses .lst files to extract metadata.
  2. Decollaging: Extracts individual images from large .tif files.
  3. Uploading: Uploads processed images to a specified endpoint.

Pipeline Architecture

The pipeline consists of the following Luigi tasks:

1. ReadMetadata

  • Purpose: Reads the .lst file to extract metadata for image slicing.
  • Input: .lst file generated by the FlowCam device.
  • Output: A .csv file (metadata.csv) containing parsed metadata.
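A minimal sketch of what this task could look like. The .lst filename, delimiter, and use of pandas here are illustrative assumptions, not the project's actual parsing logic:

```python
import luigi
import pandas as pd


class ReadMetadata(luigi.Task):
    """Parse the FlowCam .lst file into metadata.csv."""

    directory = luigi.Parameter()          # folder containing the FlowCam export
    output_directory = luigi.Parameter()   # where pipeline outputs are written

    def output(self):
        return luigi.LocalTarget(f"{self.output_directory}/metadata.csv")

    def run(self):
        # Hypothetical parsing: .lst files are delimited text, but the exact
        # header rows and separator depend on the FlowCam export settings.
        df = pd.read_csv(f"{self.directory}/experiment.lst", sep="|", skiprows=2)
        with self.output().open("w") as f:
            df.to_csv(f, index=False)
```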

2. DecollageImages

  • Purpose: Uses metadata to slice a large .tif image into smaller vignette images.
  • Input: The metadata.csv file generated by ReadMetadata.
  • Output: Individual vignette images with EXIF metadata, saved in the specified output directory.
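In the same spirit, a sketch of the decollage step using Pillow. The bounding-box column names (collage_file, image_x, image_y, image_w, image_h) are hypothetical stand-ins for whatever the .lst metadata actually provides:

```python
import luigi
import pandas as pd
from PIL import Image


class DecollageImages(luigi.Task):
    """Crop each vignette out of the large .tif collage."""

    directory = luigi.Parameter()
    output_directory = luigi.Parameter()

    def requires(self):
        return ReadMetadata(self.directory, self.output_directory)

    def output(self):
        return luigi.LocalTarget(f"{self.output_directory}/decollage_complete.txt")

    def run(self):
        df = pd.read_csv(self.input().path)
        for i, row in df.iterrows():
            collage = Image.open(f"{self.directory}/{row['collage_file']}")
            box = (row["image_x"], row["image_y"],
                   row["image_x"] + row["image_w"],
                   row["image_y"] + row["image_h"])
            # The real task also embeds EXIF fields (latitude, longitude,
            # date, depth) in each vignette before saving.
            collage.crop(box).save(f"{self.output_directory}/vignette_{i}.tif")
        with self.output().open("w") as f:
            f.write("done")
```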

3. UploadDecollagedImagesToS3

  • Purpose: Uploads processed vignette images to a specified S3 bucket or an external API.
  • Input: Processed vignette images generated by DecollageImages.
  • Output: A confirmation file (s3_upload_complete.txt) indicating successful uploads.
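A sketch of the upload step, reading the credentials described in the setup section below. Note the real task may route uploads through the object store API instead; this version talks to S3 directly with boto3 just to show the shape of the task, and the flat key layout is an assumption:

```python
import os
from pathlib import Path

import boto3
import luigi


class UploadDecollagedImagesToS3(luigi.Task):
    """Push every vignette to the target S3 bucket."""

    directory = luigi.Parameter()
    output_directory = luigi.Parameter()
    s3_bucket = luigi.Parameter()

    def requires(self):
        return DecollageImages(self.directory, self.output_directory)

    def output(self):
        return luigi.LocalTarget(f"{self.output_directory}/s3_upload_complete.txt")

    def run(self):
        s3 = boto3.client(
            "s3",
            aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
            aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
            endpoint_url=os.environ["AWS_URL_ENDPOINT"],
        )
        for path in Path(self.output_directory).glob("vignette_*.tif"):
            s3.upload_file(str(path), self.s3_bucket, path.name)
        with self.output().open("w") as f:
            f.write("done")
```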

4. FlowCamPipeline (Wrapper Task)

  • Purpose: A wrapper task that runs all the above tasks in sequence.
  • Dependencies: It manages the dependencies and order of execution of the entire pipeline.
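Because the wrapper is a luigi.WrapperTask, it only needs to require the final task; Luigi resolves the rest of the chain through each task's requires(). A sketch, assuming the task classes above are importable and taking the parameter names from the command-line invocation shown later:

```python
import luigi


class FlowCamPipeline(luigi.WrapperTask):
    """Run the whole pipeline by depending on its final task."""

    directory = luigi.Parameter()
    output_directory = luigi.Parameter()
    experiment_name = luigi.Parameter()
    s3_bucket = luigi.Parameter()

    def requires(self):
        # Requiring the upload task pulls in DecollageImages and
        # ReadMetadata transitively via their own requires() methods.
        return UploadDecollagedImagesToS3(
            directory=self.directory,
            output_directory=self.output_directory,
            s3_bucket=self.s3_bucket,
        )
```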

Setup and Installation

  1. Installation and dependencies

Follow the [main README](README.md) to create a Python environment and install the project's dependencies into it.

  2. Set up JASMIN credentials

    If using S3 for uploading, make sure your AWS credentials are set in a .env file in the root directory:

    AWS_ACCESS_KEY_ID=your_access_key
    AWS_SECRET_ACCESS_KEY=your_secret_key
    AWS_URL_ENDPOINT=your_endpoint_url
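A quick way to confirm the variables load correctly, assuming python-dotenv is available in your environment:

```python
import os

from dotenv import load_dotenv

load_dotenv()  # reads the .env file from the current working directory
print(os.environ["AWS_URL_ENDPOINT"])  # should print your endpoint URL
```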

Running the pipeline

  1. Start the object store API

The pipeline uses the separate object_store_api to manage data in S3.

Please see the README in that project for the different modes of running it. The shortest version is:

  • git clone https://github.com/NERC-CEH/object_store_api.git
  • cd object_store_api
  • pip install -e .[all]
  • Add a .env file with your object storage credentials, as above
  • fastapi run --workers 4 src/os_api/api.py
  2. Start the Luigi Central Scheduler

Passing --logdir is optional; use it if you don't have permission to write to the default /var/log:

luigid --background --logdir=./logs
  3. Run the pipeline script

    python -m luigi --module pipeline.pipeline_decollage FlowCamPipeline \
     --directory /path/to/flowcam/data \
     --output-directory /path/to/output \
     --experiment-name test_experiment \
     --s3-bucket your-s3-bucket-name
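For a quick test run you can skip the luigid step entirely and use Luigi's built-in local scheduler by appending the --local-scheduler flag:

    python -m luigi --module pipeline.pipeline_decollage FlowCamPipeline \
     --directory /path/to/flowcam/data \
     --output-directory /path/to/output \
     --experiment-name test_experiment \
     --s3-bucket your-s3-bucket-name \
     --local-scheduler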