This Luigi pipeline is designed to process large `.tif` images generated by a FlowCam device. The pipeline breaks these large images down into smaller "vignette" images, adds metadata (e.g., latitude, longitude, date, and depth) to the resulting images, and then uploads the processed images to a specified destination (e.g., an S3 bucket or an external API).
The pipeline is structured as a series of Luigi tasks, each handling a specific step in the workflow:
- Reading metadata: Parses `.lst` files to extract metadata.
- Decollaging: Extracts individual images from large `.tif` files.
- Uploading: Uploads processed images to a specified endpoint.
The pipeline consists of the following Luigi tasks:
- `ReadMetadata`
  - Purpose: Reads the `.lst` file to extract metadata for image slicing.
  - Input: The `.lst` file generated by the FlowCam device.
  - Output: A `.csv` file (`metadata.csv`) containing the parsed metadata.
- `DecollageImages`
  - Purpose: Uses the metadata to slice a large `.tif` image into smaller vignette images.
  - Input: The `metadata.csv` file generated by `ReadMetadata`.
  - Output: Individual vignette images with EXIF metadata, saved in the specified output directory.
- Upload task
  - Purpose: Uploads the processed vignette images to a specified S3 bucket or an external API.
  - Input: The processed vignette images generated by `DecollageImages`.
  - Output: A confirmation file (`s3_upload_complete.txt`) indicating successful uploads.
- `FlowCamPipeline`
  - Purpose: A wrapper task that runs all of the above tasks in sequence.
  - Dependencies: Manages the dependencies and order of execution of the entire pipeline (a minimal sketch of this structure is shown after this list).
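To make the task relationships concrete, here is a minimal, hedged sketch of how such a chain can be wired up in Luigi. The class names `ReadMetadata`, `DecollageImages` and `FlowCamPipeline` and the parameter names mirror those mentioned in this README; the upload task's class name, the marker-file targets, and the exact parameter wiring are illustrative assumptions, not the actual implementation.

```python
import luigi


class ReadMetadata(luigi.Task):
    """Parse the FlowCam .lst file and write metadata.csv."""

    directory = luigi.Parameter()

    def output(self):
        return luigi.LocalTarget(f"{self.directory}/metadata.csv")

    def run(self):
        ...  # parse the .lst file and write the parsed rows out as CSV


class DecollageImages(luigi.Task):
    """Slice the collage .tif into vignette images with EXIF metadata."""

    directory = luigi.Parameter()
    output_directory = luigi.Parameter()
    experiment_name = luigi.Parameter()

    def requires(self):
        return ReadMetadata(directory=self.directory)

    def output(self):
        # A marker file is assumed here; the real task may track its outputs differently
        return luigi.LocalTarget(f"{self.output_directory}/decollage_complete.txt")

    def run(self):
        ...  # crop each vignette and save it with lat/lon/date/depth EXIF tags


class UploadImages(luigi.Task):
    """Upload vignettes to object storage (this class name is a placeholder)."""

    directory = luigi.Parameter()
    output_directory = luigi.Parameter()
    experiment_name = luigi.Parameter()
    s3_bucket = luigi.Parameter()

    def requires(self):
        return DecollageImages(
            directory=self.directory,
            output_directory=self.output_directory,
            experiment_name=self.experiment_name,
        )

    def output(self):
        return luigi.LocalTarget(f"{self.output_directory}/s3_upload_complete.txt")

    def run(self):
        ...  # upload each vignette, then write the confirmation file


class FlowCamPipeline(luigi.WrapperTask):
    """Wrapper task that triggers the whole chain."""

    directory = luigi.Parameter()
    output_directory = luigi.Parameter()
    experiment_name = luigi.Parameter()
    s3_bucket = luigi.Parameter()

    def requires(self):
        return UploadImages(
            directory=self.directory,
            output_directory=self.output_directory,
            experiment_name=self.experiment_name,
            s3_bucket=self.s3_bucket,
        )
```

With this shape, Luigi only re-runs a task whose `output()` target is missing, which is what makes the pipeline resumable.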
- Installation and dependencies

  Follow the [main README](README.md) to create a Python environment and install the project's dependencies into it.
- Setup JASMIN credentials

  If using S3 for uploading, make sure your AWS credentials are set in a `.env` file in the root directory:

  ```
  AWS_ACCESS_KEY_ID=your_access_key
  AWS_SECRET_ACCESS_KEY=your_secret_key
  AWS_URL_ENDPOINT=your_endpoint_url
  ```
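  Assuming `boto3` and `python-dotenv` are available, a client built from these variables might look like the following sketch (the actual client setup in the pipeline code may differ):

  ```python
  import os

  import boto3
  from dotenv import load_dotenv

  load_dotenv()  # read the .env file in the project root

  s3 = boto3.client(
      "s3",
      aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
      aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
      endpoint_url=os.environ["AWS_URL_ENDPOINT"],
  )
  # Example upload; the file name and bucket name are placeholders
  s3.upload_file("vignette_001.tif", "your-s3-bucket-name", "vignette_001.tif")
  ```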
- Start the object store API

  The pipeline uses the separate `object_store_api` project to manage data in S3. Please see the README in that project for the different modes of running it. The shortest version is:

  ```
  git clone https://github.com/NERC-CEH/object_store_api.git
  cd object_store_api
  pip install -e .[all]
  ```

  Add a `.env` file with your credentials for object storage, as above, then run:

  ```
  fastapi run --workers 4 src/os_api/api.py
  ```
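  Once the API is running, FastAPI's interactive docs should be available (by default at http://localhost:8000/docs), which is a quick way to confirm the service is up.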
- Start the Luigi Central Scheduler

  The `--logdir` option is optional; pass it if you don't have permission to write to `/var/log`:

  ```
  luigid --background --logdir=./logs
  ```
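  Once started, the central scheduler also serves Luigi's web UI (by default at http://localhost:8082), where you can follow task progress while the pipeline runs.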
- Run the Pipeline Script

  ```
  python -m luigi --module pipeline.pipeline_decollage FlowCamPipeline \
      --directory /path/to/flowcam/data \
      --output-directory /path/to/output \
      --experiment-name test_experiment \
      --s3-bucket your-s3-bucket-name
  ```
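  For a quick test without the central scheduler, the same command can be run with Luigi's built-in `--local-scheduler` flag appended (the paths and names here are placeholders for your own data):

  ```
  python -m luigi --module pipeline.pipeline_decollage FlowCamPipeline \
      --directory /path/to/flowcam/data \
      --output-directory /path/to/output \
      --experiment-name test_experiment \
      --s3-bucket your-s3-bucket-name \
      --local-scheduler
  ```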