
YOLOPoint

Joint Keypoint and Object Detection

This is a complementary repository to our paper YOLOPoint. The code is built on top of pytorch-superpoint and YOLOv5.

(Figures: example detection output and filtered trajectory.)

Installation

Requirements

  • python >= 3.8
  • pytorch >= 1.10
  • accelerate >= 1.14 (needed only for training)
  • rospy (optional, only for deployment with ROS)

$ pip install -r requirements.txt

Optional step for deployment with ROS:

$ pip install -r requirements_ros.txt

Huggingface accelerate is a wrapper used mainly for multi-GPU and half-precision training. You can adjust its settings prior to training (recommended for faster training) or just skip this step:

$ accelerate config
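
If you skip the interactive configuration, the most common settings can also be passed directly on the command line (these are standard accelerate launch flags; the values below are only an example):

$ accelerate launch --num_processes 2 --mixed_precision fp16 src/train.py --config configs/coco.yaml --exper_name my_experiment --model YOLOPoint --version s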

Pretrained Weights

Download COCO pretrained and KITTI fine-tuned weights:

  • COCO pretrained: n, s, m, l
  • KITTI fine-tuned: n, s, m
  • COCO (experimental): s
  • KITTI (experimental): s

[New] Experimental weights follow a lighter YOLOv8-like architecture and were trained with an InfoNCE loss; they seem to give improved accuracy for keypoint matching. However, this has not yet been thoroughly evaluated and is not part of the paper.

Data Organization

To pretrain weights on COCO, your file structure should look like this:

YOLOPoint/
├── datasets/
│   ├── coco/
│   │   ├── images/
│   │   │    ├── train/
│   │   │    └── val/
│   │   ├── labels/
│   │   │    ├── train/
│   │   │    └── val/
│   │   └── coco_points/

Be sure to use the COCO2017 split! Also store your pseudo ground truth keypoint labels in ./datasets/coco/coco_points. See Keypoint Labels for more info on obtaining ground truth keypoints.
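
A quick way to sanity-check the layout before training is a small script run from the YOLOPoint root (paths as above; this is only a convenience sketch, not part of the repository):

from pathlib import Path

# Check that the COCO directory layout described above is in place.
root = Path("datasets/coco")
expected = [
    root / "images" / "train",
    root / "images" / "val",
    root / "labels" / "train",
    root / "labels" / "val",
    root / "coco_points",
]
missing = [p for p in expected if not p.is_dir()]
if missing:
    raise FileNotFoundError("Missing: " + ", ".join(str(p) for p in missing))
print("COCO directory layout looks OK")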

Keypoint Labels

Generate your own pseudo ground truth keypoint labels with

$ python src/export_homography.py --config configs/coco_export.yaml --weights weights/YOLOPointS.pth.tar --output_dir path/to/output_dir

Alternatively, use the shell script to download them and place them in the appropriate directory:

$ sh download_coco_points.sh
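
Whichever way you obtain them, it can help to inspect a label file before training. The exact format is defined by export_homography.py; the sketch below only assumes one .npz file per image containing a "pts" array of keypoint coordinates (as in pytorch-superpoint) and is meant as a starting point:

import numpy as np
from pathlib import Path

# Assumption: one .npz file per image with a "pts" array of keypoints.
label_dir = Path("datasets/coco/coco_points")
sample = next(label_dir.glob("*.npz"))
pts = np.load(sample)["pts"]
print(sample.name, pts.shape)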

Training

  1. Adjust your config file as needed before launching the training script.
  2. The following command will train YOLOPointS and save weights to logs/my_experiment/checkpoints. If you would like to try training the new experimental model, set the --model flag to YOLOPointv52.
$ accelerate launch src/train.py --config configs/coco.yaml --exper_name my_experiment --model YOLOPoint --version s
  3. Broadcast the TensorBoard logs.
$ sh logs/my_experiment/run_th.sh
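
The run script broadcasts the TensorBoard logs; if it does not match your setup, you can also point TensorBoard at the experiment's log directory directly (assuming logs are written under logs/my_experiment):

$ tensorboard --logdir logs/my_experiment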

Inference

Example if you're not using ROS:

$ python src/demo.py --config configs/inference.yaml --weights weights/YOLOPointS.pth.tar --source /path/to/image/folder/or/mp4
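
demo.py takes care of visualization. If you want to match keypoints between two frames yourself, a common post-processing step is mutual nearest-neighbour matching on the descriptors. The sketch below is a generic example, not part of the repository's API; it assumes L2-normalised descriptor tensors of shape (N, D):

import torch

def mutual_nn_match(desc_a, desc_b):
    # Cosine similarity between all descriptor pairs (descriptors assumed L2-normalised).
    sim = desc_a @ desc_b.t()                  # (N, M)
    nn_ab = sim.argmax(dim=1)                  # best match in B for each point in A
    nn_ba = sim.argmax(dim=0)                  # best match in A for each point in B
    idx_a = torch.arange(desc_a.shape[0])
    mutual = nn_ba[nn_ab] == idx_a             # keep only mutually consistent pairs
    return idx_a[mutual], nn_ab[mutual]

# Stand-in descriptors for two frames (replace with the network's outputs):
a = torch.nn.functional.normalize(torch.randn(200, 256), dim=1)
b = torch.nn.functional.normalize(torch.randn(180, 256), dim=1)
ia, ib = mutual_nn_match(a, b)
print(f"{len(ia)} mutual matches")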

Example if you are using ROS:

First build the package and start a roscore:

$ catkin build yolopoint
$ roscore

You can either stream images from a directory or subscribe to a topic. To visualize object bounding boxes and tracked points, set the --visualize flag.

$ rosrun yolopoint demo_ROS.py src/configs/kitti_inference.yaml directory '/path/to/image/folder' --visualize

Alternatively, you can publish bounding boxes and points and visualize them in another node. Before publishing, the keypoint descriptors are flattened into a single vector and then unflattened in the listener node, as sketched after the commands below.

$ rosrun yolopoint demo_ROS.py src/configs/kitti_inference.yaml ros '/image/message/name' --publish
$ rosrun yolopoint demo_ROS_listener.py '/image/message/name'
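
A minimal sketch of the flatten/unflatten idea with hypothetical shapes (the actual message handling lives in demo_ROS.py and demo_ROS_listener.py):

import numpy as np

# Hypothetical example: N keypoints with D-dimensional descriptors.
N, D = 500, 256
descriptors = np.random.rand(N, D).astype(np.float32)

# Publisher side: flatten to 1-D so it fits a flat array field in the message.
flat = descriptors.reshape(-1)

# Listener side: recover the (N, D) matrix; D must be known or sent with the message.
restored = flat.reshape(-1, D)
assert np.array_equal(descriptors, restored)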

Citation

@InProceedings{backhaus2023acivs-yolopoint,
  author    = {Anton Backhaus and Thorsten Luettel and Hans-Joachim Wuensche},
  title     = {{YOLOPoint: Joint Keypoint and Object Detection}},
  year      = {2023},
  month     = aug,
  pages     = {112--123},
  note      = {ISBN: 978-3-031-45382-3},
  publisher = {Springer},
  series    = {Lecture Notes in Computer Science},
  volume    = {14124}
}
