This repository contains the code developed by the Saxton Transportation Operations Laboratory (STOL) for object detection, tracking, and conflict point estimation on an edge device using live video streams. We use OpenCV to capture video from various sources (cameras, video files, or RTSP streams), a TensorRT-optimized YOLOv8 model for object detection, and BYTETrack for object tracking. Key features of this codebase include:
- Object detection
- Object tracking
- Calculation of speed, heading, and distance
- Conflict point estimation and time-to-collision (TTC) calculation (see the sketch after this list)
- Output transmission via UDP protocol
- Hosting a web app for visualizing results through a web interface
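For reference, the TTC feature above can be illustrated with a constant-velocity model. The sketch below is illustrative only; the function name and the 2-D position/velocity inputs are assumptions, not the exact formulation used in the code:

```python
import numpy as np

def time_to_collision(p1, v1, p2, v2, eps=1e-6):
    """Constant-velocity TTC sketch: time until two tracked objects
    reach their closest approach, given positions (m) and velocities (m/s).
    Returns None if the objects are not converging."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)  # relative position
    dv = np.asarray(v2, float) - np.asarray(v1, float)  # relative velocity
    closing = float(np.dot(dp, dv))
    speed_sq = float(np.dot(dv, dv))
    if closing >= 0.0 or speed_sq < eps:
        return None  # diverging, or negligible relative motion
    return -closing / speed_sq

# Example: two vehicles 20 m apart closing head-on at 10 m/s -> TTC = 2.0 s
print(time_to_collision((0, 0), (10, 0), (20, 0), (0, 0)))
```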
The edge device used for this project is the NVIDIA Jetson Orin Nano. Note: different versions of MUST sensors are sold with different Jetson devices; please ensure your MUST sensor includes this platform or a more capable one.

Software requirements:
- NVIDIA CUDA: 12.2
- NVIDIA TensorRT: >= 8.6.2
Clone the repository and its submodules:

```bash
git clone --recurse-submodules https://github.com/monjurulkarim/YOLOv8_Object_Tracking_TensorRT.git
```
Create and activate a new conda environment:

```bash
conda create -n yolov8_ds python=3.8
conda activate yolov8_ds
```
Go to `refs/YOLOv8-TensorRT` and install the requirements for exporting models:

```bash
cd refs/YOLOv8-TensorRT
pip3 install -r requirements.txt
pip3 install tensorrt easydict pycuda lap cython_bbox
pip3 install flask
```
Install `python3-libnvinfer`:

```bash
sudo apt-get install python3-libnvinfer
```
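After installation, a quick sanity check inside the activated environment confirms that the TensorRT Python bindings are importable and meet the version requirement:

```python
# Confirm the TensorRT Python bindings and version.
import tensorrt as trt

print("TensorRT:", trt.__version__)  # expect >= 8.6.2
assert tuple(int(x) for x in trt.__version__.split(".")[:2]) >= (8, 6)
```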
The following steps (1-3) are optional: we have already shared our converted weights, which are ready to use, so you don't need to convert the weights yourself.
1. Save the YOLOv8 weights in the `models/to_export` folder.
2. Export the YOLOv8 ONNX model. In `refs/YOLOv8-TensorRT`, run the following command:

```bash
python3 export-det.py \
    --weights ../../models/to_export/yolov8n.pt \
    --iou-thres 0.65 \
    --conf-thres 0.25 \
    --topk 100 \
    --opset 11 \
    --sim \
    --input-shape 1 3 640 640 \
    --device cuda:0
```

The output `.onnx` model will be saved in the `models/to_export` folder; move it to the `models/onnx` folder:

```bash
mv ../../models/to_export/yolov8n.onnx ../../models/onnx/yolov8n.onnx
```
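Optionally, you can validate the exported graph before building the engine. A minimal sketch, assuming the `onnx` package is available in this environment:

```python
# Optional: validate the exported ONNX graph before building the engine
# (assumes the `onnx` package is installed in this environment).
import onnx

model = onnx.load("../../models/onnx/yolov8n.onnx")
onnx.checker.check_model(model)  # raises if the graph is malformed
print([i.name for i in model.graph.input])  # expect a single 1x3x640x640 image input
```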
3. Export the YOLOv8 TensorRT model. In `refs/YOLOv8-TensorRT`, run the following command:

```bash
python3 build.py \
    --weights ../../models/onnx/yolov8n.onnx \
    --iou-thres 0.65 \
    --conf-thres 0.25 \
    --topk 100 \
    --fp16 \
    --device cuda:0
```

The output `.engine` model will be saved in the `models/onnx` folder; move it to the `models/engine` folder:

```bash
mv ../../models/onnx/yolov8n.engine ../../models/engine/yolov8n.engine
```
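Optionally, confirm that the serialized engine deserializes on the target device. A minimal sketch using the standard TensorRT Python API, with the path matching the `mv` command above:

```python
# Optional: confirm the engine file deserializes on this device.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("../../models/engine/yolov8n.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
assert engine is not None
print("bindings:", engine.num_bindings)
```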
Build OpenCV:

```bash
bash build_opencv.sh
```
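Once the build finishes, you can verify that Python picks up the custom OpenCV build and, assuming it was built with CUDA support, that the GPU is visible:

```python
# Verify the OpenCV build is importable and CUDA-enabled
# (assumes build_opencv.sh produced a CUDA build).
import cv2

print("OpenCV:", cv2.__version__)
print("CUDA devices:", cv2.cuda.getCudaEnabledDeviceCount())  # expect >= 1 on the Jetson
```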
- `srcs/yolov8_bytetrack_starlab.py`: the main script responsible for running and generating all required results. Users specify parser arguments via flags (e.g., `--show`, `--vid`). Running this script starts the web app, allowing users to visualize the results in a browser within the local network.
- `srcs/template/index.html`: the HTML structure of the web app used for visualization.
- `srcs/config.py`: contains the configuration parameters.
- `models/`: contains the model weights.

For more details, see the additional comments in the code.
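For orientation, web apps like this one commonly stream annotated frames to the browser using the Flask MJPEG pattern. The sketch below is a generic illustration of that pattern (the route name, frame source, and boundary string are assumptions), not the repository's exact code:

```python
# Generic Flask MJPEG streaming pattern (illustrative sketch only).
from flask import Flask, Response
import cv2

app = Flask(__name__)

def mjpeg_frames(source=0):
    cap = cv2.VideoCapture(source)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, jpg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        # Each JPEG frame is one part of a multipart HTTP response.
        yield (b"--frame\r\nContent-Type: image/jpeg\r\n\r\n"
               + jpg.tobytes() + b"\r\n")

@app.route("/video_feed")
def video_feed():
    return Response(mjpeg_frames(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")
```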
**Go to `srcs` folder**
```bash
cd srcs
```
**Run YOLOv8 + BYTETrack**

To run with live data:

```bash
python3 yolov8_bytetrack_stol.py
```
Running this code will start a web app that can be accessed from any browser. Please make sure to set your device's IP address in `templates/index.html` at line 71.
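The script also transmits results over UDP (see the features list above). A minimal listener such as the sketch below can be used to inspect the stream; the port and payload format here are assumptions, so check `srcs/config.py` for the actual destination host and port:

```python
# Minimal UDP listener for inspecting the output stream.
# The port is a hypothetical value -- check srcs/config.py for
# the host/port the script actually transmits to.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5005))  # hypothetical port
while True:
    data, addr = sock.recvfrom(65535)
    print(addr, data[:200])
```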
To run with raw, pre-recorded data:

```bash
python3 yolov8_bytetrack_stol.py --vid $(raw_video_path) --udp_save --output_filename $(desired_udp_file_header) --recorded_time --timestamp_file $(timestamps_csv_path)
```