- Traditional DMS solutions rely on post-processing procedures to improve detection. In contrast, RAPID makes end-to-end, frame-level predictions.
- We use a DDPM to generate possible future driver poses and decide whether the driver is distracted via clustering, which enables recognition of actions that were not predefined.
- Privacy protection is a prerequisite for practical deployment. Because RAPID operates on human pose keypoints rather than raw images, it not only protects drivers' privacy but also supports fast inference.
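To make the clustering idea concrete, here is a minimal, self-contained sketch (not the RAPID implementation): cluster vectors of normal driving poses, then score a new pose by its distance to the nearest cluster center. The data shapes (17 keypoints flattened to 34 dimensions) and the synthetic data are assumptions for illustration only.

```python
import numpy as np

# Synthetic "normal driving" poses: 200 samples of 17 keypoints
# flattened to 34-dim vectors (shapes are illustrative, not RAPID's).
rng = np.random.default_rng(0)
normal_poses = rng.normal(0.0, 0.1, size=(200, 34))

def kmeans(x: np.ndarray, k: int = 4, iters: int = 20, seed: int = 0) -> np.ndarray:
    """Plain numpy k-means; returns the cluster centers."""
    r = np.random.default_rng(seed)
    centers = x[r.choice(len(x), k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest center, then recompute centers.
        labels = np.argmin(np.linalg.norm(x[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return centers

centers = kmeans(normal_poses)

def distraction_score(pose: np.ndarray) -> float:
    """Distance from the nearest cluster of normal poses; higher = more anomalous."""
    return float(np.min(np.linalg.norm(centers - pose, axis=1)))

normal_score = distraction_score(rng.normal(0.0, 0.1, size=34))
abnormal_score = distraction_score(np.full(34, 2.0))  # a far-off, unseen pose
```

A pose generated far from every cluster of normal behavior scores high, which is how an *undefined* distraction can still be flagged without a labeled class for it.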
To recognize driver distraction behaviors that are not predefined, we designed a variety of normal and abnormal driving behaviors for our experiments. Normal driving includes not only mechanical operation with both hands on the steering wheel but also permissible non-distracting actions such as adjusting glasses and shifting posture. For abnormal driving, we designed at least ten different behaviors, as shown in the table below.
Our original dataset is in the folder original_sktDD, where each file contains one view of one driver. To reproduce our results, the folder sktDD can be used directly. In our dataset, the PersonID column denotes the camera view (0 = rearview mirror, 1 = passenger-side window, 2 = dashboard view).
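The PersonID encoding above can be decoded when loading a file, for example (the mapping comes from the dataset description; the sample rows and the other columns are hypothetical, not the real file layout):

```python
import csv
import io

# PersonID -> camera view, as documented for the sktDD dataset.
VIEW_NAMES = {0: "rearview mirror", 1: "passenger-side window", 2: "dashboard view"}

# In-memory stand-in for one dataset file; column names besides
# PersonID are illustrative only.
sample = io.StringIO(
    "PersonID,frame,x0,y0\n"
    "0,1,0.51,0.33\n"
    "2,1,0.48,0.35\n"
)
rows = list(csv.DictReader(sample))
views = [VIEW_NAMES[int(r["PersonID"])] for r in rows]
# views == ["rearview mirror", "dashboard view"]
```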
```
conda env create -f environment.yaml
conda activate rapid
python train_RAPID.py --config train.yaml
```
The number of past frames k can be changed in train.yaml (see the discussion in Sec. III.A of the paper).
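A train.yaml fragment for this setting might look like the following; the key name is hypothetical, so check the shipped train.yaml for the actual key:

```yaml
# Hypothetical fragment; the real key name in train.yaml may differ.
k: 8   # number of past frames used to condition future-pose generation
```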
- Testing your own training results

  Fill in `load_ckpt` in `checkpoints/sktDD/train_experiment/config.yaml` and run:

  ```
  python test_RAPID.py --config checkpoints/sktDD/train_experiment/config.yaml
  ```
- Reproducing our results

  Run:

  ```
  python test_RAPID.py --config test.yaml
  ```
- You can view the result images in the `./pictures` directory.
Our code references the repositories below.