The world is currently going through a difficult time: every country is suffering from the pandemic caused by COVID-19, and no fully effective vaccine for the disease is yet available. COVID-19 spreads rapidly from one person to another. To minimize its spread, the WHO (World Health Organization) has issued guidelines such as wearing masks, maintaining social distance, and avoiding public gatherings. However, these guidelines are often not followed, especially in countries such as Bangladesh, India, and Pakistan, and in other developing countries where many people live on daily earnings. With all of this in mind, we set out to develop a surveillance system that monitors whether people are taking adequate precautions against COVID-19. The system checks whether people are wearing basic personal protective gear such as masks, face shields, PPE, and gloves. It can also detect when a person is violating the guidelines and capture a picture of the violator, and it supports live detection to assist the people doing the monitoring.
A wide range of custom functions for YOLOv4, YOLOv4-tiny, YOLOv3, and YOLOv3-tiny implemented in TensorFlow, TFLite and TensorRT.
- Counting Objects (total objects and per class)
- Print Info About Each Detection (class, confidence, bounding box coordinates)
- Crop Detections and Save as New Image
If there is a custom function you want to see created then create an issue in the issues tab and suggest it! If enough people suggest the same custom function I will add it quickly!
```bash
# TensorFlow CPU
conda env create -f conda-cpu.yml
conda activate yolov4-cpu

# TensorFlow GPU
conda env create -f conda-gpu.yml
conda activate yolov4-gpu
```
```bash
# TensorFlow CPU
pip install -r requirements.txt

# TensorFlow GPU
pip install -r requirements-gpu.txt
```
Make sure to use CUDA Toolkit version 10.1, as it is the version compatible with the TensorFlow version used in this repository: https://developer.nvidia.com/cuda-10.1-download-archive-update2
YOLOv4 comes pre-trained and is able to detect 80 classes. For easy demo purposes we will use the pre-trained weights. Download the pre-trained yolov4.weights file: https://drive.google.com/open?id=1cewMfusmPjYWbrnuJRuKhPMwRe_b9PaT
Copy and paste yolov4.weights from your downloads folder into the 'data' folder of this repository.
If you want to use yolov4-tiny.weights, a smaller model that runs detections faster but is less accurate, download the file here: https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights
The only change you need to make within the code for your custom model to work is on line 14 of the 'core/config.py' file. Update the code to point at your custom .names file as seen below. Note: If you are using the pre-trained yolov4, make sure that line 14 remains coco.names.
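For example, the edit might look like the following (a sketch; the exact variable name on line 14 of core/config.py may differ in your copy, and 'custom.names' is a placeholder for your own file):

```python
# core/config.py, line 14 -- point the model at your custom class names file
__C.YOLO.CLASSES = "./data/classes/custom.names"
```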
To implement YOLOv4 using TensorFlow, we first convert the .weights file into the corresponding TensorFlow model files and then run the model.
```bash
# Convert darknet weights to tensorflow
## yolov4
python save_model.py --weights ./data/yolov4.weights --output ./checkpoints/yolov4-416 --input_size 416 --model yolov4

# Run yolov4 tensorflow model
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --images ./data/images/kite.jpg

# Run yolov4 on video
python detect_video.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --video ./data/video/video.mp4 --output ./detections/results.avi

# Run yolov4 on webcam
python detect_video.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --video 0 --output ./detections/results.avi
```
If you want to run yolov3 or yolov3-tiny, change --model yolov3 and use the corresponding .weights file in the commands above.
Note: You can also run the detector on multiple images at once by changing the --images flag like so: --images "./data/images/kite.jpg, ./data/images/dog.jpg"
You can find the outputted image(s) showing the detections saved within the 'detections' folder.
The video saves wherever you point the --output flag. If you don't set the flag, the video with detections on it will not be saved.
The following commands will allow you to run the yolov4-tiny model.
```bash
# yolov4-tiny
python save_model.py --weights ./data/yolov4-tiny.weights --output ./checkpoints/yolov4-tiny-416 --input_size 416 --model yolov4 --tiny

# Run yolov4-tiny tensorflow model
python detect.py --weights ./checkpoints/yolov4-tiny-416 --size 416 --model yolov4 --images ./data/images/kite.jpg --tiny
```
The following commands will allow you to run your custom yolov4 model. (video and webcam commands work as well)
```bash
# custom yolov4
python save_model.py --weights ./data/custom.weights --output ./checkpoints/custom-416 --input_size 416 --model yolov4

# Run custom yolov4 tensorflow model
python detect.py --weights ./checkpoints/custom-416 --size 416 --model yolov4 --images ./data/images/car.jpg
```
Here is how to use all the currently supported custom functions and flags that I have created.
I have created a custom function within the file core/functions.py that can be used to count and keep track of the number of objects detected at a given moment within each image or video. It can be used to count the total objects found or the number of objects detected per class.
To count total objects all that is needed is to add the custom flag "--count" to your detect.py or detect_video.py command.
```bash
# Run yolov4 model while counting total objects detected
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --images ./data/images/dog.jpg --count
```
Running the above command will count the total number of objects detected and output it to your command prompt or shell, as well as on the saved detection, like so:
To count the number of objects for each individual class of your object detector, you need to add the custom flag "--count" and change one line in the detect.py or detect_video.py script. By default, the count_objects function has a parameter called by_class that is set to False. If you change this parameter to True, it will count per class instead.
To count per class make detect.py or detect_video.py look like this:
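The change amounts to passing by_class=True into the count_objects call inside detect.py. As an illustration of the described behavior only (this standalone sketch mimics what core/functions.py's count_objects is said to do; the names and exact return format are assumptions):

```python
from collections import Counter

def count_objects(class_names, by_class=False):
    """Mimic of the described counter: total objects, or a per-class tally."""
    if by_class:
        return dict(Counter(class_names))
    return {"total objects": len(class_names)}

detections = ["dog", "dog", "person"]
print(count_objects(detections))                 # {'total objects': 3}
print(count_objects(detections, by_class=True))  # {'dog': 2, 'person': 1}
```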
Then run the same command as above:
```bash
# Run yolov4 model while counting objects per class
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --images ./data/images/dog.jpg --count
```
Running the above command will count the number of objects detected per class and output it to your command prompt or shell, as well as on the saved detection, like so:
Note: You can add the --count flag to detect_video.py commands as well!
I have created a custom flag called INFO that can be added to any detect.py or detect_video.py command in order to print detailed information about each detection made by the object detector. To print the detailed information to your command prompt, just add the flag --info to any of your commands. The information for each detection includes the class, the confidence of the detection, and the bounding box coordinates of the detection in xmin, ymin, xmax, ymax format.
If you want to edit what information gets printed you can edit the draw_bbox function found within the core/utils.py file. The line that prints the information looks as follows:
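An approximate reconstruction of that print line is shown below; the variable names (class_name, score, coor) and the exact message format are assumptions based on the description above, so check the actual line in core/utils.py before editing it:

```python
# Approximate form of the --info print inside draw_bbox (core/utils.py).
# Sample values stand in for the real detection variables.
class_name, score = "dog", 0.91
coor = [66, 132, 311, 412]  # xmin, ymin, xmax, ymax
info = "Object found: {}, Confidence: {:.2f}, BBox Coords (xmin, ymin, xmax, ymax): {}, {}, {}, {}".format(
    class_name, score, coor[0], coor[1], coor[2], coor[3])
print(info)
```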
Example of info flag added to command:
```bash
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --images ./data/images/dog.jpg --info
```
Resulting output within your shell or terminal:
Note: You can add the --info flag to detect_video.py commands as well!
I have created a custom function within the file core/functions.py that can be applied to any detect.py or detect_video.py command in order to crop the YOLOv4 detections and save each one as its own new image. To crop detections, all you need to do is add the --crop flag to any command. The resulting cropped images will be saved within the detections/crop/ folder.
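At its core, cropping a detection is just slicing the bounding box region out of the image array. A minimal standalone sketch of that idea (the function name and bbox ordering here are assumptions, not the repo's exact code):

```python
import numpy as np

def crop_detection(frame, bbox):
    """Crop one detection region out of a frame.

    bbox is assumed to be (xmin, ymin, xmax, ymax) in pixel coordinates.
    """
    xmin, ymin, xmax, ymax = [int(v) for v in bbox]
    return frame[ymin:ymax, xmin:xmax]

frame = np.zeros((416, 416, 3), dtype=np.uint8)  # stand-in for a real image
crop = crop_detection(frame, (50, 60, 150, 200))
print(crop.shape)  # (140, 100, 3)
```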
Example of crop flag added to command:
```bash
python detect.py --weights ./checkpoints/yolov4-416 --size 416 --model yolov4 --images ./data/images/dog.jpg --crop
```
Here is an example of one of the resulting cropped detections from the above command.
I have created a custom function to feed Tesseract OCR the bounding box regions of license plates found by my custom YOLOv4 model in order to read and extract the license plate numbers. Thorough preprocessing is done on the license plate in order to correctly extract the license plate number from the image. The function that is in charge of doing the preprocessing and text extraction is called recognize_plate and can be found in the file core/utils.py.
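The preprocessing idea can be sketched roughly as follows. This is a standalone illustration only, using a plain NumPy grayscale conversion and threshold; the actual recognize_plate function in core/utils.py performs more elaborate steps (resizing, blurring, morphology, contour filtering) before handing the region to Tesseract:

```python
import numpy as np

def preprocess_plate(region):
    """Rough sketch of plate preprocessing before OCR: grayscale, then
    binarize so the characters stand out from the background."""
    gray = region.mean(axis=2)  # collapse the color channels to grayscale
    # Simple global threshold; the real pipeline uses adaptive methods.
    binary = (gray > gray.mean()).astype(np.uint8) * 255
    return binary

region = np.random.randint(0, 256, (60, 200, 3), dtype=np.uint8)
binary = preprocess_plate(region)
print(np.unique(binary))  # contains only 0 and 255 after binarization
```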
Disclaimer: In order to run Tesseract OCR, you must first download the binary files and set them up on your local machine. Please do so before proceeding, or the commands will not run as expected!
Official Tesseract OCR Github Repo: tesseract-ocr/tessdoc
Great Article for How To Install Tesseract on Mac or Linux Machines: PyImageSearch Article
For Windows I recommend: Windows Install
Once you have Tesseract properly installed, you can move on. If you don't have a trained YOLOv4 model to detect license plates, feel free to use one that I have trained. It is not perfect, but it works well. Download the license plate detector model and learn how to save and run it with TensorFlow here
The license plate recognition works wonders on images. All you need to do is add the --plate flag on top of the command to run the custom YOLOv4 model.
Try it out on this image in the repository!
```bash
# Run License Plate Recognition
python detect.py --weights ./checkpoints/custom-416 --size 416 --model yolov4 --images ./data/images/car2.jpg --plate
```
The output from the above command should print any license plate numbers found to your command terminal, as well as output and save the following image to the detections folder.
You should be able to see the license plate number printed on the screen above the bounding box found by YOLOv4.