
LICENSE PLATE DETECTION USING YOLO

plate_detector

85


Installation:

1- Create an Environment:

For Windows:

conda create -n plate_detector python=3.9
activate plate_detector

2- Install Libraries

pip install -r requirements.txt

Project Architecture

Labeling -> Training -> Save Model -> OCR and Pipeline -> RESTful API with Flask

Train Images -> Labeling -> Data Preprocessing -> Train Deep Learning Model -> Model Artifacts -> Make Object Detection -> OCR - Text Extract -> Model Pipeline -> Model Artifacts -> RESTful API with Flask


Image Annotation Tool

https://github.com/heartexlabs/labelImg

Download this repo and follow its instructions.

1

By following the instructions, installation is straightforward.

2


Label Image

After opening the tool, click Open Dir and choose the directory where the dataset is; the tool will automatically load the images. Once an image appears in the GUI, click Create RectBox, draw the bounding box, and then Save the XML file into the dataset folder.

3


Section 1 - XML to CSV

Notebook: 01_xml_to_csv.ipynb

In this section, I will parse the label information from the XML files and save it as a CSV file.
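A minimal sketch of this step, assuming the XML files follow the standard labelImg (Pascal VOC) layout and live in an images/ folder (the folder and column names here are illustrative, not the exact ones from the notebook):

import os
from glob import glob
import pandas as pd
import xml.etree.ElementTree as ET

rows = []
for xml_file in glob(os.path.join('images', '*.xml')):  # assumed dataset folder
    root = ET.parse(xml_file).getroot()
    # labelImg stores one <object> element per bounding box
    for obj in root.findall('object'):
        box = obj.find('bndbox')
        rows.append({
            'filepath': xml_file,
            'xmin': int(box.find('xmin').text),
            'xmax': int(box.find('xmax').text),
            'ymin': int(box.find('ymin').text),
            'ymax': int(box.find('ymax').text),
        })

pd.DataFrame(rows).to_csv('labels.csv', index=False)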

4

5

I will apply these steps to all images.

6


Section 2 - Data Processing

Notebook: 02_Object_Detection.ipynb

2.1- Load and Get Image Filename

In this section, I will first get the image paths in order to read the images. In the last section, I only saved the paths of the XML files, which is not enough to read the images using OpenCV.

7

As you can see in line 6, I am able to get the path of one of the images.

8

2.2- Verify Labeled Data

In this part, I will draw the bounding boxes to verify that the label values are correct.

I start by reading the first image of the image list and drawing its bounding box. Here is the result.
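A rough sketch of this verification, assuming the CSV from Section 1 and an image sitting next to its XML file (the path handling is an assumption for illustration):

import cv2
import pandas as pd

df = pd.read_csv('labels.csv')
row = df.iloc[0]  # first labeled image

# the image is assumed to sit next to its XML file, with a .jpeg extension
img_path = row['filepath'].replace('.xml', '.jpeg')
img = cv2.imread(img_path)

# draw the stored bounding box to confirm the coordinates are correct
cv2.rectangle(img, (int(row['xmin']), int(row['ymin'])),
              (int(row['xmax']), int(row['ymax'])), (0, 255, 0), 2)
cv2.imshow('verify label', img)
cv2.waitKey(0)
cv2.destroyAllWindows()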

9

2.3- Data Preprocessing

Here, I will load the images and convert them into arrays using Keras. I will also normalize both the labels and the images.

10

As you can see, the bounding box coordinates are normalized. I will apply these steps to all coordinates and, in addition, normalize the images.
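A sketch of the normalization step, assuming labels is a list of (image_path, xmin, xmax, ymin, ymax) tuples built from the CSV and 224x224 as the target size used later by the pre-trained network:

import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array

data = []    # normalized images
output = []  # normalized bounding boxes

for (img_path, xmin, xmax, ymin, ymax) in labels:
    image = load_img(img_path)   # original-size image, used to read its dimensions
    w, h = image.size            # PIL gives (width, height)

    # image normalization: resize to 224x224 and scale pixels to [0, 1]
    resized = load_img(img_path, target_size=(224, 224))
    data.append(img_to_array(resized) / 255.0)

    # label normalization: divide each coordinate by the original image size
    output.append((xmin / w, xmax / w, ymin / h, ymax / h))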

11

2.4- Train Test Split

In this section, I will split the data into train and test sets with an 80% train size. Before splitting, the values must be converted to np.array.
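For reference, a minimal version of this split with scikit-learn, continuing from the data and output lists above:

import numpy as np
from sklearn.model_selection import train_test_split

X = np.array(data, dtype=np.float32)    # normalized images
y = np.array(output, dtype=np.float32)  # normalized bounding boxes

x_train, x_test, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state=0)
print(x_train.shape, x_test.shape, y_train.shape, y_test.shape)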

12

2.5- Deep Learning Model

In this section, I will train a model for prediction. I am not going to train a model from scratch; instead I will use transfer learning with pre-trained models such as MobileNetV2, InceptionV3, and InceptionResNetV2.

I start by importing all the necessary libraries that I will be using.

13

2.6- Building Neural Network

14

Some Explanations:

  • inception_resnet.trainable = False means I keep the pre-trained weights frozen for this project.

  • The last Dense layer has 4 units because the model predicts 4 values: the bounding box coordinates.
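A sketch of such a model, here with InceptionResNetV2 as the frozen backbone and a 4-unit sigmoid output for the normalized coordinates (the sizes of the intermediate Dense layers are illustrative and may differ from the notebook):

from tensorflow.keras.applications import InceptionResNetV2
from tensorflow.keras.layers import Dense, Flatten, Input
from tensorflow.keras.models import Model

inception_resnet = InceptionResNetV2(weights='imagenet', include_top=False,
                                     input_tensor=Input(shape=(224, 224, 3)))
inception_resnet.trainable = False  # keep the pre-trained weights frozen

head = Flatten()(inception_resnet.output)
head = Dense(500, activation='relu')(head)
head = Dense(250, activation='relu')(head)
# 4 outputs: normalized xmin, xmax, ymin, ymax, each in [0, 1]
head = Dense(4, activation='sigmoid')(head)

model = Model(inputs=inception_resnet.input, outputs=head)
model.compile(loss='mse', optimizer='adam')
model.summary()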

2.7- Compiling the Model

15

2.8- Training

16

2.9- Opening TensorBoard Log File

In order to check the log file, type this in the terminal:

tensorboard --logdir="logs"

It will serve the dashboard on localhost.

17


Section 3 - Pipeline for Object Detection

Notebook: 03_Make_Prediction.ipynb

In this section, I will load the model and create a prediction pipeline, and also see how the model draws the bounding boxes.

As a first step, I load the model.

3.1- Load Model

18

To test the model, I will open a random image and feed it to the model. The idea is that the model will predict the coordinates of the plate. Basically, the model will return the bounding box.

3.2- Testing the Model

In this part, I will test the model in two different ways: first by using the original image with its original size, and second by using the image resized to 224x224. I will check how this affects the results.
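A sketch of the resized-input path, assuming the trained model was saved as ./models/object_detection.h5 and using a placeholder test image path (both names are assumptions):

import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model('./models/object_detection.h5')  # assumed save path

# read a test image and keep its original size for later denormalization
image = cv2.imread('./test_images/car.jpg')          # assumed test image
h, w = image.shape[:2]

# reshape to the network input: 224x224, scaled to [0, 1], with a batch axis
test = cv2.resize(image, (224, 224)) / 255.0
test = test.reshape(1, 224, 224, 3)

coords = model.predict(test)  # normalized (xmin, xmax, ymin, ymax)
print(coords)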

19

20

3.2.1- Prediction

21

There it is: I got the predicted coordinates, but they are normalized, so I must convert the results back from normalization.

3.2.2- Denormalization

From the Section-2,

22

I normalized the coordinates above, so denormalizing them is simple: I multiply by the original width and height.
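In code, the denormalization is a single multiplication, assuming coords holds the normalized (xmin, xmax, ymin, ymax) prediction and w, h are the original image width and height:

import numpy as np

# undo the normalization from Section 2: coordinates were divided by width/height
denorm = coords * np.array([w, w, h, h])
xmin, xmax, ymin, ymax = denorm[0].astype(int)
print(xmin, xmax, ymin, ymax)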

That’s all!

23

Now, I got the denormalized coordinates.

3.3- Draw Bounding Box

24

The model predicts really well for this image. In order to get better results, I have to feed the neural network a lot more data.

3.4- Pipeline

25

26

As can be seen, the model cannot predict well here.

NOTE: I have retrained my model, and nothing changed. The only way to solve this is to feed the model more data.


Character Recognition - OCR (Optical Character Recognition)

In this section, I will crop the plate and read its characters using PyTesseract. For this I will use the first image, because the model only works well on that one :))
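A sketch covering the crop and OCR steps shown in the next two subsections, assuming image is the original image and using the denormalized coordinates from above:

import pytesseract

# crop the region of interest (the plate) using the denormalized coordinates
roi = image[ymin:ymax, xmin:xmax]

# run OCR on the cropped plate
text = pytesseract.image_to_string(roi)
print(text)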

27

3.5- Crop Bounding Box

28

3.6- Extract Text from the Plate

29

As you can see, PyTesseract cannot extract the text well either, because of the angle of the plate. In situations like this, PyTesseract does not work properly. I may fix this in the next part.


Section 4 - Web App Using Flask

app.py

In this section, I will develop a web app where we can upload car images to detect and read their plates. For this, I will use Flask.

To be clear, I am not planning to explain the HTML side; I will only explain the Python side of Flask.

4.1- Creating First Flask App

In order to test the installation of Flask, I will create a quick app which says “Hello World”.
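Such a minimal app.py typically looks roughly like this:

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello World'

if __name__ == '__main__':
    app.run(debug=True)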

30

Then, by typing python app.py in the cmd, Flask will serve the app on localhost, where we can see our outputs.

And here is the result:

31

As can be seen, everything looks cool.

4.2- Bootstrap Installation

In order to add Bootstrap to your project, you should first create a directory called templates. After this step, you should also create an HTML file called layouts.html (or base.html, it is up to you).

After completing these steps, go to the official Bootstrap website at https://getbootstrap.com/docs/5.2/getting-started/download/ and copy the CDN links for CSS and JS.

32

Then, paste them into the head tags of the layouts.html file you created before.

33

4.3- NAVBAR

34

I designed the navbar inside the layout.html file, because the navbar will be included on every page of this project. It is one of the base elements.

35

4.4- FOOTER

36

4.5- INDEX File

The index file will be my main page, where I can upload the images. It will inherit from the layout.html file to get the NAVBAR, FOOTER, and Bootstrap.

For this I will add {% block body %} {% endblock %} inside layout.html, and I will create a new HTML file called index.html. As I said before, it will be my main page, where I will upload the images and get the results.

37

Here is the result

38

4.6- UPLOAD FORM

39

40

4.7- FILE UPLOAD

In this section, I will code the part that runs when the Upload button is clicked.

I expect an image posted as form data. I need to receive the data and save it into a folder called static.

For that, I will create a folder called static containing another folder called upload.

The most important thing here is that all of these folders must be created inside the working directory where the Flask app is.

41

First, I defined BASE_PATH, which is the current working directory, and then UPLOAD_PATH, where images are saved.

When an image is uploaded, it comes via the POST method, so inside an if statement I check for this and then get the file from the form field image_name, which I defined inside the UPLOAD FORM in the index.html file. If you check the third image above, you will see name=image_name.

Then I got the filename of the image for the next step, which is saving, and as the last step, I called the save method.
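Putting those steps together, the upload route looks roughly like this (the template name and form field match the ones described above; treat this as a sketch rather than the exact code):

import os
from flask import Flask, render_template, request

app = Flask(__name__)

BASE_PATH = os.getcwd()                                  # current working directory
UPLOAD_PATH = os.path.join(BASE_PATH, 'static/upload/')  # where uploads are stored

@app.route('/', methods=['POST', 'GET'])
def index():
    if request.method == 'POST':
        upload_file = request.files['image_name']        # field name from the upload form
        filename = upload_file.filename
        upload_file.save(os.path.join(UPLOAD_PATH, filename))  # save into static/upload
    return render_template('index.html')

if __name__ == '__main__':
    app.run(debug=True)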

And when I upload an image, here is what happens in the static/upload folder.

42

4.8- EDITING THE DEEP LEARNING MODEL

In this section, I will make predictions on the uploaded images using the deep learning model I trained before.

For this process, I need this function,

25

43

I have changed the highlighted parts; let me explain why.

The function takes two parameters: the first is the path and the second is the filename. Since I will upload more than one image for the model to predict, these parameters are needed to save the results properly. Another reason is that I will also save the images with the drawn rectangles.

44

I have also defined one more function for OCR. As you can guess, it will be used in the OCR process. Like the last function, it takes two parameters, and it saves the cropped license plate into the static/roi folder, which I created before.
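A rough sketch of what these two functions do, based on the description above (the model path, the static/predict folder name, and the drawing details are assumptions for illustration):

import cv2
import numpy as np
import pytesseract
from tensorflow.keras.models import load_model

model = load_model('./static/models/object_detection.h5')  # assumed model path

def object_detection(path, filename):
    """Predict the plate box, draw it, and save the result (here under static/predict/)."""
    image = cv2.imread(path)
    h, w = image.shape[:2]
    test = cv2.resize(image, (224, 224)).reshape(1, 224, 224, 3) / 255.0
    coords = model.predict(test) * np.array([w, w, h, h])   # denormalize
    xmin, xmax, ymin, ymax = coords[0].astype(int)
    cv2.rectangle(image, (xmin, ymin), (xmax, ymax), (0, 255, 0), 3)
    cv2.imwrite('./static/predict/{}'.format(filename), image)
    return (xmin, xmax, ymin, ymax)

def OCR(path, filename):
    """Crop the predicted plate, save it to static/roi/, and return its text."""
    xmin, xmax, ymin, ymax = object_detection(path, filename)
    image = cv2.imread(path)
    roi = image[ymin:ymax, xmin:xmax]
    cv2.imwrite('./static/roi/{}'.format(filename), roi)
    return pytesseract.image_to_string(roi)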

4.9- IMPLEMENTING THE DEEP LEARNING MODEL TO THE FLASK APP

First, I import my OCR function: from deeplearning import OCR

I run app.py and upload an image to see the output:

45

As can be seen, the model makes a prediction (as you may remember from previous sections, the model does not work well because it was trained on little data), but my main goal was to learn how to integrate a deep learning model into a Flask app, so this project is still very informative for me.

46

I also want you to notice the highlighted parts of the image: the drawn bounding box, the cropped plate, and the uploaded images are saved into these folders. At least the pipeline works well! :) I wish the model did too..

4.10- DISPLAY OUTPUTS in HTML PAGE

In this section, I will display the original image and the image with the drawn bounding box.

For this, I will add a parameter called upload to render_template. If it is True, the results will be displayed.

47

Here I specified the variables to be used in the index.html file.
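The route then ends roughly like this, passing an upload flag and the result variables for index.html to use (this continues the app.py sketch above; the variable names follow the description, not necessarily the exact code):

@app.route('/', methods=['POST', 'GET'])
def index():
    # same imports, Flask app, and UPLOAD_PATH as in the earlier sketch
    if request.method == 'POST':
        upload_file = request.files['image_name']
        filename = upload_file.filename
        path_save = os.path.join(UPLOAD_PATH, filename)
        upload_file.save(path_save)
        text = OCR(path_save, filename)  # runs detection and saves the drawn/cropped images
        # upload=True tells index.html to render the result tables
        return render_template('index.html', upload=True, upload_image=filename, text=text)
    return render_template('index.html', upload=False)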

48

In the index.html page, I created some tables to hold the images.

49

And there we are able to see the prediction! I will also add the cropped license plate and its text version.

50

I have also decided to try with YOLO.

Let’s see..


Section 5 - License Plate Detection with YOLOv5

Notebook: 05_YOLO.ipynb

The biggest problem in this project is the accuracy of the model; it has really low precision in detecting license plates. To solve this problem, I will use YOLOv5, one of the most powerful object detection models.

One of the biggest differences in YOLO is the bounding box format: the X and Y positions refer to the center of the bounding box.

51 Source: https://haobin-tan.netlify.app/ai/computer-vision/object-detection/coco-json-to-yolo-txt/

52

This is the format the labels must have. I need to prepare the data this way: the center X position, the center Y position, and the width and height of the bounding box.

5.1- Data Preparation

First, I import the libraries I am going to use and read my data, label.csv, using pd.read_csv().

Then, I will get the information from the XML files I created before.

53

Using the xml library, I got the filename, width, and height, and then combined them with the dataframe.

In the next step, I will calculate center_X, center_Y, width and height.
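The conversion itself is just arithmetic on the box corners, normalized by the image size (the column names below are assumed to match the dataframe described above):

import pandas as pd

# df is the dataframe built above, with xmin/xmax/ymin/ymax and the image width/height
# YOLO format: values relative to the image size, using the box center
df['center_x'] = (df['xmin'] + df['xmax']) / 2 / df['width']
df['center_y'] = (df['ymin'] + df['ymax']) / 2 / df['height']
df['bb_width'] = (df['xmax'] - df['xmin']) / df['width']
df['bb_height'] = (df['ymax'] - df['ymin']) / df['height']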

54

Also, the folder structure of YOLO must be like this:

├── data
│   ├── train
│   │   ├── 001.jpeg
│   │   ├── 002.jpeg
│   │   ├── 001.txt
│   │   ├── 002.txt

│   ├── test
│   │   ├── 101.jpeg
│   │   ├── 102.jpeg
│   │   ├── 101.txt
│   │   ├── 102.txt

According to this schema, I need to create two folders called train and test. Inside train, I should put the training images and their label files.

First, I will split the dataframe into train and test. I have 225 images; 200 of them will go into the training folder and the rest into the test folder.

55

Then I will copy each image into its folder.
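A sketch of the copy loop for the train split (the folder names, df_train, and the column names are illustrative; each image gets a .txt file with class 0 and its normalized box values):

import os
from shutil import copy

train_folder = 'data_images/train'          # assumed target folder
os.makedirs(train_folder, exist_ok=True)

cols = ['filename', 'center_x', 'center_y', 'bb_width', 'bb_height']
for fname, cx, cy, bw, bh in df_train[cols].values:   # df_train assumed from the split
    # copy the image next to its YOLO label file
    copy(os.path.join('images', fname), train_folder)

    # one line per box: class index followed by the normalized values
    label_path = os.path.join(train_folder, os.path.splitext(fname)[0] + '.txt')
    with open(label_path, 'a') as f:
        f.write('0 {:.6f} {:.6f} {:.6f} {:.6f}\n'.format(cx, cy, bw, bh))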

56

57

For the test folder, all the steps are the same; just change train to test.

As the next step, I will also create a YAML file, which is required for the training process.
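The data.yaml only needs the dataset paths, the number of classes, and the class names; written from Python it looks roughly like this (the paths and class name are placeholders):

yaml_text = """\
train: ../data_images/train
val: ../data_images/test
nc: 1
names: ['license_plate']
"""

with open('data.yaml', 'w') as f:
    f.write(yaml_text)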

58


5.2- Training

NOTE:

I trained a new model using YOLOv5 in Google Colab because of its free GPU service. Now, I will explain what I did during the process and why.

As a first step, I opened a new notebook and set my current working directory. After that I checked whether I was in the correct directory, which I was. Then I cloned the YOLOv5 repo from GitHub and installed the requirements.

59

After the installation, I changed my current working directory to the yolov5 folder. Then, by typing this magical command, the training process started:

!python train.py --data data.yaml --cfg yolov5s.yaml --batch-size 8 --name Model --epochs 100

60

61

As you can see in this detail, 100 epochs took 0.720 hours, and the best model was saved to runs/train/Model/weights/best.pt

Then, in order to use the model with OpenCV, I exported it in ONNX format.
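With recent YOLOv5 versions, the export is a single command run from the yolov5 folder (the exact flags may vary slightly between releases):

!python export.py --weights runs/train/Model/weights/best.pt --include onnx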

62

I would also like to share some images from validation.

63

64

65

Results are much much much better!

I also want to share the PR curve (the precision-recall curve).

66

5.3- Using the Trained YOLO Model and Prediction

In this part, I will define some functions for making predictions with the trained YOLO model.

I start by defining the input sizes that the YOLO model uses.

67

Here, I will load YOLO Model using OpenCV functions.

68

Then, I pad the image to a square with np.zeros and resize it, as the YOLO input format requires.
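A sketch of these two steps, loading the ONNX model with OpenCV's DNN module and padding the image to a square before creating the blob (the model path, test image path, and 640 input size are assumptions based on the training above):

import cv2
import numpy as np

INPUT_WIDTH, INPUT_HEIGHT = 640, 640

# load the exported YOLOv5 model with OpenCV
net = cv2.dnn.readNetFromONNX('./static/models/best.onnx')  # assumed path

image = cv2.imread('./test_images/car.jpg')                 # assumed test image
h, w = image.shape[:2]

# YOLO expects a square input, so paste the image onto a black square canvas
max_side = max(h, w)
input_image = np.zeros((max_side, max_side, 3), dtype=np.uint8)
input_image[0:h, 0:w] = image

# create the blob (scaled to [0, 1], resized to 640x640, BGR -> RGB) and run a forward pass
blob = cv2.dnn.blobFromImage(input_image, 1 / 255.0, (INPUT_WIDTH, INPUT_HEIGHT),
                             swapRB=True, crop=False)
net.setInput(blob)
detections = net.forward()[0]   # rows of cx, cy, w, h, confidence, class score
print(detections.shape)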

69

And, before the final step, I will get the predictions

70

But there is something I must point out!

In total, each detection has 6 values: center_x, center_y, w, h, confidence, and the class probability.

Using these values, I will filter the detections based on the confidence and probability scores.

71

Here are the outputs:

72

But they are still in np.array format; I should turn them into lists.

73

As the final step, I have to apply Non-Maximum Suppression to the bounding boxes using OpenCV.
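A sketch of the filtering and NMS steps described above, continuing from the detections array (the 0.4, 0.25, and 0.45 thresholds are illustrative):

# scale factors from the 640x640 input back to the padded square image
x_factor = input_image.shape[1] / INPUT_WIDTH
y_factor = input_image.shape[0] / INPUT_HEIGHT

boxes, confidences = [], []
for row in detections:
    confidence = row[4]            # objectness score
    if confidence > 0.4:
        class_score = row[5]       # probability of the single plate class
        if class_score > 0.25:
            cx, cy, bw, bh = row[:4]
            # convert the center format back to a top-left corner box in pixels
            left = int((cx - bw / 2) * x_factor)
            top = int((cy - bh / 2) * y_factor)
            width = int(bw * x_factor)
            height = int(bh * y_factor)
            boxes.append([left, top, width, height])
            confidences.append(float(confidence))

# Non-Maximum Suppression keeps only the best box among overlapping detections
indices = np.array(cv2.dnn.NMSBoxes(boxes, confidences, 0.25, 0.45)).flatten()
# the surviving boxes[i] are the ones drawn on the image in the next step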

74

And now I am ready to draw the bounding boxes.

75

Here we go, it looks great! As we can easily see, YOLO did a great job!

5.4- Editing the Functions (Clean Code)

Now, I will put all the steps together.

76

77

78

79

Now all functions are created, and they are ready for the test!

80

On another image, it also did a great job!

5.5- Extract Text with PyTesseract

Now I will apply pytesseract to extract the text, but I will do this separately from the functions I have already created. I will define another function for these steps.

For this, I am defining one more function called extract_text
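A rough version of extract_text, which guards against empty crops before calling pytesseract (the exact checks in the project may differ):

import pytesseract

def extract_text(image, bbox):
    x, y, w, h = bbox
    roi = image[y:y + h, x:x + w]   # crop the license plate region

    if 0 in roi.shape:              # empty crop, nothing to read
        return ''
    return pytesseract.image_to_string(roi).strip()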

81

I will use this function where the drawing happens, which is the drawings function, because in order to get the text from the license plate I need the ROI (the bounding box region) there. After getting the text, I will show it on the image inside a rectangle.

82

And I will update the prediction function, since I changed the drawings function.

83

And result:

84

85
