Of the three primary crops (rice, wheat, and maize), rice is by far the most significant food crop for people in low- and lower-middle-income nations. Although both rich and poor people eat rice in low-income nations, the poorest consume comparatively little wheat and are therefore strongly affected by rice prices and availability.
Rice is a vital and often irreplaceable staple in many Asian countries, particularly among the impoverished. For Asia's extreme poor, who subsist on less than $1.25 per day on average, rice accounts for about half of food expenditures and a fifth of total family expenditures. This group alone spends $62 billion (in purchasing power parity terms) on rice each year, so rice is central to the food security of many of the world's poor.
1. INTRODUCTION
2. ABOUT THE DATASET
3. IMPORT NEEDED LIBRARIES
4. PREPARING THE DATASET
5. CREATING THE VALIDATION SETS
6. CREATING THE TRAINING SET FOR EACH CLASS
7. CREATING THE DATAFRAME FOR "DATA", "TRAIN" & "VALIDATION", BY RESETTING THE INDEX ACCORDINGLY
8. CHECKING THE VALUE_COUNTS
9. PREPROCESSING THE DATASET
10. VISUALISATION
11. SETTING UP & TESTING THE AUGMENTATIONS
- DEFINING THE TRANSFORM PARAMETER
- GETTING AN IMAGE TO TEST TRANSFORMATIONS
- TESTING THE TRANSFORMATION
12. BUILDING THE DATA GENERATORS
- TRAIN GENERATOR
- BUILDING THE FUNCTION
- VAL GENERATOR
- TEST GENERATOR
13. MODEL BUILDING ARCHITECTURE
14. TRAINING THE MODEL
- EVALUATING THE MODEL ON THE VAL SET
- LOADING THE TRAINED MODEL
15. PLOTTING THE CURVES
16. MAKING A PREDICTION ON THE VAL SET
17. CONFUSION MATRIX & CLASSIFICATION REPORT
18. TESTING OUR MODEL WITH RANDOM PICTURES DOWNLOADED FROM GOOGLE
19. CONCLUSION
This dataset contains 120 JPG images of disease-infected rice leaves. The images are grouped into 3 classes based on the type of disease, with 40 images in each class.
Classes
- Leaf smut
- Brown spot
- Bacterial leaf blight
- NUMPY
- PANDAS
- SKLEARN
- TENSORFLOW
- MATPLOTLIB
- CV2
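A minimal set of imports covering these libraries might look like the following (the exact imports used in the notebook may differ):

```python
# Core libraries assumed throughout this notebook
import numpy as np
import pandas as pd
import cv2
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
```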
- Creating the DataFrame Containing All the Images (a sketch of this step follows below)
- Creating the 3 Lists of Classes
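One way to build this DataFrame, assuming the three class folders sit under a `rice_leaf_diseases/` directory (the directory and folder names here are assumptions; adjust them to the actual dataset location):

```python
import os
import pandas as pd

# Hypothetical dataset layout: one folder per class under DATA_DIR
DATA_DIR = "rice_leaf_diseases"
CLASSES = ["Bacterial leaf blight", "Brown spot", "Leaf smut"]

rows = []
for cls in CLASSES:
    class_dir = os.path.join(DATA_DIR, cls)
    for fname in os.listdir(class_dir):
        if fname.lower().endswith(".jpg"):
            rows.append({"image": os.path.join(class_dir, fname), "label": cls})

data = pd.DataFrame(rows)             # one row per image with its class label
print(data["label"].value_counts())   # should show 40 images per class
```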
Transform the target: here we will one-hot encode the target classes.
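A simple way to do this one-hot encoding (a sketch; the notebook may instead use a Keras or scikit-learn utility) is with pandas, reusing the `data` DataFrame built above:

```python
import pandas as pd

# "label" holds the class name as a string; get_dummies creates one 0/1 column per class
one_hot = pd.get_dummies(data["label"], dtype=float)
data = pd.concat([data, one_hot], axis=1)   # keep the image paths alongside the encoded target
```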
Note
→ These CSV files will allow us to use Pandas chunking to feed images into the generators.
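The idea is that each split is written to its own CSV and read back in small chunks, so only a handful of images are held in memory at a time. A minimal sketch of such a generator (column names, batch size, and image size are assumptions, not the notebook's exact settings):

```python
import cv2
import numpy as np
import pandas as pd

def image_generator(csv_path, batch_size=8, image_size=(224, 224)):
    """Yield (images, labels) batches by reading the split's CSV in chunks."""
    while True:  # Keras-style generators loop indefinitely
        for chunk in pd.read_csv(csv_path, chunksize=batch_size):
            images = []
            for path in chunk["image"]:
                img = cv2.imread(path)
                img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
                img = cv2.resize(img, image_size) / 255.0   # same normalization as in the notebook
                images.append(img)
            # Assumes the remaining columns are the one-hot encoded target
            labels = chunk.drop(columns=["image", "label"]).to_numpy()
            yield np.array(images), labels
```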
val_loss: 1.0603946447372437
val_acc: 0.9333333373069763
We can see from the graph that the loss decreases and the accuracy increases as the number of epochs grows.
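These curves can be plotted from the Keras History object returned by `model.fit(...)`; a sketch (the notebook's actual plotting code may differ):

```python
import matplotlib.pyplot as plt

def plot_curves(history):
    """Plot training/validation loss and accuracy from a Keras History object."""
    # Metric key names differ across TF versions ("acc" vs "accuracy")
    acc_key = "acc" if "acc" in history.history else "accuracy"
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
    ax1.plot(history.history["loss"], label="train loss")
    ax1.plot(history.history["val_loss"], label="val loss")
    ax1.set_xlabel("epoch"); ax1.legend()
    ax2.plot(history.history[acc_key], label="train acc")
    ax2.plot(history.history["val_" + acc_key], label="val acc")
    ax2.set_xlabel("epoch"); ax2.legend()
    plt.show()
```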
[2 1 0 1 0 0 2 1 0 0 2 1 1 1 2]
[2 1 0 1 0 0 2 1 0 0 2 2 1 1 2]
precision recall f1-score support
bacterial_leaf_blight 1.00 1.00 1.00 5
brown_spot 0.83 1.00 0.91 5
leaf_smut 1.00 0.80 0.89 5
accuracy 0.93 15
macro avg 0.94 0.93 0.93 15
weighted avg 0.94 0.93 0.93 15
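The report above can be reproduced from the two label arrays with scikit-learn. Which printed array is the ground truth and which is the prediction is inferred here from the per-class support of 5:

```python
from sklearn.metrics import classification_report, confusion_matrix

class_names = ["bacterial_leaf_blight", "brown_spot", "leaf_smut"]
y_pred = [2, 1, 0, 1, 0, 0, 2, 1, 0, 0, 2, 1, 1, 1, 2]  # model predictions (argmax of probabilities)
y_true = [2, 1, 0, 1, 0, 0, 2, 1, 0, 0, 2, 2, 1, 1, 2]  # validation labels (5 per class)

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=class_names))
```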
- Used 25 images each from the bacterial leaf blight and brown spot classes and 24 from the leaf smut class for training (104 training images).
- Used 5 images from each class for validation (15 validation images).
- Created an image directory.
- Fine-tuned a MobileNet model that was pre-trained on ImageNet (see the sketch below).
- Used the Adam optimizer, categorical cross-entropy loss, and a constant learning rate of 0.0001.
- Used callbacks such as EarlyStopping, ReduceLROnPlateau, ModelCheckpoint, and LearningRateScheduler.
- Did not use the pre-processing method applied to the ImageNet images used to pre-train MobileNet; instead, we normalized all images by dividing pixel values by 255.
- Performed image augmentation using the Albumentations library. Augmentation helped to reduce overfitting, improved model performance, and helped the model generalize better. Finally, we predicted on random images downloaded from Google to check that the model works.
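To make this setup concrete, here is a condensed sketch assuming TensorFlow's Keras API and 224×224 inputs; the exact augmentation transforms, layer configuration, and callback settings used in the notebook may differ:

```python
import tensorflow as tf
import albumentations as A

# Illustrative augmentation pipeline (the notebook's exact transforms may differ)
augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.Rotate(limit=20, p=0.5),
    A.RandomBrightnessContrast(p=0.3),
])

# MobileNet backbone pre-trained on ImageNet, with a new 3-class softmax head
base = tf.keras.applications.MobileNet(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3), pooling="avg")
outputs = tf.keras.layers.Dense(3, activation="softmax")(base.output)
model = tf.keras.Model(base.input, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Assumed callback settings; patience values and filenames are illustrative
callbacks = [
    tf.keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True),
    tf.keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=3),
    tf.keras.callbacks.ModelCheckpoint("best_model.h5", save_best_only=True),
]
```

How much of the base network is unfrozen and how strong the augmentations are kept are tuning choices rather than fixed requirements of this approach.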