This project uses a MobileNetV2 deep learning model to detect strawberry diseases. The original code has been modularized for professional deployment, including Dockerization.
- `models/`: contains the MobileNetV2 model code.
- `scripts/`: contains training, evaluation, and utility scripts.
- `notebooks/`: contains the original Jupyter notebook.
- `main.py`: runs the primary functionality for making predictions.
- `Dockerfile` and `.dockerignore`: used for containerization.
- Docker
- Python 3.9
- Clone the repository:

  ```
  git clone https://github.com/yourusername/MobileNetV2-Project.git
  cd MobileNetV2-Project
  ```

- Install dependencies:

  ```
  pip install -r requirements.txt
  ```

- Build and run the Docker container:

  ```
  docker build -t mobilenetv2-project .
  docker run -p 8080:8080 mobilenetv2-project
  ```
To train the model on your dataset, use the following command:

```
python scripts/train.py --data data/dataset_path --epochs 300 --batch_size 16 --learning_rate 0.001
```
- `--data`: Path to the training dataset.
- `--epochs`: Number of training epochs (default is 300).
- `--batch_size`: Batch size for training (default is 16).
- `--learning_rate`: Learning rate for the optimizer (default is 0.001).
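The flags above could be parsed with `argparse`; the following is a minimal sketch of how `scripts/train.py` might wire them up (the actual script may differ), with defaults matching the values documented above:

```python
import argparse

def parse_args(argv=None):
    # Mirror the CLI flags documented above; defaults match the README.
    parser = argparse.ArgumentParser(
        description="Train MobileNetV2 on a strawberry-disease dataset")
    parser.add_argument("--data", required=True,
                        help="Path to the training dataset")
    parser.add_argument("--epochs", type=int, default=300,
                        help="Number of training epochs")
    parser.add_argument("--batch_size", type=int, default=16,
                        help="Batch size for training")
    parser.add_argument("--learning_rate", type=float, default=0.001,
                        help="Learning rate for the optimizer")
    return parser.parse_args(argv)

if __name__ == "__main__":
    # Example invocation with explicit argv (normally taken from the command line).
    args = parse_args(["--data", "data/dataset_path"])
    print(args)
```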
To evaluate the trained model on a test dataset, use the following command:

```
python scripts/evaluate.py --data data/test_dataset --model models/mobilenetv2.pth
```
- `--data`: Path to the evaluation dataset.
- `--model`: Path to the trained model file (`mobilenetv2.pth`).
This script will print a classification report, display a confusion matrix, and calculate accuracy to help assess the model's performance.
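These outputs can be produced with scikit-learn, which provides all three out of the box. A minimal sketch on toy labels (the class names here are illustrative placeholders, not the project's actual labels):

```python
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# Toy ground-truth and predicted labels standing in for real model output.
y_true = ["healthy", "healthy", "leaf_spot", "gray_mold", "leaf_spot"]
y_pred = ["healthy", "leaf_spot", "leaf_spot", "gray_mold", "leaf_spot"]

# Accuracy: fraction of samples predicted correctly.
print("Accuracy:", accuracy_score(y_true, y_pred))

# Per-class precision, recall, F1-score, and support.
print(classification_report(y_true, y_pred))

# Rows are true classes, columns are predicted classes.
print(confusion_matrix(y_true, y_pred,
                       labels=["healthy", "leaf_spot", "gray_mold"]))
```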
To use the trained model to make predictions on a new image:
```
python main.py --image path/to/sample_image.jpg
```

- `--image`: Path to the image for which you want to make a prediction.
The `Dockerfile` is used to create a Docker container for the project. It sets up the Python environment, installs dependencies, and runs `main.py`.
- `.dockerignore`: Specifies which files and directories should be ignored when building the Docker image (e.g., cache files, notebook checkpoints).
- `.gitignore`: Specifies which files should be ignored by Git (e.g., compiled Python files, environment directories).
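An illustrative `Dockerfile` along these lines (the base image and entrypoint here are assumptions; the repository's actual file may differ):

```dockerfile
# Python 3.9 matches the prerequisite listed above.
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first so this layer is cached across code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the project (filtered by .dockerignore).
COPY . .

# Port published by the `docker run -p 8080:8080` command above.
EXPOSE 8080

CMD ["python", "main.py"]
```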
The evaluation script (`evaluate.py`) provides the following metrics:
- Accuracy: The ratio of correctly predicted samples to the total samples.
- Classification Report: Precision, recall, F1-score, and support for each class.
- Confusion Matrix: A visualization of how well the model is classifying each class, helping identify misclassifications.