This codebase uses Keras, TensorFlow, and PyTorch to scan a video feed from the ZEDD stereo camera (dual vision) and identify diseases.
- Create a virtual environment in the code folder:
virtualenv env
or
python3 -m venv env
- Activate the virtual environment:
source env/bin/activate
- Install the required packages:
pip3 install -r requirements/requirements.txt
- Run the code:
python3 sharingan.py
- On the Jetson, do not create a virtual environment, as it will cause core dumps. Instead, just install Python 3 system-wide.
- Install the required packages:
pip3 install -r requirements/requirements_jetson.txt
- Create one folder per category named in categories.json
- Place the corresponding images in each category folder
- Run
train.py
or
tomato-leaf-disease-classification.ipynb
(after populating the labeled data as in the notebook)
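The folder-setup steps above can be scripted. A minimal sketch, assuming categories.json holds a JSON list of category names (the exact file format is an assumption, not confirmed by the project):

```python
import json
import pathlib

def make_category_folders(categories_file="categories.json", root="data"):
    """Create one folder per category listed in categories.json.

    Assumes the file contains a JSON array of category names, e.g.
    ["Tomato_healthy", "Tomato_blight"] - this layout is an assumption.
    Returns the sorted folder names that now exist under `root`.
    """
    names = json.loads(pathlib.Path(categories_file).read_text())
    for name in names:
        # exist_ok=True makes the script safe to re-run
        pathlib.Path(root, name).mkdir(parents=True, exist_ok=True)
    return sorted(p.name for p in pathlib.Path(root).iterdir())
```

After creating the folders, copy each class's images into its folder before running train.py.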
For help with the notebook, see https://github.com/divyansh1195/Tomato-Leaf-Disease-Detection-.git, which shows how to train the data for use with ZEDD.
To test the trained model, load the Keras model into the Flask app at line 37, then start the Flask app, upload test pictures, and evaluate the results.
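A minimal sketch of such a Flask test harness follows. The `create_app` factory, the `/predict` route, and the `classify` callable are illustrative names, not the project's actual code; the real app loads the Keras model directly at line 37.

```python
from flask import Flask, request, jsonify

def create_app(classify):
    """Tiny Flask harness for testing a trained model.

    `classify` is any callable that takes the uploaded file and returns
    (label, confidence) - for example, a wrapper that preprocesses the
    image and calls keras.models.load_model(...).predict on it.
    """
    app = Flask(__name__)

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expect a multipart upload with the image under the "image" field
        label, confidence = classify(request.files["image"])
        return jsonify({"label": label, "confidence": float(confidence)})

    return app
```

To use it, wrap the trained Keras model (load it once, preprocess the upload, take the argmax over predictions) in `classify`, then POST test pictures to /predict and inspect the JSON response.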