Sign-Language-Recognition

A project to recognize sign language using OpenCV and a convolutional neural network (CNN).

Requirements

  • OpenCV
  • NumPy
  • Keras
  • scikit-learn (sklearn)
  • Google Colab (for running the training notebook)
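
If they are not already installed, the Python packages can typically be installed with pip. The package names below are the usual PyPI distributions and are an assumption; Google Colab itself needs no local install:

pip install opencv-python numpy keras scikit-learn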

Dataset

You can create your own dataset and train your model, or use our pretrained model to recognize letters. The dataset consists of 19200 images, i.e. 800 images for each letter except 'J' and 'Z', whose signs involve motion and cannot be captured as a single still image.
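
For reference, the 24 letters that can be captured as static images can be listed with the small Python sketch below (the label ordering assumed here may differ from the project's own):

```python
import string

# Every uppercase letter except 'J' and 'Z', whose signs involve motion
# and therefore cannot be captured as a single still image.
LETTERS = [c for c in string.ascii_uppercase if c not in ("J", "Z")]

print(len(LETTERS))        # 24 letters
print(len(LETTERS) * 800)  # 19200 images in total, 800 per letter
```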

How to Run

Training your own model (Optional)

Creating Dataset

  1. Specify the path for storing images in capture.py
  2. Execute the following command
python capture.py
  3. Enter the letter for which you want to capture the images.
  4. Place your hand inside the green rectangle.
  5. Press 'C' to start the capturing process.
  6. Repeat steps 2-5 for all the letters except 'J' and 'Z' (a rough sketch of the capture loop is shown after this list).
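
capture.py is not reproduced here, but a minimal OpenCV capture loop of the kind the steps above describe could look like the sketch below. The save path, rectangle coordinates, and key handling are assumptions, not the project's actual code:

```python
import os
import cv2

SAVE_DIR = "dataset"  # assumed save path; set the real one inside capture.py
letter = input("Letter to capture: ").strip().upper()
os.makedirs(os.path.join(SAVE_DIR, letter), exist_ok=True)

cap = cv2.VideoCapture(0)
count, capturing = 0, False
while count < 800:
    ok, frame = cap.read()
    if not ok:
        break
    # Green rectangle marking the region of interest for the hand
    x, y, w, h = 100, 100, 200, 200
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    if capturing:
        roi = frame[y:y + h, x:x + w]
        cv2.imwrite(os.path.join(SAVE_DIR, letter, f"{count}.jpg"), roi)
        count += 1
    cv2.imshow("capture", frame)
    key = cv2.waitKey(1) & 0xFF
    if key in (ord('c'), ord('C')):    # 'C' starts capturing
        capturing = True
    elif key in (ord('q'), ord('Q')):  # quit early
        break
cap.release()
cv2.destroyAllWindows()
```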

Training Model

  1. Specify the path for the parent folder of images in upload_array.py
  2. Execute the following command
python upload_array.py
  3. Upload the generated .npy files to Google Drive.
  4. Run recogModel.ipynb (a sketch of the kind of CNN it trains is shown after this list).
  5. Download the .h5 file from your Google Drive.
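
Neither upload_array.py nor recogModel.ipynb is reproduced here, but a minimal Keras CNN of the kind the notebook could train on the generated .npy arrays is sketched below. The array file names, image size, and layer sizes are assumptions, not the project's actual architecture:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.utils import to_categorical

# Assumed names for the arrays produced by upload_array.py
X = np.load("images.npy")                      # e.g. shape (19200, 64, 64, 1)
y = to_categorical(np.load("labels.npy"), 24)  # one class per letter

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=X.shape[1:]),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dense(24, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, validation_split=0.2)
model.save("model.h5")  # the .h5 file that recog.py loads
```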

Recognizing Sign Language

  1. Specify the path of the .h5 file in recog.py
  2. Execute the following command
python recog.py
  3. Press 'C' to start recognition (a rough sketch of the recognition loop follows this list).
  4. Press 'Q' to quit.
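
recog.py itself is not shown here, but the recognition loop it implements could look roughly like the sketch below. The model path, image size, and label ordering are assumptions and must match whatever the training step actually used:

```python
import cv2
import numpy as np
from keras.models import load_model

LETTERS = list("ABCDEFGHIKLMNOPQRSTUVWXY")  # 24 letters, no 'J' or 'Z'
model = load_model("model.h5")              # assumed path; set it inside recog.py

cap = cv2.VideoCapture(0)
recognizing = False
while True:
    ok, frame = cap.read()
    if not ok:
        break
    x, y, w, h = 100, 100, 200, 200
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    if recognizing:
        # Preprocess the region of interest the same way as the training data
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        roi = cv2.resize(roi, (64, 64)).astype("float32") / 255.0
        pred = model.predict(roi.reshape(1, 64, 64, 1), verbose=0)
        letter = LETTERS[int(np.argmax(pred))]
        cv2.putText(frame, letter, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("recognition", frame)
    key = cv2.waitKey(1) & 0xFF
    if key in (ord('c'), ord('C')):    # start recognition
        recognizing = True
    elif key in (ord('q'), ord('Q')):  # quit
        break
cap.release()
cv2.destroyAllWindows()
```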