# AI_Visual_Stream

All features of the system are controlled by hand gestures. A deep-learning model tracks the hand and fingers; the tracked fingertip positions are then used to generate click signals and to access the other functionalities of the system.
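The README does not specify how a click is derived from the tracked fingers, but a common approach with landmark-based hand trackers (e.g. MediaPipe Hands) is to treat a thumb-index "pinch" as a click. The sketch below assumes that convention and a hypothetical landmark layout; it is illustrative only:

```python
import math

# Hypothetical landmark format: (x, y) in normalized [0, 1] image
# coordinates, following the MediaPipe Hands indexing where
# index 4 is the thumb tip and index 8 is the index-finger tip.
THUMB_TIP = 4
INDEX_TIP = 8

def is_click(landmarks, threshold=0.05):
    """Return True when the thumb tip and index tip are pinched together."""
    tx, ty = landmarks[THUMB_TIP]
    ix, iy = landmarks[INDEX_TIP]
    return math.hypot(tx - ix, ty - iy) < threshold

# Pinched hand: the two tips nearly touch, so this registers a click.
pinched = {THUMB_TIP: (0.50, 0.50), INDEX_TIP: (0.51, 0.52)}
# Open hand: the tips are far apart, so no click is registered.
open_hand = {THUMB_TIP: (0.30, 0.60), INDEX_TIP: (0.55, 0.20)}
```

The `threshold` value would need tuning against the tracker's actual output scale and camera distance.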

Once the hands are tracked, a series of functionalities becomes available, including:

  1. Loading images (such as diagrams, use cases, and workflows)
  2. 3-dimensional graphics to visualize advanced curves that are hard to display on 2-D surfaces
  3. Result analysis: most video conferencing platforms don't provide this; the system analyzes how each student performed and displays the results graphically with charts
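The result-analysis feature above is described only at a high level. As an illustration, per-student scores could be aggregated into the values a chart would display; the data format and field names here are assumptions, not taken from the project:

```python
# Hypothetical per-question scores for each student; the README does not
# specify a data format, so this is an illustrative aggregation only.
scores = {
    "student_a": [8, 9, 7],
    "student_b": [5, 6, 4],
}

def result_summary(scores):
    """Average each student's scores, e.g. for plotting as a bar chart."""
    return {name: sum(marks) / len(marks) for name, marks in scores.items()}

summary = result_summary(scores)
```

The resulting averages could then be fed to any plotting library to produce the charts the README mentions.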

IMG

IMG

Mobile View:

IMG