Auto-AVSR: Lip-Reading Sentences Project
Updated Apr 16, 2024 · Python
Visual speech recognition using deep learning methods
A PyTorch implementation of the Deep Audio-Visual Speech Recognition paper.
Online Knowledge Distillation using LipNet and an Italian dataset. Master's Thesis Project.
Deep visual speech recognition for Arabic words
EMOLIPS: a two-level approach for lip-reading emotional speech
Visual Speech Recognition for Multiple Languages
LipReadingITA: Keras implementation of the method described in the paper 'LipNet: End-to-End Sentence-level Lipreading'. Research project for University of Salerno.
Implementation of "Combining Residual Networks with LSTMs for Lipreading" in Keras and TensorFlow 2.0
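The architecture named in the entry above, a per-frame residual CNN front-end followed by a recurrent back-end over time, can be sketched roughly as follows. This is a minimal illustration in PyTorch, not the repository's actual code: the layer sizes, the lightweight stand-in for the ResNet front-end, and the class name `ResNetLSTMLipreader` are all assumptions for the sketch.

```python
import torch
import torch.nn as nn

class ResNetLSTMLipreader(nn.Module):
    """Sketch of a CNN-front-end + LSTM lipreading model (assumed sizes)."""

    def __init__(self, num_classes: int = 500, hidden: int = 256):
        super().__init__()
        # Lightweight stand-in for the ResNet front-end: encodes one
        # grayscale mouth crop into a 64-dim feature vector.
        self.frontend = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Bidirectional LSTM aggregates per-frame features over time.
        self.lstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 1, H, W) sequence of grayscale mouth crops
        b, t = x.shape[:2]
        feats = self.frontend(x.flatten(0, 1)).flatten(1)  # (b*t, 64)
        feats = feats.view(b, t, -1)                       # (b, t, 64)
        out, _ = self.lstm(feats)                          # (b, t, 2*hidden)
        # Mean-pool over time for a word-level classification logit.
        return self.head(out.mean(dim=1))                  # (b, num_classes)

model = ResNetLSTMLipreader(num_classes=10)
logits = model(torch.randn(2, 5, 1, 48, 48))  # 2 clips, 5 frames each
print(logits.shape)
```

In a word-level setting the mean-pooled logits feed a cross-entropy loss; sentence-level variants typically keep the per-frame outputs and train with CTC instead.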
In this repository, I use k2, icefall, and Lhotse for lip reading, adapting them to the lip-reading task; support for additional lip-reading datasets is planned.
Strong Gateway using speech processing, 3D vision, and language processing; deployed using Django
Visual speech recognition with face inputs: code and models for F&G 2020 paper "Can We Read Speech Beyond the Lips? Rethinking RoI Selection for Deep Visual Speech Recognition"
Speaker-Independent Speech Recognition using Visual Features
Python toolkit for Visual Speech Recognition
"LipNet: End-to-End Sentence-level Lipreading" in PyTorch