Hand Gesture Recognition and Sentence Generation for Empowering the Speech-Impaired

Abstract
This paper combines MediaPipe for real-time hand gesture recognition with LSTM and bidirectional LSTM (BiLSTM) networks that translate those gestures into text. MediaPipe provides efficient tracking of hand landmarks, while LSTMs model the sequential nature of gesture data over time. BiLSTMs further enrich this context by considering both past and future frames in a gesture sequence. The resulting system transforms hand gestures into text in real time, offering people with speech impairments a robust means of communication: users express themselves through hand gestures, which are interpreted into meaningful text or speech.

Keywords - Hand Gesture Recognition, Sentence Generation, Speech Impairment, MediaPipe, LSTM, Bidirectional LSTM, Deep Learning, Non-Verbal Communication, Gesture-to-Text Translation, Sequential Modeling.
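The data flow the abstract describes can be sketched in miniature: each video frame yields 21 MediaPipe hand landmarks (x, y, z each, so a 63-dimensional feature vector), a gesture is a sequence of such frames, and a bidirectional recurrent model reads that sequence in both directions and concatenates the two summaries. The sketch below is purely illustrative and uses a plain tanh RNN cell as a stand-in for an LSTM; the sequence length, hidden size, and all weights are hypothetical, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, F, H = 30, 63, 16  # hypothetical: frames per gesture, features per frame, hidden size

def recurrent_pass(seq, w_in, w_h):
    """Toy recurrent pass: a tanh RNN cell standing in for a full LSTM cell."""
    h = np.zeros(H)
    for x in seq:                      # consume one frame's landmark vector at a time
        h = np.tanh(x @ w_in + h @ w_h)
    return h                           # final hidden state summarizes the sequence

# Stand-in for a real landmark sequence (21 landmarks * 3 coords = 63 features/frame).
gesture = rng.standard_normal((T, F)) * 0.1
w_in = rng.standard_normal((F, H)) * 0.1
w_h = rng.standard_normal((H, H)) * 0.1

fwd = recurrent_pass(gesture, w_in, w_h)        # past -> future context
bwd = recurrent_pass(gesture[::-1], w_in, w_h)  # future -> past context
bi_features = np.concatenate([fwd, bwd])        # bidirectional gesture summary
print(bi_features.shape)                        # (32,)
```

In the full system this bidirectional summary would feed a classifier over the gesture vocabulary, whose outputs are then assembled into sentences; the bidirectional read is what lets the model disambiguate a gesture using frames that come after it as well as before.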