Lip Movement Detection and Word Identification

Abstract
Communication is fundamental to human interaction: it allows individuals to express themselves and to understand others' perspectives. However, oral communication can pose significant challenges for individuals with hearing impairment, and even a noisy environment can hinder verbal exchange. Lip reading is a practical communication skill that enables individuals to understand speech using visual cues. With advances in deep learning and computer vision, it is now possible to improve lip-reading accuracy and overcome earlier limitations. The main objective of this project is to identify the words spoken by a user from a video input. The project applies a combined image-processing model to recognize the words being spoken. The principal constraint of this model is that it cannot identify spoken words from arbitrary viewing angles; the dataset was therefore collected so that every sample shares the same background and the same camera angle, with no variation.