Enhanced Gesture Recognition Through Hand Gesture and Text Integration
Abstract
This paper presents a system for hand gesture recognition and text-to-gesture translation, aimed at facilitating natural human-computer interaction. The system consists of two main components:
1) Hand gesture recognition
2) Text-to-gesture translation
In the hand gesture recognition component, gestures made by users are captured by a camera and converted into text using machine learning algorithms such as Support Vector Machines (SVM), K-Nearest Neighbours (KNN), and Random Forest (RF). These algorithms are trained on a dataset of hand gesture images to accurately classify and recognize various hand gestures. The detected gestures are then displayed as text on the screen, providing real-time feedback to the user.
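As an illustrative sketch only (not the authors' exact pipeline), the classification stage could be trained with scikit-learn roughly as follows; the dataset loader, image size, and flattened-pixel features are assumptions made for demonstration.

```python
# Illustrative sketch: trains SVM, KNN, and Random Forest classifiers
# on flattened hand-gesture images. Dataset layout and features are assumed.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def load_gesture_dataset():
    # Placeholder: replace with real loading of gesture images and labels.
    # X: (n_samples, 64*64) flattened grayscale images; y: gesture labels.
    rng = np.random.default_rng(0)
    X = rng.random((200, 64 * 64))
    y = rng.integers(0, 5, size=200)
    return X, y

X, y = load_gesture_dataset()
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Random Forest": RandomForestClassifier(n_estimators=100),
}

for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    preds = clf.predict(X_test)
    print(f"{name} accuracy: {accuracy_score(y_test, preds):.2f}")
```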
In the text-to-gesture translation component, users input text commands into a graphical user interface (GUI), and the system generates corresponding hand gestures using a deep learning model, specifically ResNet50. This model is trained on a dataset of text-to-gesture mappings to learn the relationship between textual input and corresponding gestures. The generated gestures are then displayed on the GUI, allowing users to interact with the system using natural language commands.
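The abstract does not detail how ResNet50 links text to gestures; one plausible reading is that ResNet50 is fine-tuned to associate gesture images with textual labels, so the GUI can retrieve and display the gesture whose label matches the user's input. A minimal Keras sketch under that assumption is shown below; the class count, frozen backbone, and training data are illustrative placeholders.

```python
# Illustrative sketch: fine-tune ResNet50 to associate gesture images with
# textual labels; the GUI could then display the stored gesture image whose
# label matches the user's text input. Class count and data are assumed.
import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, models

NUM_GESTURES = 10  # assumed number of gesture classes / text labels

base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep pretrained features frozen for this sketch

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(gesture_images, text_label_ids, epochs=10)  # real training data assumed
```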