Feedback System using Real Time Analysis of Facial Expressions and Audio Data
Over the last two decades, Human Machine Interaction has been a topic of interest for many researchers, and emotion recognition appears to be one of the most promising ways of interacting with machines today. The traditional approach of collecting feedback through forms is declining, and it is burdensome for both attendees and organisers. Response rates are rarely high and people generally dislike filling out forms, yet the feedback form remains a valuable asset. If the emotions and audio responses of attendees could be tracked in real time, feedback forms would no longer be necessary; thanks to facial emotion tracking and audio analysis, they could become a thing of the past.
This paper focuses on generating feedback through real-time analysis of facial expressions and non-verbal audio events such as applause and laughter. The proposed technique uses Haar cascades to detect facial expressions through real-time image processing, and PyAudio to capture and identify audio events in real time.
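As a minimal sketch of how detected events might be turned into a feedback summary, the snippet below aggregates per-frame expression labels and audio events into counts and a weighted score. The event labels and weights are illustrative assumptions, not taken from the paper; the actual detection (Haar cascades via OpenCV, audio capture via PyAudio) is omitted here.

```python
from collections import Counter

# Hypothetical event weights -- illustrative assumptions only,
# not values specified by the paper.
EVENT_WEIGHTS = {
    "happy": 1.0,
    "laughter": 1.5,
    "applause": 2.0,
    "neutral": 0.0,
    "sad": -1.0,
}

def summarize_feedback(events):
    """Aggregate detected expression labels and audio events
    into a simple feedback summary (counts plus weighted score)."""
    counts = Counter(events)
    score = sum(EVENT_WEIGHTS.get(e, 0.0) * n for e, n in counts.items())
    return {"counts": dict(counts), "score": score}

# Example: events detected over a short window of frames/audio.
summary = summarize_feedback(["neutral", "happy", "laughter", "applause", "happy"])
print(summary["score"])  # 1.0 + 1.0 + 1.5 + 2.0 = 5.5
```

In a full pipeline, the event list would be produced continuously by the image- and audio-processing stages, and the summary would replace a conventional feedback form.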
Keywords - Human Machine Interaction, Emotion Recognition, Facial Expression, Image Processing, Haar Cascades, PyAudio