MUMBAI, India, Feb. 6 -- Intellectual Property India has published a patent application (202641007326 A), filed on Jan. 24 by Shalini J; V Inbarasu; J Rakshita Bai; S Bharathi; and M Sakthivel of Erode, Tamil Nadu, for an 'emotion-aware multilingual sign language interpretation system using deep learning.'
The patent application was published on Feb. 6 in issue no. 06/2026.
According to the abstract released by Intellectual Property India: "The present invention relates to an emotion-aware sign language interpretation system in the field of artificial intelligence and assistive communication technologies. The invention addresses the problem of enabling effective communication between hearing- and speech-impaired individuals and non-sign language users by providing real-time, vision-based interpretation of sign language. The system captures live video input and employs convolutional neural networks for hand gesture recognition and long short-term memory networks for sentence-level interpretation of continuous gestures. Facial emotion detection is integrated to enhance contextual understanding of communication intent. The interpreted text is translated into regional languages, particularly English to Tamil, and converted into speech using text-to-speech synthesis. The invention provides a low-cost, scalable, and practical assistive communication solution suitable for deployment in healthcare, educational, and public service environments."
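The abstract describes a pipeline in which a convolutional network extracts per-frame hand-gesture features and a long short-term memory network interprets the frame sequence at sentence level, after which the text is translated to Tamil and spoken aloud. The following is a minimal illustrative sketch of such a CNN-LSTM architecture in PyTorch; the layer sizes, class count, and the translate_to_tamil helper are assumptions for illustration and do not reflect the inventors' actual implementation.

```python
# Illustrative sketch only: a per-frame CNN feeding an LSTM for
# sentence-level sign interpretation, as outlined in the abstract.
# Layer sizes, class counts, and helper names are assumed, not from the patent.
import torch
import torch.nn as nn

class SignInterpreter(nn.Module):
    def __init__(self, num_classes=64, hidden_size=256):
        super().__init__()
        # CNN: extracts a feature vector from each video frame (hand gestures).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # LSTM: aggregates the sequence of frame features for continuous gestures.
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, video):
        # video: (batch, frames, channels, height, width)
        b, t, c, h, w = video.shape
        feats = self.cnn(video.view(b * t, c, h, w)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.classifier(out[:, -1])  # sentence-level prediction logits

def translate_to_tamil(text: str) -> str:
    # Hypothetical placeholder for the English-to-Tamil translation step;
    # the interpreted text would then be passed to a text-to-speech engine
    # (for example, a library such as gTTS with lang='ta').
    raise NotImplementedError("plug in a translation service here")
```

The emotion-detection component mentioned in the abstract would run as a parallel facial-analysis model whose output conditions or annotates the interpreted sentence; it is omitted here for brevity.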
Disclaimer: Curated by HT Syndication.