MUMBAI, India, Jan. 9 -- Intellectual Property India has published a patent application (202541112352 A) filed by R. Ashwini; K. Mekala Devi; T. S. Valarmathi; and S. A. Engineering College, Chennai, Tamil Nadu, on Nov. 17, 2025, for 'smart wearable glove for real-time sign language to speech and text conversion using embedded sensors and machine learning.'

Inventor(s) include R. Ashwini; K. Mekala Devi; and T. S. Valarmathi.

The application for the patent was published on Jan. 9, under issue no. 02/2026.

According to the abstract released by Intellectual Property India: "Sign language is an essential communication method for individuals with hearing or speech impairments, yet its interpretation often requires trained personnel or visual interaction, creating barriers in daily communication with non-sign language users. This research presents the design and development of a smart wearable glove capable of real-time sign language-to-speech and text conversion using embedded sensors and machine learning algorithms. The proposed system integrates flex sensors, inertial measurement units (IMUs), and pressure/contact sensors strategically placed on the glove to capture finger bending, wrist orientation, and hand movement patterns corresponding to sign gestures. These sensor signals are processed by a microcontroller, where feature extraction and preprocessing are performed before the data is transmitted to a machine learning-based classification model. The model, trained using supervised learning techniques, recognizes individual sign gestures with high accuracy and converts them into their corresponding alphanumeric or word outputs. The recognized gesture is then translated into audible speech output and real-time text display on a connected interface, enabling seamless communication without the need for an interpreter. To ensure usability, the system emphasizes low power consumption, wireless connectivity, and ergonomic design for continuous wear. Experimental results demonstrate strong recognition accuracy across diverse users and environments, with minimal latency from gesture detection to speech/text output. The project highlights the potential of wearable Internet of Things (IoT) and machine learning technologies to enhance inclusivity and accessibility for the deaf and mute community, offering a portable, low-cost, and scalable solution for everyday communication."
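To illustrate the gesture-classification stage the abstract describes, the following is a minimal sketch, not the patented implementation: it assumes five flex-sensor readings per frame (one per finger, normalized so 0 means straight and 1 means fully bent) and uses a simple nearest-template classifier. The gesture names, sensor values, and function names are all illustrative assumptions.

```python
from math import dist

# Hypothetical calibration templates: mean flex-sensor readings for a
# few gestures (values and letter assignments are made up for this sketch).
TEMPLATES = {
    "A": (0.1, 0.9, 0.9, 0.9, 0.9),   # thumb out, four fingers curled
    "B": (0.8, 0.1, 0.1, 0.1, 0.1),   # thumb tucked, fingers straight
    "L": (0.1, 0.1, 0.9, 0.9, 0.9),   # thumb and index extended
}

def classify(sample):
    """Return the template gesture closest (Euclidean) to the sensor frame."""
    return min(TEMPLATES, key=lambda g: dist(TEMPLATES[g], sample))

def to_text(frames):
    """Convert a stream of sensor frames into the recognized text output."""
    return "".join(classify(f) for f in frames)
```

In the glove described by the application, this stage would run after the microcontroller's feature extraction, and a supervised model trained on many users would replace the fixed templates; the text produced by something like `to_text` would then feed the speech synthesizer and display.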

Disclaimer: Curated by HT Syndication.