MUMBAI, India, Jan. 9 -- Intellectual Property India has published a patent application (202541134049 A) filed by Karpagam Academy Of Higher Education; Karpagam Institute Of Technology; Priyadharshini R; and Madhumita S, Coimbatore, Tamil Nadu, on Dec. 31, 2025, for 'hand gesture recognition for human computer interaction.'

Inventors include Priyadharshini R and Madhumita S.

The application for the patent was published on Jan. 9, under issue no. 02/2026.

According to the abstract released by Intellectual Property India: "Hand Gesture Recognition (HGR) has emerged as a powerful and intuitive method for enabling natural Human-Computer Interaction (HCI), offering users a contactless and highly accessible means of interacting with digital systems. This invention presents a real-time hand gesture recognition system that integrates advanced computer vision techniques with deep learning architectures to accurately detect, classify, and interpret both static and dynamic gestures. The system begins with video input captured through a standard camera, followed by a robust preprocessing pipeline utilizing OpenCV for background removal, hand segmentation, and noise reduction. Spatial features from hand images are extracted using a convolutional neural network (CNN), while temporal dependencies within gesture sequences are modeled using recurrent neural networks (RNNs) such as LSTM or GRU. These complementary feature streams are fused and processed by a classification module employing fully connected layers and a softmax function to generate gesture predictions with high accuracy. The proposed system is designed for real-time performance, making it suitable for practical interactive applications. Postprocessing techniques including prediction smoothing and command mapping enhance stability and convert recognized gestures into system commands. A comprehensive training pipeline incorporating dataset acquisition, data augmentation, transfer learning, and model validation ensures robust generalization across diverse lighting conditions, backgrounds, and user hand variations. Deployment is supported on both edge devices and cloud platforms, enabling low-latency inference while allowing remote updates and continuous learning. The applications of this invention span across sign language translation, virtual and augmented reality interfaces, smart home device control, assistive technologies, and gaming environments.
By eliminating the need for physical controllers or wearable sensors, the system significantly enhances accessibility, reduces touch-based interaction, and promotes hygienic operation in public and healthcare settings. The invention also supports future scalability through multimodal fusion with voice recognition or sensor data, making it a versatile and forward-compatible solution for next-generation HCI systems."
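The fusion-and-classification step the abstract describes, in which CNN spatial features and RNN temporal features are concatenated and passed through a fully connected layer with a softmax, can be illustrated with a minimal NumPy sketch. The feature dimensions (128 spatial, 64 temporal) and the five gesture classes are illustrative assumptions, not figures from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_and_classify(spatial_feat, temporal_feat, weights, bias):
    """Concatenate the CNN and RNN feature streams, apply one
    fully connected layer, and return softmax class probabilities."""
    fused = np.concatenate([spatial_feat, temporal_feat])
    logits = weights @ fused + bias
    return softmax(logits)

# Stand-ins for trained network outputs and classifier weights.
spatial = rng.standard_normal(128)   # hypothetical CNN feature vector
temporal = rng.standard_normal(64)   # hypothetical RNN feature vector
W = rng.standard_normal((5, 192)) * 0.1
b = np.zeros(5)

probs = fuse_and_classify(spatial, temporal, W, b)
predicted_gesture = int(np.argmax(probs))
```

In a real system the feature vectors would come from trained CNN and LSTM/GRU models rather than random draws; the sketch only shows how the two streams are fused before classification.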
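The postprocessing stage the abstract mentions, prediction smoothing followed by command mapping, is commonly done with a majority vote over a sliding window of per-frame predictions. The sketch below assumes that approach; the gesture names, window size, and command table are hypothetical, as the published abstract does not specify them:

```python
from collections import Counter, deque

# Hypothetical gesture-to-command table (not from the patent).
COMMAND_MAP = {
    "swipe_left": "previous_slide",
    "swipe_right": "next_slide",
    "open_palm": "pause",
    "fist": "play",
}

class GestureSmoother:
    """Majority-vote smoothing over a sliding window of per-frame
    predictions, one common way to stabilise a real-time classifier."""

    def __init__(self, window_size=5):
        self.window = deque(maxlen=window_size)

    def update(self, prediction):
        # Append the newest per-frame prediction and return the most
        # frequent label currently in the window.
        self.window.append(prediction)
        return Counter(self.window).most_common(1)[0][0]

smoother = GestureSmoother(window_size=5)
# A noisy per-frame stream: isolated "open_palm" spikes are suppressed.
stream = ["fist", "fist", "open_palm", "fist", "fist", "open_palm"]
smoothed = [smoother.update(p) for p in stream]
commands = [COMMAND_MAP[g] for g in smoothed]
# smoothed is ["fist"] * 6, so every frame maps to the "play" command.
```

Smoothing trades a few frames of latency for stability: a single misclassified frame no longer triggers a spurious system command.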

Disclaimer: Curated by HT Syndication.