MUMBAI, India, Jan. 2 -- Intellectual Property India has published a patent application (202541122982 A) filed by Vellore Institute Of Technology, Vellore, Tamil Nadu, on Dec. 5, 2025, for 'multi-modal deep learning framework for real-time american sign language recognition.'

Inventor(s) include Dr. Prakash R; Ahzam Afaq; Mohammad Isa Mirza; and Kushal Raghuwanshi.

The patent application was published on Jan. 2 under issue no. 01/2026.

According to the abstract released by Intellectual Property India: "The present disclosure provides a multi-modal deep learning system for real-time American Sign Language recognition. The system includes a video capture module for acquiring real-time video frames from webcam input, an image preprocessing module for resizing frames to 224x224 pixels and normalizing pixel values to the 0-1 range, and a skeletal landmark extraction module for detecting hand landmarks. The system further includes a first machine learning model processing preprocessed image frames to generate image-based gesture predictions with confidence scores, a second machine learning model processing skeletal landmark data to generate landmark-based gesture predictions with confidence scores, a multi-modal fusion module combining predictions from both models, a confidence-based selection mechanism selecting predictions based on confidence scores, and an output module generating recognized American Sign Language gesture results."
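The confidence-based selection step described in the abstract can be illustrated with a minimal sketch. The function and label names below are illustrative assumptions, not drawn from the patent itself, which does not disclose implementation details:

```python
# Hypothetical sketch of the confidence-based selection mechanism from the
# abstract: each of the two models emits a (gesture_label, confidence)
# prediction, and the fusion module keeps the more confident one.
# All names and values here are illustrative, not from the patent.

def fuse_predictions(image_pred, landmark_pred):
    """Return whichever prediction carries the higher confidence score.

    image_pred    -- (label, confidence) from the image-based model
    landmark_pred -- (label, confidence) from the landmark-based model
    """
    return image_pred if image_pred[1] >= landmark_pred[1] else landmark_pred

# Example: the landmark-based model is more confident, so its label is chosen.
image_pred = ("A", 0.72)     # image-based gesture prediction
landmark_pred = ("B", 0.91)  # landmark-based gesture prediction
gesture, confidence = fuse_predictions(image_pred, landmark_pred)
print(gesture, confidence)   # prints: B 0.91
```

An actual system along these lines could also weight or average the two confidence scores rather than taking a hard maximum; the abstract specifies only that selection is based on confidence scores.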

Disclaimer: Curated by HT Syndication.