MUMBAI, India, Jan. 2 -- Intellectual Property India has published a patent application (202541123267 A) filed on Dec. 6, 2025, by Vellore Institute of Technology, Vellore, Tamil Nadu, for a 'Real-Time Sign Language Interpretation System Using Deep Learning.'
The inventors are Dr. Shalini L and Ashish Kumar Pandey.
The application was published on Jan. 2 under issue no. 01/2026.
According to the abstract released by Intellectual Property India: "The present disclosure provides a real-time sign language interpretation system comprising a camera module configured to capture video input of hand gestures, a hand detection module configured to detect hand regions within video frames, an image preprocessing module configured to process detected hand regions for classification, a trained deep learning model configured to classify hand gesture images into corresponding sign language characters, and a text conversion module configured to convert classified gestures into text output. The hand detection module utilizes MediaPipe framework for hand landmark detection. The image preprocessing module resizes detected hand regions to 300x300 pixels and applies white background normalization. The trained deep learning model comprises a convolutional neural network recognizing American Sign Language and Indian Sign Language alphabet gestures with recognition latency of less than one second per frame."
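The abstract describes a familiar detect-crop-classify pipeline. The Python sketch below shows how such a pipeline could be assembled from MediaPipe hand detection, the 300x300 white-background normalization cited in the abstract, and a trained convolutional network; it is an illustration under assumptions, not the applicants' implementation, and the model file, label set, and crop padding are hypothetical.

```python
# Sketch of the abstract's pipeline: MediaPipe hand detection -> crop ->
# 300x300 white-background normalization -> CNN classification -> character.
# MODEL_PATH, LABELS, and the padding value are assumptions, not from the filing.
import cv2
import numpy as np
import mediapipe as mp
import tensorflow as tf

IMG_SIZE = 300                      # per the abstract: 300x300 pixel input
MODEL_PATH = "sign_cnn.h5"          # hypothetical trained ASL/ISL alphabet classifier
LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumed label set

model = tf.keras.models.load_model(MODEL_PATH)
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)

def classify_frame(frame_bgr):
    """Detect one hand, normalize the crop onto a white canvas, and classify it."""
    h, w, _ = frame_bgr.shape
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None

    # Bounding box from the 21 MediaPipe hand landmarks (normalized coordinates).
    lm = results.multi_hand_landmarks[0].landmark
    xs = [int(p.x * w) for p in lm]
    ys = [int(p.y * h) for p in lm]
    pad = 20
    x1, y1 = max(min(xs) - pad, 0), max(min(ys) - pad, 0)
    x2, y2 = min(max(xs) + pad, w), min(max(ys) + pad, h)
    crop = frame_bgr[y1:y2, x1:x2]
    if crop.size == 0:
        return None

    # Paste the crop onto a white 300x300 canvas, preserving aspect ratio.
    canvas = np.full((IMG_SIZE, IMG_SIZE, 3), 255, dtype=np.uint8)
    ch, cw = crop.shape[:2]
    scale = IMG_SIZE / max(ch, cw)
    resized = cv2.resize(crop, (int(cw * scale), int(ch * scale)))
    rh, rw = resized.shape[:2]
    y_off, x_off = (IMG_SIZE - rh) // 2, (IMG_SIZE - rw) // 2
    canvas[y_off:y_off + rh, x_off:x_off + rw] = resized

    # Classify the normalized image and map the prediction to a character.
    probs = model.predict(canvas[np.newaxis] / 255.0, verbose=0)[0]
    return LABELS[int(np.argmax(probs))]

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    char = classify_frame(frame)
    if char:
        print(char)  # text conversion step: emit the recognized character
```

Classifying only a small cropped region per frame, rather than the full image, is one common way to keep per-frame processing within the sub-second latency figure cited in the abstract.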