MUMBAI, India, Jan. 9 -- Intellectual Property India has published a patent application (202511119845 A) filed by Nupur Singh; Swarnim Tiwari; Pranjal Kumar Rao; Harsh Vikram Singh; Deepinder Kaur; and Priya Singh, Ghaziabad, Uttar Pradesh, on Dec. 1, 2025, for 'Sign2Text: A Deep Learning Model for Indian Sign Language to Text Conversion.'

Inventor(s) include Nupur Singh; Swarnim Tiwari; Pranjal Kumar Rao; Harsh Vikram Singh; Deepinder Kaur; and Priya Singh.

The patent application was published on Jan. 9 in issue no. 02/2026.

According to the abstract released by Intellectual Property India: "The present invention pertains to a system and methodology for translating Indian Sign Language (ISL) gestures into meaningful textual representations using deep learning-based visual recognition. The system acquires image frames or video footage of hand gestures using a camera and preprocesses them to accentuate gesture-specific features. A convolutional neural network (CNN) or MobileNet architecture is utilized to extract spatial features from the processed frames. At the same time, a Transformer or Vision Transformer (ViT) model analyzes temporal variations associated with dynamic gestures. The outputs from both models are integrated within a hybrid module to produce a consolidated feature representation, which is subsequently classified into the respective ISL gesture category. A language model then translates the predicted gesture into coherent text, thereby providing real-time communication support for individuals with hearing or speech impairments. This invention delivers an accessible, precise, and scalable ISL-to-text solution that operates effectively with standard camera input, obviating the need for specialized hardware."
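The abstract's pipeline (per-frame spatial features, temporal modelling across frames, hybrid fusion, and gesture classification) can be sketched at a very high level. The following is a minimal, illustrative NumPy sketch, not the inventors' implementation: the spatial stage stands in for a CNN/MobileNet backbone, the attention step stands in for a Transformer encoder, and all weights are random placeholders rather than trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_features(frame, W):
    """Stand-in for a CNN/MobileNet backbone: projects a flattened
    frame to a feature vector (the real system would use learned
    convolutional features)."""
    return np.tanh(frame.ravel() @ W)

def temporal_attention(X):
    """Stand-in for a Transformer encoder: single-head
    self-attention over the sequence of per-frame features."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                     # (T, T) frame similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ X                                # temporally attended features

def classify(video, W_spatial, W_cls):
    # 1. Spatial features per frame (CNN stage).
    feats = np.stack([spatial_features(f, W_spatial) for f in video])
    # 2. Temporal modelling over the frame sequence (Transformer stage).
    attended = temporal_attention(feats)
    # 3. Hybrid fusion: concatenate pooled spatial and temporal streams.
    fused = np.concatenate([feats.mean(axis=0), attended.mean(axis=0)])
    # 4. Classify into an ISL gesture category via softmax.
    logits = fused @ W_cls
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

# Hypothetical dimensions: 16 frames of 32x32 input, 64-d features, 10 gesture classes.
T, H, Wd, D, K = 16, 32, 32, 64, 10
video = rng.standard_normal((T, H, Wd))
W_spatial = rng.standard_normal((H * Wd, D)) * 0.01
W_cls = rng.standard_normal((2 * D, K)) * 0.1
probs = classify(video, W_spatial, W_cls)
print("predicted class:", probs.argmax())
```

In the described system, the predicted gesture label would then be handed to a language model that assembles the per-gesture predictions into coherent text.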

Disclaimer: Curated by HT Syndication.