MUMBAI, India, Jan. 2 -- Intellectual Property India has published a patent application (202541124576 A) filed by Usha Rani Macigi of Tirupati, Andhra Pradesh, on Dec. 10, 2025, for 'A Multi-Layer Perceptron Model for Real Time American Sign Language Recognition for Edge Device.'

Inventor(s) include N. V. Muthu Lakshmi; Sunitha Kanipakam; K. Manjula; and G. Prathyusha.

The application was published on Jan. 2 in issue no. 01/2026.

According to the abstract released by Intellectual Property India: "The ultimate purpose of human existence is often considered to be finding happiness and satisfaction. People live happily only when they can communicate their feelings, emotions and opinions to others; the right to freedom of expression is a fundamental right, but people who suffer from hearing impairments cannot communicate in this way, which may lead to their isolation from societal activities. Sign language came into existence to address this issue, primarily so that they can communicate, obtain an education, find employment and achieve social inclusion. In this research, an efficient methodology for American Sign Language (ASL) recognition using hand gestures is proposed. To find an efficient model, several experiments were conducted on ASL datasets consisting of images, where each image represents the sign for a word, a letter of the alphabet, a number, etc. A Multi-Layer Perceptron outperformed the other models in terms of accuracy and real-time translation, and this model is bundled into edge devices such as Android phones, where the input is live video captured through the user's camera. The study focused mainly on the recognition of legal words, as people with hearing impairments are often unable to resolve their legal issues because they cannot communicate effectively in sign with police or lawyers. In this methodology, the MediaPipe framework is used to extract landmarks from good-quality sign images, and pre-processing is done with normalization techniques that ensure consistency in scale and orientation, so the model generalizes across different users and environments. Experimental results demonstrated high classification accuracy, validated the model's performance and indicated its potential for a range of practical applications in accessibility and human-computer interaction. Finally, the proposed model was deployed on a mobile device and proved its accuracy in real-time translation on three different hardware backends: Central Processing Unit (CPU), Graphics Processing Unit (GPU) and Tensor Processing Unit (TPU)."
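The abstract outlines a concrete pipeline: MediaPipe hand-landmark extraction, normalization for scale and orientation, and a multi-layer perceptron classifier. The filing does not disclose implementation details, so the following is only a minimal Python sketch of such a pipeline; the wrist-centering and scaling steps, the function name extract_features, the layer sizes and the class count are all illustrative assumptions, not details from the application.

    import cv2
    import numpy as np
    import mediapipe as mp
    import tensorflow as tf

    mp_hands = mp.solutions.hands

    def extract_features(image_bgr):
        # Detect one hand and return normalized landmark coordinates,
        # or None if no hand is found. (Hypothetical helper.)
        with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
            results = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
        if not results.multi_hand_landmarks:
            return None
        lm = results.multi_hand_landmarks[0].landmark     # 21 hand landmarks
        pts = np.array([[p.x, p.y, p.z] for p in lm], np.float32)
        pts -= pts[0]                     # translate: wrist becomes the origin
        scale = np.max(np.linalg.norm(pts, axis=1))
        if scale > 0:
            pts /= scale                  # scale-invariant coordinates
        return pts.flatten()              # 63-dimensional feature vector

    NUM_CLASSES = 50                      # illustrative; the filing gives no count
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(63,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

Wrist-relative, scale-normalized coordinates are one common way to make landmark features invariant to hand position and camera distance, which matches the abstract's stated goal of consistency in scale and orientation. For the live-video case the abstract describes, the Hands object would typically be created once with static_image_mode=False and reused per frame.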
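The abstract also mentions bundling the model into Android phones and running it on CPU, GPU and TPU. One plausible route, not confirmed by the filing, is conversion to TensorFlow Lite, whose Android interpreter can be configured with hardware delegates; the sketch below shows only the conversion step, with a stand-in model of the same illustrative shape as above.

    import tensorflow as tf

    # Stand-in MLP matching the illustrative shape from the previous sketch.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(63,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(50, activation="softmax"),
    ])

    # Convert to TensorFlow Lite so the model can be bundled in an Android
    # app; Optimize.DEFAULT applies post-training quantization, which
    # typically helps latency on edge CPUs.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()
    with open("asl_mlp.tflite", "wb") as f:
        f.write(tflite_model)

On device, the same .tflite file can be run through the TFLite interpreter on the CPU or with a GPU delegate; targeting a TPU would generally require further steps, such as full integer quantization, depending on the specific accelerator.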

Disclaimer: Curated by HT Syndication.