MUMBAI, India, Feb. 13 -- Intellectual Property India has published a patent application (202641008074 A) filed by Ravindra College Of Engineering For Women, Kurnool, Andhra Pradesh, on Jan. 27, for 'Emotion Recognition from Human Speech Using AI.'
Inventor(s) include P M Priyanka; Kota Lakshmi Prasanna; Talari Umadevi; Satri Tabita; and Sekhar Pranitha.
The application for the patent was published on Feb. 13, under issue no. 07/2026.
According to the abstract released by Intellectual Property India: "The MELD (Multimodal Emotion Lines Dataset) is used to construct a web-based model for spoken emotion recognition. The dataset, which consists of text, audio, and images, is used to identify emotions in conversational data. Pre-processing methods such as image scaling, noise reduction, gray conversion, and normalization are applied to the incoming data. Text tokenization for natural language data and audio feature extraction techniques such as Mel Frequency Cepstral Coefficients (MFCC) are used. Following pre-processing, the data is divided into training and testing sets, and deep learning models such as CNN and DenseNet121 are used to classify the emotions."
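For context, the MFCC audio feature extraction named in the abstract can be sketched as follows. This is a minimal, dependency-free illustration of the standard MFCC pipeline (framing, windowing, power spectrum, mel filterbank, log, DCT), not the applicants' implementation; the frame length, hop size, and filter/coefficient counts below are illustrative assumptions.

```python
import math

def mfcc(signal, sample_rate=16000, frame_len=400, hop=160,
         n_mels=26, n_coeffs=13):
    """Minimal MFCC sketch; parameters are illustrative, not from the patent."""
    def hz_to_mel(f): return 2595.0 * math.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    n_fft = frame_len
    n_bins = n_fft // 2 + 1

    # Edges of triangular mel filters, evenly spaced on the mel scale
    # from 0 Hz up to the Nyquist frequency, then mapped to FFT bins.
    top = hz_to_mel(sample_rate / 2.0)
    mel_pts = [i * top / (n_mels + 1) for i in range(n_mels + 2)]
    bin_pts = [int(n_fft * mel_to_hz(m) / sample_rate) for m in mel_pts]

    features = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        # Hamming-windowed frame.
        frame = [signal[start + n] *
                 (0.54 - 0.46 * math.cos(2 * math.pi * n / (frame_len - 1)))
                 for n in range(frame_len)]
        # Power spectrum via a direct DFT (slow but dependency-free).
        power = []
        for k in range(n_bins):
            re = sum(frame[n] * math.cos(2 * math.pi * k * n / n_fft)
                     for n in range(n_fft))
            im = sum(frame[n] * math.sin(2 * math.pi * k * n / n_fft)
                     for n in range(n_fft))
            power.append((re * re + im * im) / n_fft)
        # Log energies through each triangular mel filter.
        logmel = []
        for m in range(1, n_mels + 1):
            lo, mid, hi = bin_pts[m - 1], bin_pts[m], bin_pts[m + 1]
            e = sum(power[k] * (k - lo) / max(mid - lo, 1) for k in range(lo, mid))
            e += sum(power[k] * (hi - k) / max(hi - mid, 1) for k in range(mid, hi))
            logmel.append(math.log(e + 1e-10))
        # DCT-II to decorrelate the filterbank energies into cepstral coefficients.
        features.append([sum(logmel[m] * math.cos(math.pi * c * (m + 0.5) / n_mels)
                             for m in range(n_mels))
                         for c in range(n_coeffs)])
    return features
```

In practice such features would be computed with an audio library rather than a direct DFT; the sketch only shows the transformation chain the abstract refers to, producing one vector of cepstral coefficients per overlapping audio frame.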
Disclaimer: Curated by HT Syndication.