MUMBAI, India, Jan. 9 -- Intellectual Property India has published a patent application (202511116372 A) filed on Nov. 24, 2025, by Dr. Pawan Bhambu, Dr. Narayan Singh, Dr. Himanshu Sharma, and Dr. Prashant Sharma of Jaipur, Rajasthan, for a 'neural network compression technique for low-power edge devices.'

The inventors are Dr. Pawan Bhambu, Dr. Narayan Singh, Dr. Himanshu Sharma, and Dr. Prashant Sharma.

The patent application was published on Jan. 9 under issue no. 02/2026.

According to the abstract released by Intellectual Property India: "The present invention discloses an advanced hybrid neural network compression technique specifically designed to enable efficient execution of deep-learning models on low-power edge devices. Traditional neural networks require substantial computational resources and memory capacity, making their deployment on constrained platforms such as IoT modules, wearable sensors, embedded controllers, and battery-operated devices highly challenging. To address these limitations, the proposed invention integrates four complementary compression strategies (structured pruning, layer-wise quantization, tensor factorization, and knowledge-based distillation) into a unified framework. Structured pruning systematically removes redundant neurons, filters, and computational blocks, thereby reducing model size without altering the overall architecture. Quantization transforms high-precision floating-point parameters into low-bit integer representations to significantly lower memory consumption and computational overhead. Tensor factorization decomposes complex weight tensors into low-rank components, enabling faster execution on devices with limited arithmetic capabilities. To compensate for potential information loss during compression, the system employs a teacher-student distillation approach that restores essential representational capacity and preserves predictive accuracy."
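
For readers unfamiliar with these techniques, the sketch below illustrates in simplified form what each of the four strategies described in the abstract does in isolation. It is a generic Python/PyTorch illustration, not the patented method: the function names, the 50% keep ratio, the rank of 16, the distillation temperature, and the toy weight matrix are all hypothetical choices for demonstration only.

```python
# Minimal, illustrative sketch of the four compression ideas named in the abstract:
# structured pruning, layer-wise quantization, low-rank tensor factorization, and
# teacher-student distillation. This is NOT the patented technique; every name and
# parameter below is a hypothetical choice for demonstration.
import torch
import torch.nn.functional as F

def prune_filters(weight: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Structured pruning: drop whole output filters with the smallest L1 norm."""
    norms = weight.abs().flatten(1).sum(dim=1)           # one importance score per filter
    k = max(1, int(keep_ratio * weight.shape[0]))
    keep = torch.topk(norms, k).indices
    return weight[keep]                                   # smaller but still dense tensor

def quantize_int8(weight: torch.Tensor):
    """Layer-wise uniform quantization of float32 weights to int8."""
    scale = weight.abs().max() / 127.0
    q = torch.clamp((weight / scale).round(), -127, 127).to(torch.int8)
    return q, scale                                        # dequantize with q.float() * scale

def low_rank_factorize(weight: torch.Tensor, rank: int):
    """Tensor (matrix) factorization: approximate W with two rank-r factors A @ B."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    A = U[:, :rank] * S[:rank]
    B = Vh[:rank, :]
    return A, B

def distillation_loss(student_logits, teacher_logits, T: float = 4.0):
    """Knowledge distillation: train the compressed student to match softened teacher outputs."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

if __name__ == "__main__":
    w = torch.randn(64, 128)                  # toy fully connected layer weights
    pruned = prune_filters(w, keep_ratio=0.5)
    q, scale = quantize_int8(pruned)
    A, B = low_rank_factorize(pruned, rank=16)
    print(pruned.shape, q.dtype, scale.item(), A.shape, B.shape)
```

In a combined pipeline such as the one the abstract describes, steps like these would typically be applied in sequence and followed by distillation-based fine-tuning to recover accuracy lost during compression; how the invention actually unifies them is specified in the patent application itself.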

Disclaimer: Curated by HT Syndication.