MUMBAI, India, Feb. 27 -- Intellectual Property India has published a patent application (202641017668 A) filed by Ramachandra College Of Engineering, Eluru, Andhra Pradesh, on Feb. 17, for 'a neural network approach for fusing multiple data sources for deep multimodal learning for image restoration.'

Inventor(s) include Dr. B. Sarada; L L S Maneesha; and Victor Babu Penumala.

The application for the patent was published on Feb. 27, under issue no. 09/2026.

According to the abstract released by Intellectual Property India: "In this invention, we propose a novel deep convolutional neural network to solve the general multi-modal image restoration (MIR) and multi-modal image fusion (MIF) problems. Unlike other deep learning methods, our network is inspired by a new multi-modal convolutional sparse coding (MCSC) model, whose key feature is that it can automatically split the common information shared among different modalities from the unique information specific to each modality. We call this network CU-Net, where CU stands for common and unique information splitting network. The CU-Net has three modules: unique feature extraction module (UFEM), common feature preservation module (CFPM), and image reconstruction module (IRM). The architecture of each module is derived from the corresponding part in the MCSC model, which consists of several learned convolutional sparse coding (LCSC) blocks. We conduct extensive experiments to evaluate our method on various MIR and MIF tasks, including RGB guided depth image super-resolution, flash guided non-flash image denoising, multi-focus image fusion, and multi-exposure image fusion. Numerical results verify the effectiveness of the proposed method."
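The central idea in the abstract, splitting the information shared across modalities from the information unique to each modality via sparse coding, can be illustrated with a toy example. The sketch below is a heavily simplified illustration under assumed conditions (1-D signals, identity dictionaries, plain ISTA-style soft-thresholding iterations; the function names are invented here), not the patented CU-Net or its learned convolutional blocks.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the L1 penalty: shrinks values toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def split_common_unique(x1, x2, n_iter=2000, step=0.1, lam=0.05):
    """Toy sparse-coding split of two signals into a common code c and
    unique codes u1, u2, under the model x_i ~ c + u_i with L1 sparsity
    on all codes. Illustrative only; not the patented CU-Net method."""
    c = np.zeros_like(x1)
    u1 = np.zeros_like(x1)
    u2 = np.zeros_like(x2)
    for _ in range(n_iter):
        r1 = c + u1 - x1  # residual for modality 1
        r2 = c + u2 - x2  # residual for modality 2
        # Proximal gradient updates: c sees both residuals (shared),
        # each u_i sees only its own modality's residual (unique).
        c = soft_threshold(c - step * (r1 + r2), step * lam)
        u1 = soft_threshold(u1 - step * r1, step * lam)
        u2 = soft_threshold(u2 - step * r2, step * lam)
    return c, u1, u2
```

Because the common code serves both reconstruction terms while paying the L1 price only once, the optimization naturally routes shared structure into c and modality-specific detail into u1 and u2, which mirrors, at a toy scale, the common/unique splitting the abstract attributes to the MCSC model.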

Disclaimer: Curated by HT Syndication.