MUMBAI, India, Jan. 2 -- Intellectual Property India has published a patent application (202541124245 A) filed by Dr. Parvathraj Kittur Mrutyunjaiah Mathad; Dr. Shyamily Perungunnapala Veetil; A. Sevuga Pandian; Dr. Chandru Jathar; Dr. Ashok K Chikaraddi; Mrs. Shubhangi Pramod Patil; Mrs. Rajanee Gajkumar Kavathekar; Swathi Kumari Hatrinja; Priyadarshini Puravankara; and Dr. Suvarna G Kanakaraddi, Mangaluru, Karnataka, on Dec. 9, 2025, for 'explainable reinforcement learning algorithm for autonomous decision optimization.'

The named inventors are the same ten applicants listed above.

The application for the patent was published on Jan. 2, under issue no. 01/2026.

According to the abstract released by Intellectual Property India: "Explainable Reinforcement Learning (XRL) represents a rapidly evolving domain that aims to enhance the transparency and trustworthiness of autonomous decision-making systems. Reinforcement Learning (RL) algorithms enable agents to learn optimal behaviors by interacting with complex and uncertain environments. However, most high-performing RL models function as black boxes, offering limited visibility into why certain actions are chosen. This lack of interpretability restricts their deployment in critical real-world applications where human oversight, accountability, and safety are essential. To address these limitations, this research proposes an Explainable Reinforcement Learning Algorithm designed to optimize autonomous decision processes while simultaneously providing meaningful explanations of the learned policy and behavior. The proposed approach integrates interpretable machine learning techniques with RL to generate human-understandable insights into state-action mappings, reward justification, and sequential decision strategies. Techniques such as hierarchical policy representation, feature attribution, and visualization of value functions are incorporated to enhance interpretability at multiple layers of decision flow. The algorithm ensures that every autonomous action can be traced to logical reasoning or environmental cues that influenced the choice. Experimental validation demonstrates that the system maintains strong performance metrics in dynamic environments while significantly improving transparency. Additionally, user-centric interpretability measures are applied to evaluate the clarity and usefulness of explanations provided to operators and stakeholders.

Results indicate improved user trust, reduced uncertainty, and better decision auditability without compromising adaptability or reinforcement learning efficiency. This research contributes to bridging the gap between intelligent autonomy and human understanding, paving the way for safer, accountable, and ethically aligned deployment of RL in domains such as autonomous vehicles, smart manufacturing, robotics, and defense systems. Overall, the proposed XRL framework supports optimized decision-making with enhanced explainability to promote widespread adoption of autonomous technologies. Keywords: Explainable Reinforcement Learning, Autonomous Decision-Making, Interpretability, Policy Transparency, Human-Centric AI, Trustworthy Autonomous Systems."
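The abstract's central idea, tracing each autonomous action back to the learned values or cues that produced it, can be sketched generically. The minimal Python example below is an illustrative assumption, not the method claimed in the patent application: it trains a toy tabular Q-learning agent on a small chain environment and adds a hypothetical `explain` helper that reports the action values and winning margin behind each decision, a basic form of value-based transparency.

```python
# Illustrative sketch only: tabular Q-learning on a 5-state chain, with a
# simple "explanation" of each choice via the learned action values.
# This is a generic XRL-style illustration, NOT the patented algorithm.
import random

N_STATES, N_ACTIONS = 5, 2          # states 0..4; actions: 0 = left, 1 = right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Move right toward a goal at state 4 (reward 1), or left (reward 0)."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):                # training episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < EPS:   # epsilon-greedy exploration
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

def explain(state):
    """Trace the greedy action choice back to the learned value estimates."""
    best = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    return {"state": state, "action": best,
            "q_values": list(Q[state]),
            "margin": Q[state][best] - min(Q[state])}

print(explain(0))  # the trained agent prefers action 1 (move right) at state 0
```

Real XRL systems of the kind the abstract describes go well beyond this, using feature attribution and hierarchical policies, but the pattern is the same: expose the quantities the agent actually optimized so that a human can audit why an action won.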

Disclaimer: Curated by HT Syndication.