MUMBAI, India, Jan. 9 -- Intellectual Property India has published a patent application (202511105370 A) filed by Sharda University, Greater Noida, Uttar Pradesh, on Oct. 31, 2025, for 'artificial intelligence driven enterprise resource planning system.'
Inventor(s) include Rayyaan Wani; Mohd Zaid; and Jyoti Pruthi.
The application for the patent was published on Dec. 12, under issue no. 50/2025.
According to the abstract released by Intellectual Property India: "The present disclosure generally relates to educational technology systems. More specifically, the present disclosure relates to an artificial intelligence driven enterprise resource planning (ERP) system. BACKGROUND OF THE DISCLOSURE [0002] Educational campuses today are dynamic ecosystems that generate and rely on vast amounts of interconnected information. This requires the integration of advanced computational techniques with interactive platforms that deliver context-aware guidance, improving both accessibility and efficiency. The system is designed to provide context-aware responses, ensuring that users receive outputs that are both timely and actionable. This marks a significant advancement in making academic processes more seamless, as the system adapts to the dynamic needs of the campus environment. [0003] The system consolidates diverse services into a single, intelligent channel. This system makes information more accessible, effectively removing the barrier between institutional databases and students. By doing so, the system transforms the user experience into an interactive, efficient, and personalized engagement with academic resources. A major novelty of the system lies in its integration of intelligent navigation with data-driven planning. For large or multi-building campuses, locating classrooms, laboratories, or administrative offices often becomes a challenge, particularly for new students or visitors. The system addresses this by providing real-time visual guidance, making movement across the campus intuitive and efficient. At the same time, the underlying engine optimizes the way data is accessed and delivered. By anticipating high-demand queries and managing them intelligently, it minimizes delays and ensures smoother performance during peak times. 
This dual ability, offering both accurate physical navigation and optimized digital access, creates a unique advantage that enhances academic efficiency and reduces everyday friction for students and faculty alike. [0004] The present invention actively learns user patterns, preferences, and academic behavior over time. As a result, the system can generate proactive reminders, timely alerts, and personalized recommendations that align with each individual's academic journey. The platform ensures that the user is guided in a way that is both relevant and supportive. This emphasis on personalization brings a human-like quality to digital academic planning, fostering better engagement and improved decision-making. By merging artificial intelligence, adaptive personalization, and immersive interfaces, the system stands apart as an innovative solution that redefines how students and institutions interact with educational resources. [0005] Conventional academic enterprise resource planning (ERP) systems are primarily designed as repositories of structured institutional information such as timetables, attendance records, examination schedules, and administrative notices. While these systems provide access to necessary datasets, they are inherently static in nature and incapable of delivering real-time, context-aware insights. Students and faculty are required to interact with rigid database queries or pre-defined dashboards, which cannot accommodate human-like questions and dynamic conditions such as time, location, or personalized academic constraints. The absence of natural query handling creates inefficiencies, as users are forced to manually retrieve and interpret raw data. This limitation of existing solutions demonstrates a clear need for intelligent mechanisms capable of understanding natural language queries and automatically applying relevant contextual parameters. 
[0006] In conventional systems, attendance, course management, results, and event-related information are distributed across different platforms, requiring users to repeatedly log in and switch between applications. This creates friction and reduces accessibility, particularly in high-demand academic scenarios. Some prior works have attempted to consolidate information into unified dashboards, yet these remain limited to static datasets and lack personalization or adaptability. Furthermore, none of the conventional enterprise resource planning (ERP) frameworks offer seamless indoor navigation support, which is critical for large and complex campus infrastructures. [0007] Another limitation in prior art relates to non-optimized data handling and a lack of proactive engagement with users. Existing systems do not leverage personalization or behavioural learning, which means alerts and notifications remain generic rather than tailored to individual students. The absence of proactive support mechanisms limits their ability to act as intelligent academic assistants. The currently available systems do not integrate advanced technologies such as artificial intelligence-driven personalization or augmented reality-based navigation. [0008] Thus, in light of the above-stated discussion, there exists a need for an artificial intelligence driven enterprise resource planning (ERP) system with augmented reality. SUMMARY OF THE DISCLOSURE [0009] The following is a summary description of illustrative embodiments of the invention. It is provided as a preface to assist those skilled in the art to more rapidly assimilate the detailed design discussion which ensues, and is not intended in any way to limit the scope of the claims which are appended hereto in order to particularly point out the invention. 
[0010] According to illustrative embodiments, the present disclosure focuses on an artificial intelligence driven enterprise resource planning (ERP) system with navigation that overcomes the above-mentioned disadvantages or provides the users with a useful or commercial choice. [0011] An objective of the present disclosure is to enhance overall student experience by combining artificial intelligence driven decision support with immersive augmented reality visualization, making academic planning intuitive and efficient. [0012] An objective of the present disclosure is to deliver personalized insights by learning individual student patterns such as class preferences, attendance risks, and activity trends. [0013] An objective of the present disclosure is to integrate real-time augmented reality visualization based indoor navigation for guiding students across multi-building campuses with accuracy and ease. [0014] An objective of the present disclosure is to provide a natural language interface that interprets student queries in human-like form and converts them into structured, actionable responses. [0015] An objective of the present disclosure is to optimize data access through predictive caching and preloading mechanisms, ensuring faster response times during high-demand periods. [0016] An objective of the present disclosure is to generate proactive alerts and reminders about class changes, upcoming deadlines, low attendance risks, and event notifications. [0017] An objective of the present disclosure is to unify enterprise resource planning (ERP) datasets such as timetables, grades, attendance, and events into a seamless, intelligent student support layer. [0018] An objective of the present disclosure is to ensure real-time adaptability by updating class schedules, faculty absences, or location changes instantly in the system. 
[0019] An objective of the present disclosure is to reduce user dependency on multiple portals by consolidating all academic services into a single artificial intelligence driven application. [0020] An objective of the present disclosure is to support predictive academic planning by analysing attendance trends, timetables, and student behaviour to forecast risks and suggest optimal strategies for maintaining compliance with academic requirements. [0021] In light of the above, in one aspect of the present disclosure, a system for enterprise resource planning (ERP) with artificial intelligence (AI) and real-time augmented-reality (AR) navigation is disclosed herein. The system comprises a user interface integrated into a user device, the user interface configured to receive queries. The system includes a communication network configured to transmit data between the several components of the system. The system also includes a processing unit connected to the user interface via the communication network, the processing unit configured to interpret queries and deliver enterprise resource planning (ERP), wherein the processing unit further comprises a data-input module configured to receive input data from the user interface, a data-pre-processing module configured to preprocess input data by normalizing live camera-feed data, encoding categorical variables, handling missing inputs, and synchronizing time and location information for accurate navigation, a voice query interpretation module configured to process spoken queries received via a user interface, an image query interpretation module configured to analyse captured visual input and extract relevant features for responding to image-based queries, a text query interpretation module configured to process typed queries received via a user interface, a navigation module configured to render indoor navigation arrows, signs, and distance markers on a live camera view and to determine user position, a response 
optimization module configured to efficiently retrieve and combine data with real-time contextual information to generate the most relevant answer to a user query, an alert-generation module configured to monitor system data and trigger real-time notifications based on schedule changes, attendance risks, or other critical events, a personalization module configured to learn and adapt to user-specific patterns, preferences, and academic requirements to deliver customized recommendations and reminders, and an output module configured to transmit the processed data and context-based notifications to the user interface. [0022] In one embodiment, the system further comprises a cloud database configured to store and update enterprise resource planning (ERP) information, including but not limited to timetables, attendance records, faculty schedules, holiday calendars, and location data. [0023] In one embodiment, the user interface is further configured to display real-time query responses, personalized notifications, and augmented-reality navigation cues directly on the live camera feed for guiding the user within the campus. [0024] In one embodiment, the user interface further comprises a chatbot configured to interact with users by receiving text and voice-based queries, providing real-time timetable updates, class-location guidance, attendance-risk notifications, personalized reminders, and retrieving historical academic records. [0025] In one embodiment, the processing unit further comprises a user authentication module configured to authenticate users, manage secure access credentials, and provide role-based permissions for retrieving and updating enterprise resource planning data (ERP), thereby enabling personalized attendance-risk calculations for each authenticated student. 
[0026] In one embodiment, the navigation module is further configured to integrate with indoor positioning technologies, including IoT sensors, Wi-Fi triangulation, and Bluetooth Low Energy (BLE) beacons, to determine a user's precise location within multi-building campuses. [0027] In one embodiment, the response optimization module is further configured to implement a smart cache mechanism that predicts and temporarily stores frequently requested data, including but not limited to morning class schedules or attendance records before examinations, in fast-access memory, to provide faster real-time responses to student queries. [0028] In one embodiment, the alert-generation module is further configured to analyse historical attendance and timetable records to forecast future attendance percentages and automatically alert a student when projected attendance is predicted to fall below a predefined threshold. [0029] In one embodiment, the personalization module is configured to learn user-specific patterns, preferences, and academic requirements, and to generate customized recommendations, reminders, and context-based guidance to enhance user engagement and efficiency. [0030] In light of the above, in one aspect of the present disclosure, a method for providing enterprise resource planning (ERP) services with artificial intelligence (AI) and navigation is disclosed herein. The method comprises receiving text queries, voice-based queries, and live camera-feed data via the user interface integrated into a user device. The method includes transmitting data between the several components of the system via a communication network. The method also includes capturing live images and video for generating image-based queries and for supporting navigation via a camera unit. The method also includes capturing voice input for initiating voice-based queries via a microphone unit. 
The method also includes converting the processed responses into audible outputs, including navigation guidance, attendance alerts, and personalized reminders via a speaker unit. The method also includes processing data to interpret queries and deliver enterprise resource planning (ERP) via a processing unit comprising several modules. The method also includes receiving the input data from the user interface via a data input module. The method also includes preprocessing input data by normalizing live camera-feed data, encoding categorical variables, handling missing inputs, and synchronizing time and location information for accurate navigation via a data-preprocessing module. The method also includes processing spoken queries to analyse captured voice input, and extract intent for further processing via a voice query interpretation module. The method also includes analysing captured visual input to process live images or video from the camera unit, extract relevant features, and interpret image-based queries for navigation via an image query interpretation module. The method also includes processing text queries, identifying key entities and intent, and converting them into structured commands via a text query interpretation module. The method also includes rendering indoor navigation arrows, signs, and distance markers on a live camera view and determining user position via a navigation module. The method also includes efficiently retrieving and combining data with real-time contextual information to generate the most relevant answer to a user query via a response optimization module. The method also includes monitoring system data to trigger real-time notifications on schedule changes, faculty absences, or low-attendance risks via an alert-generation module. The method also includes learning student-specific patterns to generate customized reminders and recommendations via a personalization module. 
The method also includes transmitting the processed data and context-based notifications to the user interface via an output module. [0031] These and other advantages will be apparent from the present application of the embodiments described herein. [0032] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below. [0033] These elements, together with the other aspects of the present disclosure and various features, are pointed out with particularity in the claims annexed hereto and form a part of the present disclosure. For a better understanding of the present disclosure, its operating advantages, and the specified object attained by its uses, reference should be made to the accompanying drawings and descriptive matter in which there are illustrated exemplary embodiments of the present disclosure. BRIEF DESCRIPTION OF THE DRAWINGS [0034] To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the following briefly describes the accompanying drawings required for describing the embodiments or the prior art. Apparently, the accompanying drawings in the following description merely show some embodiments of the present disclosure, and a person of ordinary skill in the art can derive other implementations from these accompanying drawings without creative efforts. All of the embodiments or implementations shall fall within the protection scope of the present disclosure. 
[0035] The advantages and features of the present disclosure will become better understood with reference to the following detailed description taken in conjunction with the accompanying drawing, in which: [0036] FIG. 1 illustrates a block diagram of a system for enterprise resource planning (ERP) with artificial intelligence (AI) and real-time augmented-reality (AR) navigation, in accordance with an exemplary embodiment of the present disclosure; [0037] FIG. 2 illustrates a method for providing enterprise resource planning (ERP) services with artificial intelligence (AI) and real-time augmented-reality (AR) navigation, in accordance with an exemplary embodiment of the present disclosure; [0038] FIG. 3 illustrates the workflow of query submission in the artificial intelligence driven enterprise resource planning (ERP) navigation system, in accordance with an exemplary embodiment of the present disclosure; [0039] FIG. 4 illustrates the flowchart of an intelligent query handling process in the system, in accordance with an exemplary embodiment of the present disclosure; and [0040] FIG. 5 illustrates a flow diagram of the response processing and predictive learning phase of the system, in accordance with an exemplary embodiment of the present disclosure. [0041] Like reference numerals refer to like parts throughout the description of several views of the drawing. [0042] An enterprise resource planning (ERP) system is illustrated in the accompanying drawings, in which like reference letters indicate corresponding parts in the various figures. It should be noted that the accompanying figure is intended to present illustrations of exemplary embodiments of the present disclosure. This figure is not intended to limit the scope of the present disclosure. It should also be noted that the accompanying figure is not necessarily drawn to scale. 
DETAILED DESCRIPTION OF THE DISCLOSURE [0043] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure. [0044] In the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without some of these specific details. [0045] Various terms used herein are shown below. To the extent a term is used, it should be given the broadest definition that persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing. [0046] The terms "a" and "an" herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. [0047] The terms "having", "comprising", "including", and variations thereof signify the presence of a component. [0048] Reference is now made to FIG. 1 through FIG. 5 to describe various exemplary embodiments of the present disclosure. FIG. 1 illustrates a block diagram of a system for enterprise resource planning (ERP) with artificial intelligence (AI) and real-time augmented-reality (AR) navigation, in accordance with an exemplary embodiment of the present disclosure. [0049] The system 100 may include a user interface 102, a communication network 110, a camera input unit 106, a microphone unit 108, a speaker unit 136, and a processing unit 112. 
[0050] In one embodiment of the present invention, the system 100 further comprises a cloud database 138 configured to store and update enterprise resource planning (ERP) information, including but not limited to timetables, attendance records, faculty schedules, holiday calendars, and location data. [0051] In one embodiment of the present invention, the system 100 employs an ERP database implemented using PostgreSQL, hosted within a cloud database 138 infrastructure to store and manage structured institutional data, including timetables, attendance, academic records, faculty information, and administrative details. [0052] The user interface 102 is integrated into a user device 104. The user interface 102 is configured to receive queries in different forms, including text, voice, and image based input, making it simple and natural for users to communicate with the system 100. [0053] In one embodiment of the present invention, the user device 104 may include a smartphone, a tablet, a laptop, a personal computer, a desktop computer, a smart device, or any other internet-enabled electronic device, which may be capable of supporting user interaction and secure communication with the system 100. [0054] In one embodiment of the present invention, the user interface 102 is further configured to display real-time query responses, personalized notifications, and augmented-reality navigation cues directly on the live camera feed for guiding the user within the campus. [0055] In one embodiment of the present invention, the user interface 102 further comprises a chatbot 140 configured to interact with users by receiving text and voice-based queries, providing real-time timetable updates, class-location guidance, attendance-risk notifications, personalized reminders, and retrieving historical academic records. 
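Paragraph [0051] names PostgreSQL as the store for timetables, attendance, and related records but publishes no schema. The sketch below is one plausible shape for two such tables, using Python's built-in sqlite3 in place of a PostgreSQL server so it runs standalone; the table and column names are assumptions, not details from the filing.

```python
import sqlite3

# Illustrative schema only; sqlite3 stands in for the PostgreSQL
# database named in the application. Names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE timetable (
    course_code TEXT,
    room        TEXT,
    day         TEXT,
    start_time  TEXT
);
CREATE TABLE attendance (
    student_id  TEXT,
    course_code TEXT,
    attended    INTEGER,  -- 1 = present, 0 = absent
    on_date     TEXT
);
""")
conn.execute("INSERT INTO timetable VALUES ('CS101', 'B-204', 'Mon', '09:00')")
conn.execute("INSERT INTO attendance VALUES ('S001', 'CS101', 1, '2025-10-01')")

# A lookup the chatbot layer might issue: where does CS101 meet?
room = conn.execute(
    "SELECT room FROM timetable WHERE course_code = 'CS101'"
).fetchone()[0]
print(room)
```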
[0056] In one embodiment of the present invention, the chatbot 140 provides multi-language support to allow users to interact with the system 100 in their preferred language. [0057] In one embodiment of the present invention, the chatbot 140 is configured to operate via voice, text, and image based functionalities. [0058] The communication network 110 is configured to transmit data between the several components of the system 100. The communication network 110 seamlessly transmits real-time information between the user interface 102, the cloud database 138, and the processing unit 112. [0059] In one embodiment of the present invention, the communication network 110 may be both wired and wireless. [0060] In one embodiment of the present invention, the communication network 110 may include Wi-Fi, Bluetooth, Ethernet, cellular networks such as 2G, 3G, 4G, and 5G, Wide Area Network (WAN), Local Area Network (LAN), Virtual Private Network (VPN), Metropolitan Area Network (MAN), serial communication protocols, and universal serial bus (USB) interfaces for input/output connectivity. Embodiments of the present disclosure are intended to cover all types of communication technologies and networks, including known, related art, and/or later developed technologies. [0061] The processing unit 112 is connected to the user interface 102 via the communication network 110. The processing unit 112 is configured to interpret queries and deliver enterprise resource planning (ERP). The processing unit 112 further comprises several modules, including a data-input module 116, a data-pre-processing module 118, a voice query interpretation module 120, an image query interpretation module 122, a text query interpretation module 124, a navigation module 126, a response optimization module 128, an alert-generation module 130, a personalization module 132, and an output module 134. 
[0062] In one embodiment of the present invention, the processing unit 112 may include, but is not limited to, a microcontroller, a microprocessor, a computing device, a development board, an application-specific integrated circuit (ASIC), a system-on-chip (SoC), and so forth. [0063] The data-input module 116 is configured to receive input data from the user interface 102. The data-input module 116 receives different kinds of inputs from the user interface and forwards them for further processing. The data-input module 116 acts as the initial gateway of the processing unit 112 for capturing all relevant academic and contextual information entered by users. [0064] The data-pre-processing module 118 is configured to preprocess input data by normalizing live camera-feed data, encoding categorical variables, handling missing inputs, and synchronizing time and location information for accurate navigation. The data pre-processing module 118 ensures that the raw input received is normalized by reducing noise, standardizing formats, and aligning data with system 100 requirements. It encodes categorical variables such as course codes, classroom labels, or faculty identifiers into a structured format that can be efficiently processed. It also handles missing or incomplete data by applying appropriate corrections or approximations, ensuring continuity in the workflow. The data-pre-processing module 118 synchronizes time and location metadata, which is essential for accurate navigation and context-aware responses. [0065] In one embodiment of the present invention, the processing unit 112 further comprises an authentication module 114 that is configured to authenticate users, manage secure access credentials, and provide role-based permissions for retrieving and updating enterprise resource planning data, thereby enabling personalized attendance-risk calculations for each authenticated student. 
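The pre-processing steps that paragraph [0064] lists, encoding categorical variables, handling missing inputs, and synchronizing time metadata, could look roughly like the minimal sketch below; the field names, vocabulary, and defaults are illustrative assumptions, not details from the filing.

```python
from datetime import datetime, timezone

# Hypothetical categorical vocabulary for course codes.
COURSE_CODES = {"CS101": 0, "CS102": 1}

def preprocess(raw: dict) -> dict:
    out = {}
    # Encode categorical variables (course codes) as integer ids;
    # unknown codes map to -1 rather than raising an error.
    out["course_id"] = COURSE_CODES.get(raw.get("course_code"), -1)
    # Handle missing inputs with a neutral default so the workflow
    # continues, as the module description suggests.
    out["room"] = raw.get("room") or "UNKNOWN"
    # Synchronize time metadata to UTC so schedule and navigation
    # data line up on one clock.
    out["timestamp"] = raw.get("timestamp") or datetime.now(timezone.utc).isoformat()
    return out

clean = preprocess({"course_code": "CS101",
                    "timestamp": "2025-10-31T09:00:00+00:00"})
print(clean["course_id"], clean["room"])
```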
[0066] The voice query interpretation module 120 is configured to process spoken queries received via a user interface. The voice query interpretation module 120 uses natural language processing (NLP) to convert audio into text, parse sentence structures, and extract intent along with key entities. The voice query interpretation module 120 applies contextual understanding, such as distinguishing between a general inquiry and a time-sensitive request. This enhances accessibility and convenience, allowing students or faculty to interact with the enterprise resource planning (ERP) system 100 hands-free and in natural language, thereby reducing the need for repetitive logins or manual searches. [0067] The image query interpretation module 122 is configured to analyse captured visual input and extract relevant features for responding to image-based queries. It analyses live images or videos, extracting relevant features that correspond to the user's query. The image query interpretation module 122 identifies landmarks, building layouts, or classroom labels, enabling the system to overlay navigation cues. By transforming visual input into actionable data, this module adds a layer of interactivity and efficiency, making navigation and information retrieval more intuitive and less text-heavy for the user. [0068] The text query interpretation module 124 is configured to process typed queries received via a user interface. The text query interpretation module 124 specializes in handling typed inputs from the user interface. It interprets structured and unstructured queries, extracts the intent behind them, and converts them into machine-readable commands. By ensuring that textual data is accurately parsed and mapped, this text query interpretation module 124 guarantees that students and faculty receive precise and unambiguous responses. 
[0069] In one embodiment, the voice query interpretation module 120 and the text query interpretation module 124 utilise natural language processing models, including but not limited to BERT, GPT variants, or custom large language models (LLMs) fine-tuned on academic ERP data to accurately interpret spoken and typed queries by understanding intent, extracting contextual meaning, and converting them into structured commands for further processing. [0070] The navigation module 126 is configured to render indoor navigation arrows, signs, and distance markers on a live camera view and to determine user position. The navigation module 126 determines the user's current position by integrating indoor positioning technologies. Once the location is established, the navigation module 126 overlays digital navigation cues such as arrows, signs, and distance markers directly onto the live camera feed of the user's device. These visual cues provide step-by-step, context-aware directions that guide the user toward classrooms, laboratories, or administrative offices without the need for manual map interpretation. [0071] In one embodiment of the present invention, the navigation module 126 is further configured to integrate with indoor positioning technologies, including IoT sensors, Wi-Fi triangulation, and Bluetooth Low Energy (BLE) beacons, to determine a user's precise location within multi-building campuses. IoT sensors may provide localized signals for detecting user presence in specific zones, while Wi-Fi triangulation leverages multiple access points to calculate approximate coordinates based on signal strength and latency. Bluetooth Low Energy (BLE) beacons enable fine-grained, short-range positioning that enhances accuracy in places including but not limited to corridors, staircases, or large auditoriums. 
By combining these inputs, the navigation module 126 can provide real-time, seamless, and highly accurate location tracking, which is then used to render augmented-reality guidance such as arrows, distance markers, and contextual signs on the user device 104. [0072] The response optimization module 128 is configured to efficiently retrieve and combine data with real-time contextual information to generate the most relevant answer to a user query. The response optimization module 128 retrieves required data from the integrated system and augments it with real-time contextual information such as ongoing schedules, attendance rules, or event updates. By intelligently filtering and combining multiple data sources, the response optimization module 128 avoids redundancy and minimizes delays, thereby delivering concise and accurate outputs. The response optimization module 128 thus plays a critical role in maintaining the responsiveness and reliability of the system, ensuring that information is delivered not only quickly but also with maximum contextual accuracy. [0073] In one embodiment of the present invention, the response optimization module 128 is further configured to implement a smart cache mechanism that predicts and temporarily stores frequently requested data, including but not limited to morning class schedules or attendance records before examinations, in fast-access memory, to provide faster real-time responses to student queries. [0074] The alert-generation module 130 is designed to continuously monitor academic and scheduling data to identify conditions that require immediate attention. Once such an event is identified, the module generates real-time alerts that are transmitted to the user through notifications or audio prompts. Its proactive functionality ensures that users are always informed of critical developments without the need for manual checking. 
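The smart-cache mechanism of paragraph [0073], which preloads predicted hot data such as morning class schedules into fast-access memory, might be sketched as follows; the cache keys, prefetch trigger, and backend are assumptions for illustration, not the filing's design.

```python
class SmartCache:
    """Toy predictive cache: preload expected hot keys, count misses."""

    def __init__(self, fetch):
        self._fetch = fetch      # slow backend lookup (e.g. the ERP DB)
        self._store = {}         # fast-access in-memory store
        self.misses = 0

    def prefetch(self, keys):
        """Preload data the system predicts will be in high demand."""
        for k in keys:
            self._store[k] = self._fetch(k)

    def get(self, key):
        if key not in self._store:
            # Cache miss: fall back to the slow backend.
            self.misses += 1
            self._store[key] = self._fetch(key)
        return self._store[key]

backend = lambda key: f"rows for {key}"   # stand-in for a DB query
cache = SmartCache(backend)
# Before morning classes, preload the predicted hot query.
cache.prefetch(["timetable:morning"])
answer = cache.get("timetable:morning")   # served with no miss
print(answer, cache.misses)
```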
[0075] In one embodiment of the present invention, the alert-generation module 130 is further configured to analyse historical attendance and timetable records to forecast future attendance percentages and automatically alert a student when projected attendance is predicted to fall below a predefined threshold.

[0076] The personalization module 132 enhances the overall user experience by tailoring system responses to individual needs and patterns. It learns from a student's usage behavior, academic records, and preferences, adapting over time to provide customized recommendations. The personalization module 132 continuously learns from user interactions, academic schedules, and historical data to build a dynamic profile for every student. The personalization module 132 also generates intelligent reminders and alerts that align with the user's daily routine and academic requirements.

[0077] In one embodiment of the present invention, the personalization module 132 is configured to learn user-specific patterns, preferences, and academic requirements, and to generate customized recommendations, reminders, and context-based guidance to enhance user engagement and efficiency.

[0078] The output module 134 is configured to transmit the processed data and context-based notifications to the user interface 102.

[0079] In one embodiment, the system is not limited to a university environment but can be applied across multiple domains, including schools, corporate training facilities, and government institutions, thereby enabling efficient management of schedules, personalized assistance, and augmented reality based navigation in diverse organizational settings.

[0080] FIG. 2 illustrates a method for providing enterprise resource planning (ERP) services with artificial intelligence (AI) and real-time navigation, in accordance with an exemplary embodiment of the present disclosure.
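Paragraph [0075] leaves the forecasting method open. A minimal sketch, assuming a simple linear projection (the function names and the 75% threshold are illustrative, not taken from the disclosure):

```python
def project_attendance(attended, held, remaining, expected_rate=None):
    """Project the end-of-term attendance percentage.
    If expected_rate is None, assume the student keeps their current rate."""
    if expected_rate is None:
        expected_rate = attended / held
    projected_attended = attended + expected_rate * remaining
    return 100.0 * projected_attended / (held + remaining)

def attendance_alert(attended, held, remaining, threshold=75.0):
    """Return an alert string when projected attendance falls below
    the threshold, else None. Threshold value is an assumption."""
    pct = project_attendance(attended, held, remaining)
    if pct < threshold:
        return f"Alert: projected attendance {pct:.1f}% is below {threshold}%"
    return None
```

For example, a student who has attended 28 of 40 classes with 10 remaining projects to 70%, which would trigger the alert under a 75% threshold.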
[0081] The method 200 may include the following steps:

[0082] At step 202, receiving text, voice-based queries, and live camera-feed data via the user interface 102 integrated into a user device 104.

[0083] At step 204, transmitting data between the several components of the system 100 via a communication network 110.

[0084] At step 206, capturing live images and video for generating image-based queries and for supporting navigation via a camera unit 106.

[0085] At step 208, capturing voice input for initiating voice-based queries via a microphone unit 108.

[0086] At step 210, converting the processed responses into audible outputs, including navigation guidance, attendance alerts, and personalized reminders via a speaker unit 136.

[0087] At step 212, processing data to interpret queries and deliver enterprise resource planning (ERP) services via a processing unit 112 comprising several modules.

[0088] At step 214, receiving the input data from the user interface 102 via a data input module 116.

[0089] At step 216, preprocessing input data by normalizing live camera-feed data, encoding categorical variables, handling missing inputs, and synchronizing time and location information for accurate navigation via a data-pre-processing module 118.

[0090] At step 218, processing spoken queries to analyse captured voice input and extract intent for further processing via a voice query interpretation module 120.

[0091] At step 220, analysing captured visual input to process live images or video from the camera unit 106, extract relevant features, and interpret image-based queries for navigation via an image query interpretation module 122.

[0092] At step 222, processing text queries, identifying key entities and intent, and converting them into structured commands via a text query interpretation module 124.

[0093] At step 224, rendering indoor navigation arrows, signs, and distance markers on a live camera view and determining user position via a navigation module 126.
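Paragraphs [0069] and [0092] describe converting free-text queries into structured commands via NLP models. As a stand-in for such a model, the toy keyword matcher below shows only the input/output shape; the intents, patterns, and room-code format are invented for illustration and are not the patent's design:

```python
import re

# Toy intent patterns standing in for the fine-tuned language model
# described in the disclosure; intents and slots are illustrative only.
INTENT_PATTERNS = {
    "get_schedule":   re.compile(r"\b(schedule|timetable|class(es)?)\b", re.I),
    "get_attendance": re.compile(r"\battendance\b", re.I),
    "navigate":       re.compile(r"\b(where|navigate|find|room|lab)\b", re.I),
}

def interpret_query(text):
    """Map a free-text query to a structured command dict.
    A real system would use an NLP model; this is keyword matching."""
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(text):
            # Hypothetical room-code entity, e.g. "B-204".
            room = re.search(r"\b([A-Z]-?\d{3})\b", text)
            return {"intent": intent,
                    "entities": {"room": room.group(1)} if room else {}}
    return {"intent": "unknown", "entities": {}}
```

For instance, "Where is lab B-204?" would map to the `navigate` intent with a `room` entity of `B-204`.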
[0094] At step 226, efficiently retrieving and combining data with real-time contextual information to generate the most relevant answer to a user query via a response optimization module 128.

[0095] At step 228, learning student-specific patterns to generate customized reminders and recommendations via a personalization module 132.

[0096] At step 230, transmitting the processed data and context-based notifications to the user interface 102 via an output module 134.

[0097] FIG. 3 illustrates the workflow of query submission in the artificial intelligence driven enterprise resource planning (ERP) driven navigation system, in accordance with an exemplary embodiment of the present disclosure.

[0098] The method 300 may include the following steps:

[0099] At step 302, a student submits a query in text or voice format via the user interface 102.

[0100] At step 304, the raw query is transmitted to the natural language processing engine, where the query is processed to extract intent and entities.

[0101] At step 306, the extracted intent and entities are transmitted to the context engine, where context filters such as time, location, and attendance rules are applied to generate a filtered query.

[0102] At step 308, the filtered query is forwarded to the query optimizer, which prepares it for execution by checking available sources.

[0103] At step 310, the query optimizer checks the cache layer to see if the requested data is already stored.

[0104] At step 312, if a cache hit occurs, the system 100 retrieves cached results directly from the cache layer.

[0105] At step 314, if a cache miss occurs, the query is directed to the enterprise resource planning (ERP) system 100, which retrieves the requested data. The cache layer is then updated with the new results.

[0106] At step 316, the optimized and enriched results are returned, formatted, and combined with augmented reality rendering instructions for display on the user device 104.
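Steps 310–314 describe the cache-hit/cache-miss path: check the cache, serve a hit directly, and on a miss fetch from the ERP back end and store the result. A minimal sketch of such a cache layer, with a TTL and names that are assumptions rather than the patent's design:

```python
import time

class CacheLayer:
    """Minimal TTL cache illustrating the hit/miss flow of steps 310-314.
    fetch_fn stands in for the ERP back-end query; the TTL is an assumption."""
    def __init__(self, fetch_fn, ttl=300.0):
        self.fetch_fn = fetch_fn
        self.ttl = ttl
        self.store = {}  # key -> (expiry_time, value)

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1], "hit"       # step 312: serve the cached result
        value = self.fetch_fn(key)       # step 314: cache miss, query ERP
        self.store[key] = (time.monotonic() + self.ttl, value)
        return value, "miss"
```

The first request for a key is a miss that hits the back end; repeat requests inside the TTL window are served from memory, which is how the disclosure's cache would reduce load during peak times.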
[0107] FIG. 4 illustrates the flowchart of an intelligent query handling process in the system 100, in accordance with an exemplary embodiment of the present disclosure. The process begins at 402, where a student submits a query in either text or voice format. This input is received by the client interface on a web or mobile device at 404, which acts as the entry point for communication. The query is then passed to the natural language engine at 406, where it is processed and further analyzed at 408 to extract the intent and relevant entities. Once identified, the query moves to the context engine at 410, which applies contextual parameters such as time, location, and attendance to ensure accuracy. Following this, the query optimizer at 412 refines the request before it is directed to the cache layer for validation. At 414, the system 100 checks whether the requested information exists in the live or predictive cache. If a cache hit occurs, the cached results are immediately returned to the user, whereas a cache miss triggers a request at 416 for fresh data from the enterprise resource planning (ERP) system 100 at 418. The system 100 generates the required information and sends the results at 420, which are then stored back into the cache at 422 for future optimization. Finally, at 424, the processed and enriched results are returned to the user, ensuring a seamless and efficient query response experience.

[0108] FIG. 5 illustrates a flow diagram of the response processing and predictive learning phase of the system, in accordance with an exemplary embodiment of the present disclosure.

[0109] The method 500 may include the following steps:

[0110] At step 502, the system initiates the response phase by returning either cached or freshly retrieved results for processing.

[0111] At step 504, those results are refined into optimized and enriched outputs to ensure accuracy and contextual relevance.
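FIG. 4's "predictive cache" and the anticipation of high-demand queries in [0073] suggest prefetching based on historical usage. The disclosure does not say how prediction works; one hypothetical approach, sketched below, counts query frequency per hour of day and warms the cache with the most frequent keys for the coming hour:

```python
from collections import Counter

class PredictivePrefetcher:
    """Counts how often each query key is seen per hour of day, then
    prefetches the most frequent keys for a given hour. Illustrative only;
    fetch_fn stands in for the ERP back-end query."""
    def __init__(self, fetch_fn, top_n=3):
        self.fetch_fn = fetch_fn
        self.top_n = top_n
        self.counts = {}  # hour (0-23) -> Counter of query keys

    def log(self, hour, key):
        """Record one observed query at the given hour."""
        self.counts.setdefault(hour, Counter())[key] += 1

    def prefetch(self, hour):
        """Warm the cache with the top-N keys historically seen at this hour."""
        frequent = self.counts.get(hour, Counter()).most_common(self.top_n)
        return {key: self.fetch_fn(key) for key, _ in frequent}
```

Under this scheme, morning class schedules queried heavily at 9 a.m. would be fetched into fast-access memory just before that hour, matching the behaviour [0073] describes.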
[0112] At step 506, a response formatter prepares the refined outputs into multiple presentation formats, including concise textual content and augmented-reality payloads.

[0113] At step 508, the formatted outputs are packaged and forwarded for rendering.

[0114] At step 510, the augmented reality renderer generates navigation overlays such as arrows, markers, and contextual labels synchronized with the prepared text.

[0115] At step 512, the rendered overlays are aligned with the current positioning and session state to ensure on-screen cues match the user's real environment.

[0116] At step 514, the augmented reality elements are displayed with text-based outputs, creating a combined presentation ready for user interaction.

[0117] At step 516, a cache layer monitors usage patterns, logging how queries and results are accessed.

[0118] At step 518, these logs are analyzed by a predictive learning engine, which identifies behavioral trends and recurring needs.

[0119] At step 520, the system updates its learned models with these insights, enhancing its ability to anticipate future requests.

[0120] At step 522, the query optimizer incorporates these updates to improve subsequent responses, ensuring faster, more context-aware performance.

[0121] At step 524, the query optimizer leverages those insights to fine-tune future query handling, completing a cycle of continuous improvement.

[0122] In the best mode of operation, the user initiates interaction by providing input in the form of text, voice-based queries, or live camera-feed data through the user interface 102. The captured data is transmitted securely to other system 100 components via the communication network 110. The camera unit 106 captures real-time images or video to support image-based queries and augmented-reality navigation, while the microphone unit 108 records spoken queries for voice-based interpretation. The processing unit 112 coordinates the core operations by interpreting queries and delivering enterprise resource planning (ERP) functions through specialized modules. The data input module 116 acts as the entry point for receiving user-provided data, which is then pre-processed by the data-pre-processing module 118 to normalize live camera-feed data, encode categorical variables, handle missing inputs, and synchronize time and location metadata for accurate context-awareness. Spoken queries are processed by the voice query interpretation module 120, while captured visual input is analyzed by the image query interpretation module 122 to extract navigation-related features. Text-based queries are parsed by the text query interpretation module 124, which identifies intent and key entities to structure commands for further processing. The speaker unit 136 delivers audible feedback, including navigation guidance, alerts on attendance, and personalized reminders, ensuring seamless multimodal communication with the user.

[0123] Once the queries are structured, the navigation module 126 generates augmented-reality overlays such as arrows, distance markers, and directional signs on the live camera feed to guide the user indoors with high precision. In parallel, the response optimization module 128 retrieves enterprise resource planning (ERP) data, attendance records, schedules, and other contextual information, combining them in real time to generate the most relevant answer to each query. The alert-generation module 130 continuously monitors data streams to detect risks such as schedule changes, faculty absences, or attendance shortages, and immediately generates context-based notifications. The personalization module 132 learns from individual student behavior, patterns, and preferences to provide tailored recommendations, study reminders, and alerts.
Finally, all processed outputs and notifications are transmitted back to the user via the output module 134, completing a seamless cycle of intelligent query resolution and adaptive assistance.

[0124] The proposed system offers significant advantages by streamlining daily activities and improving overall efficiency in academic and professional environments. It enables users to access relevant information quickly through multiple modes of interaction, reducing delays and enhancing convenience. The system 100 also provides reminders and alerts, helping users to stay on track and avoid missing important tasks or events. It is capable of providing guidance that suits the individual needs of each user, which improves usefulness and comfort. The intelligent processing of the system 100 ensures accurate, context-aware responses, while adaptive learning continuously improves performance, making the system more reliable, user-friendly, and responsive over time.

[0125] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it will be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.

[0126] A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware, computer software, or a combination thereof.

[0127] The foregoing descriptions of specific embodiments of the present disclosure have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present disclosure to the precise forms disclosed, and many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described to best explain the principles of the present disclosure and its practical application, and to thereby enable others skilled in the art to best utilize the present disclosure and various embodiments with various modifications as are suited to the particular use contemplated. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but such omissions and substitutions are intended to cover the application or implementation without departing from the scope of the present disclosure.

[0128] Disjunctive language such as the phrase "at least one of X, Y, Z," unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to be present.

[0129] In a case where no conflict occurs, the embodiments in the present disclosure and the features in the embodiments may be mutually combined. The foregoing descriptions are merely specific implementations of the present disclosure, but are not intended to limit the protection scope of the present disclosure. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in the present disclosure shall fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims."
Disclaimer: Curated by HT Syndication.