MUMBAI, India, Feb. 27 -- Intellectual Property India has published a patent application (202641019111 A) filed by Praveen Kumar Koppanati, Pondicherry, India, on Feb. 19, for 'AI-Driven Data Quality Correction Engine for Large-Scale Databases.'
Inventor(s) include Praveen Kumar Koppanati.
The application for the patent was published on Feb. 27, under issue no. 09/2026.
According to the abstract released by Intellectual Property India: "The suggested framework is implemented by following a number of steps once the dataset is preprocessed to manage data quality anomalies. The first step is to identify the quality anomaly that needs fixing and then pick out the relevant attributes that are associated with it. Then, we choose the rows where the specified feature does not show any quality anomalies. After that, we use a filtering method to keep just the records that are most closely related to the quality dimension that we are addressing. After that, the XGBoost model is trained using the chosen rows. At startup, the XGBoost model is configured with the aforementioned parameters, including the normalization method and booster type. Training the model entails fine-tuning its parameters while making use of XGBoost's robust prediction capabilities to ascertain dataset correlations and trends. The whole dataset, comprising the rows containing quality anomalies, is used to train the model when the training phase is over. The XGBoost model uses the imputed information and attributes to do prediction-based quality anomaly correction or approximation, whichever is appropriate for the given quality anomaly. This method improves the dataset's correctness and dependability by fixing or compensating for data quality shortcomings."
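The workflow the abstract describes -- train a gradient-boosting model on rows free of a given quality anomaly, then predict replacement values for the anomalous rows -- can be sketched roughly as follows. This is an illustrative reconstruction, not the patented implementation: the function name, column names, and toy data are hypothetical, and scikit-learn's GradientBoostingRegressor is used here as a stand-in for the XGBoost model named in the filing.

```python
import numpy as np
import pandas as pd
# Stand-in for XGBoost (the filing specifies XGBoost; the API shape is similar)
from sklearn.ensemble import GradientBoostingRegressor

def correct_anomalies(df, target, features):
    """Impute anomalous values (here modeled as NaN) in `target`
    by training on the rows that show no anomaly."""
    clean = df[df[target].notna()]   # rows without the quality anomaly
    dirty = df[df[target].isna()]    # rows needing correction
    model = GradientBoostingRegressor(n_estimators=100, random_state=0)
    model.fit(clean[features], clean[target])   # train on clean rows only
    out = df.copy()
    # Prediction-based correction for the anomalous rows
    out.loc[dirty.index, target] = model.predict(dirty[features])
    return out

# Toy example: treat missing 'price' entries as the quality anomaly
rng = np.random.default_rng(0)
data = pd.DataFrame({"size": rng.uniform(10, 100, 200),
                     "rooms": rng.integers(1, 6, 200).astype(float)})
data["price"] = 3.0 * data["size"] + 5.0 * data["rooms"]
data.loc[data.index[::10], "price"] = np.nan   # inject anomalies
fixed = correct_anomalies(data, "price", ["size", "rooms"])
```

The filing also mentions configuring the booster type and normalization at startup and filtering for records most relevant to the quality dimension being addressed; those steps are omitted from this sketch.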
Disclaimer: Curated by HT Syndication.