Breaking News

After Deaths Linked To Wrong AI Chatbot Advice, IIT Delhi Sets A Milestone By Building A Fairness Assessment Tool

Aarav Sharma
Contributor
February 19, 2026

IIT Delhi has developed an AI fairness assessment tool called “Nishpaksh”. Image: AI generated

Lokesh Sharma, New Delhi. The rapidly increasing use of Artificial Intelligence (AI) has made the technology accessible to the common people, but it has also raised new questions about its accuracy and reliability. In recent years, there have been several cases where information provided by AI chatbots proved to be inaccurate or misleading.

In the United States, an elderly man mistook an AI chatbot for a real woman and set out to meet her, but died on the way. Similarly, an incident abroad in which a woman named Victoria was reportedly encouraged toward suicide by an AI system has sparked a serious debate about the reliability of the technology.

What deficiencies can make an AI response incorrect?

Experts say that AI responses can sometimes be biased or inaccurate due to incomplete information, biased training data, or lack of context. Amid this growing concern, emphasis is being laid on the need for AI that is not only fast and smart but also fair and trustworthy.

For this, IIT Delhi has created a new AI fairness assessment tool named “Nishpaksh” (beta version), designed as per the Telecommunication Engineering Centre's 2023 AI fairness standard (TEC 57050:2023). The tool primarily tests supervised learning models built on tabular data and helps detect whether an AI system is biased against any class, community or category in its decisions.

Professor Ranjita Prasad said that the tool will be available to the general public in two to three months. She said that the special feature of the system is its three-stage evaluation process. In the first stage, the risk level of the AI system is assessed, considering factors such as the application area, the degree of human dependence on its decisions, and the potential adverse consequences of incorrect results. In the second stage, appropriate fairness metrics and thresholds are determined based on that risk, so that more stringent criteria can be applied in high-risk cases.

In the last stage, fairness is analyzed based on the selected parameters, making the entire process transparent and auditable. Experts believe that such standard tools will be very useful for developers, auditors and regulatory bodies, as they provide scientific understanding of the behavior and potential limitations of AI models.
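To make the three-stage idea concrete, here is a minimal Python sketch of a risk-tiered fairness check for a tabular classifier. The metric used here (demographic parity difference) and the threshold values are illustrative assumptions for explanation only; they are not the actual metrics or criteria used by Nishpaksh or defined in TEC 57050:2023.

```python
# Illustrative sketch: a risk-tiered fairness check for a binary classifier.
# The metric and thresholds are assumptions, not Nishpaksh's actual criteria.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates across demographic groups."""
    rates = {}
    for g in set(groups):
        preds_g = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds_g) / len(preds_g)
    return max(rates.values()) - min(rates.values())

# Stage 2: stricter thresholds for higher-risk applications (example values).
RISK_THRESHOLDS = {"low": 0.20, "medium": 0.10, "high": 0.05}

def fairness_verdict(predictions, groups, risk_level):
    """Stage 3: compare the measured gap against the risk-based threshold."""
    gap = demographic_parity_difference(predictions, groups)
    threshold = RISK_THRESHOLDS[risk_level]
    return {"gap": round(gap, 3), "threshold": threshold, "fair": gap <= threshold}

# Example: a model that approves 80% of group A but only 40% of group B,
# evaluated under a high-risk threshold.
preds  = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(fairness_verdict(preds, groups, "high"))  # gap 0.4 exceeds 0.05 -> unfair
```

The design point the sketch illustrates is that the same measured disparity can pass in a low-risk setting but fail in a high-risk one, which is why the risk assessment in stage one precedes the choice of thresholds in stage two.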

The tool is available in open-source beta form on AIKosh and GitHub, allowing researchers and institutions to test the fairness of their systems. With increasing reliance on AI in the future, such fairness and accuracy testing tools will become essential. They will not only increase trust in the technology but also ensure that AI-based decisions are equitable, safe and responsible for all parts of society.
