The Kerala High Court has recently issued a set of formal guidelines barring the use of artificial intelligence (AI) in the judicial functions and decision-making of the district courts in the state, in view of the increasing availability of such software tools. By implementing this policy, the Kerala High Court has set a national precedent, becoming the first High Court in the country to introduce such measures to ensure justice, fairness, and transparency in judicial decisions.

Kerala High Court Bans AI in Judicial Functions: Analysis by Nikita Soni
Nikita Soni, LLM (Law and Technology), National Law School of India University (NLSIU), Bengaluru

The Policy Advisory as the Kerala High Court Bans AI

The policy advises exercising extreme caution because, as we have seen, the indiscriminate use of AI tools can result in negative consequences, including violations of privacy rights, data security breaches, and erosion of trust in the judiciary. It permits AI solely as an assistive tool, strictly for certain specified purposes, and urges users to employ it responsibly and to avoid its use in legal reasoning and judicial decision-making.


The policy also states that any violation shall lead to disciplinary proceedings. The new guidelines apply to members of the district judiciary in the state, the staff assisting them, and any interns or law clerks working with them in Kerala. They are intended to keep AI out of judicial decision-making and thereby prevent injustice, as numerous instances have shown that the use of AI in decision-making or in judicial proceedings has backfired.

I believe these guidelines have been introduced to restore ordinary people's faith in the judiciary. In this article, I attempt to highlight the drawbacks of AI use in the judiciary and explain why such guidelines are crucial, a pressing need for the Indian judiciary as a whole, and worth adopting by more states in the country.

AI Hallucinations and the Risk of Wrongful Justice

There are numerous instances of AI tools producing inaccurate or misleading results. When we prompt an AI tool and expect an answer, there is a high chance of receiving an incorrect or false response, especially in matters of law. This generally happens because these tools are trained on a wide range of subjects rather than specific topics, and they sometimes lack sufficient information to provide an accurate answer to our queries. In his paper “Artificial Intelligence and Law: An Overview”, Harry Surden discusses how AI tools are not adequately trained for legal work and lack the human ability to think and apply reasoning, which can lead to biased or indiscriminate answers.

The paper argues that law, a field which requires human cognitive ability, emotion, and understanding to interpret statutes and language, is not one in which AI tools can be considered sufficiently reliable, and that they should not be used in judicial decision-making. The Kerala guidelines emphasise the assistive use of AI tools and advocate their restricted and responsible use, which is the pressing need of the hour.

There have been numerous instances of adjudicators relying on incorrect judgements, wrong sections, or wrong facts. For example, in Buckeye Trust v. Principal Commissioner of Income Tax (ITA No. 1051/Bang/2024), the order relied on precedents that do not exist, apparently generated by AI complete with citations. Once it was discovered that the cited judgements were non-existent, the order had to be recalled.


The issue of hallucination is not new; it has been present since the early days of these systems. Another familiar example: when you prompt an AI tool to find specific statutory sections for an exam answer, it either points you to non-existent sections or provides sections unrelated to your question. Likewise, relying on these tools for justice delivery would result in injustice for the parties involved and ultimately affect the constitutional integrity of our courts in delivering just and fair decisions.

Algorithmic Bias – The Lesson from COMPAS

Beyond hallucinations and misinformation, AI systems often exhibit algorithmic bias. Some countries, such as China and the USA, have integrated AI tools into judicial decision-making. However, the question arises: does this integration guarantee transparency and fairness in the justice delivery system?

In the USA, the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) tool is used to assist decision-making in criminal proceedings. A report by the non-profit newsroom ProPublica highlighted its bias, finding that Black defendants were nearly twice as likely as white defendants to be wrongly flagged as being at high risk of reoffending.

Such a decision-making process is evidently influenced by racial bias, which should play no role in judicial decision-making; in one reported instance, it led to an unjust outcome for a Black individual accused of theft. Consider an informal classroom experiment: thirty law and technology students were asked whether they would prefer their case to remain pending before the court for four months, or have a judge use COMPAS and deliver a decision in four weeks.

The reasoning varied among the students, but they unanimously agreed that if their client were white they would opt for COMPAS, knowing it would be advantageous, and if their client were Black they would reject its use, knowing its bias. A student in a law and technology class may understand how COMPAS operates, the algorithm it uses, and the likely outcome, but the same cannot be expected of an ordinary person, who may not even understand the legal proceedings of the court.

These AI tools are prone to biased decisions because they rely on data that is either curated by humans or learned autonomously by large language models (LLMs). That data may encode historical, cultural, and social stereotypes and group biases, and it is further shaped by the society, community, and environment in which the tools are developed. All of these factors can lead to unfair judgements that may contradict our Constitution.

The Accountability Vacuum

The recommendations or results produced by AI tools cannot be guaranteed to be completely accurate, transparent, or fair, and if an error occurs, it is unclear who should be held responsible. In India, there is no comprehensive legal framework that clearly addresses and defines accountability for the mistakes made by generative AI tools. Imagine, for instance, a tool used in India to assess the likelihood of a person reoffending after being granted bail in a rape case: in generating such a result, the tool would inevitably weigh the person's locality, background, culture, caste, and race, which could lead to a highly unfavourable outcome for some individuals.

People are always eager to seek intellectual property protection for their AI creations, yet when these tools produce incorrect or misleading results, no one steps forward to claim responsibility. In judicial decisions, if an AI tool is used and the outcome is incorrect, who should be held accountable?

Is it the person who created the tool, the person who supplied the data, the judge who relied on the tool for decision-making, or the government that permitted its use? As noted above, India lacks a comprehensive legal framework to address accountability for AI-generated outcomes, and such a framework is a pressing need of the hour, not only in the legal field but across all sectors.

Conclusion: Why the Kerala High Court Bans AI

The guidelines issued as the Kerala High Court bans AI in judicial functions serve as a national benchmark for similar regulations. While the assistive use of AI can be allowed, indiscriminate use can result in discrimination and bias. These guidelines are a crucial initiative by the Kerala High Court. Numerous instances have highlighted the limitations of AI tools in judicial decision-making, often leading to unfavourable and unjust outcomes.

Issues such as AI hallucinations, incorrect responses, bias, and the lack of accountability erode public trust in the Indian judiciary and conflict with the Constitution's fundamental promise of free and fair justice for all. The guidelines will serve as a guiding principle for other states and encourage them to develop similar measures. India currently lacks specific, comprehensive, and exclusive laws addressing algorithmic bias and the accountability of AI-generated decisions, and these guidelines aim to limit AI usage and serve as a guiding framework in the absence of such legislation.

About Author

Nikita Soni is a dedicated LLM student specializing in Law and Technology, with expertise in Data Privacy, Intellectual Property Rights (IPR), and Corporate Law. Currently pursuing her Master’s at the prestigious National Law School of India University (NLSIU) in Bengaluru, she has built a strong foundation in legal research, drafting, and case analysis.

Based in Jaipur, Soni brings practical experience from roles at ManpowerGroup and other organizations, blending academic rigor with real-world application in emerging legal fields. Her work focuses on the dynamic intersection of technology and law, addressing critical issues like data protection and IP in the digital age. Passionate about innovation in legal practice, she aims to contribute to policy and advisory roles shaping India’s tech-legal landscape.
