The Use of Artificial Intelligence in Justice: An Automated Judge?
Milagros Denise Tallarico (*)
Summary: I.- Introduction. II.- The case “Aguilera, Nancy Bibiana and Others v. Aguas Santafesinas S.A. s/ Medida Autosatisfactiva” - Court of First Instance in Civil and Commercial Matters, Rosario, Argentina. III.- Use in judicial prediction. IV.- Challenges of automation in judicial decision-making and application of the Generative AI Acceptable Use Protocol (IAGen) - Court of Justice of the Province of San Juan. V.- Conclusions.
I.- Introduction
Artificial intelligence (AI) has ceased to be a futuristic technology and has become a tangible and growing tool in various sectors—among them, the Judiciary. Justice, traditionally an area where human judgment and detailed analysis have been fundamental, has not been immune to this transformation. Increasingly, judicial systems are adopting AI-based solutions to improve efficiency and accuracy in decision-making.
From prediction algorithms suggesting rulings to research support systems processing large volumes of data, AI is rapidly advancing as a key tool to optimize the administration of justice. However, this progress raises a series of questions about the role that AI should play in a legal system where impartiality and fairness are fundamental. In this context, the role of the judge remains crucial, as decision-making must preserve human judgment, ensuring that technology complements, but does not replace, the magistrate’s ability to apply the law fairly and equitably.
II.- The case: “Aguilera, Nancy Bibiana and Others v. Aguas Santafesinas S.A. s/ Medida Autosatisfactiva”
A recent precedent regarding the application of artificial intelligence in the Argentine judiciary is the referenced case, which reflects one of the Argentine justice system’s first experiences integrating emerging technologies such as AI into the resolution of disputes. The ruling demonstrates how technology, in certain cases, can be used to optimize the management of judicial cases, even in complex situations, and sets a precedent for future implementations of technological tools within the country’s judicial system.
In the case, pending before the Civil and Commercial Court of the 14th Nominación of Rosario (Argentina), the judge granted the request for a medida autosatisfactiva (an urgent, self-contained remedy under Argentine procedural law) filed by a woman who sued a sanitation company over the interruption of potable water service, ordering its restoration while the litigation continued. To verify the company’s non-compliance, the judge used Meta’s artificial intelligence tool within WhatsApp.
The plaintiffs reported that since the installation of a casino, they had experienced a significant decrease in water service, severely affecting their quality of life. Despite making multiple complaints to the supervisory authority, the water pressure remained insufficient, making “daily life unsustainable.” For this reason, they initiated legal action against the company to obtain service in optimal conditions.
The company responded to the lawsuit, arguing that, although the water pressure was not optimal, it had addressed the complaints filed against it in accordance with the terms, deadlines, and procedures established by the regulations governing its public service relationship, and that the pressure had increased to 3 mca (meters of water column). It claimed that the plaintiff’s internal installations were in poor condition, which, in its view, worsened the situation.
In its substantive analysis, the Court observed that, “(…) by resorting to Meta AI, the artificial intelligence button on WhatsApp, in response to the question ‘Is the right of access to water fulfilled with a pressure of less than 1 mca?’ the response was no, understanding that the right to access water implies not only having an adequate amount, but that the water be safe, acceptable, affordable, and accessible, with pressure being an important factor to ensure equitable and safe access to water.”
It adds, “(…) developing its argument, it is pointed out that a pressure as low as 1 mca or less can significantly limit households’ ability to access water effectively, which can have negative impacts on health, hygiene, and overall well-being, among other considerations. It is important to recall that the inquiries made by the judge on websites (in this case, WhatsApp) reflect his task of verifying the facts and arguments provided by the parties in the case.”
It also states that, “(…) access to information that inherently has characteristics of publicity and notoriety does not constitute an overreach in judicial activity, so it cannot be reproached. It should not be overlooked that the rules of experience and sound judgment always have a predominant role when judging, and the judge cannot be unaware of notorious facts or social reality, especially in cases like the present where the vulnerability of one of the parties (consumer) requires a proactive attitude from the judge in pursuit of effective judicial protection.”
The Court concludes that, “(…) it is clearly evident that the defendant failed to provide an essential service like access to water, an issue that cannot be remedied solely through the discounts applied, nor by shifting responsibility to third parties (in this case, the State), as the defendant claims (especially when it has made the same argument for over 10 years and has not provided any evidence to support that it has prompted the activity of the third party to whom it attributes the failure).”
In light of this, the Court ruled in favor of the request and ordered the defendant company to adopt all necessary measures to ensure the plaintiff’s access to water for human use and consumption.
III.- Judicial prediction
The recent ruling leaves us with a fundamental question about the risks associated with judicial prediction. By incorporating AI tools into decision-making, the judicial system takes a step toward modernization and efficiency, but it also raises delicate issues concerning fairness, transparency, and the ability of algorithms to address the complexity of each case. This type of decision prompts us to reflect on how technology might influence the outcomes of trials and whether the benefits promised by automation truly outweigh the potential dangers of a dehumanized or biased justice system.
Judicial prediction refers to the use of AI-based tools to analyze large volumes of data, such as previous rulings, judges’ characteristics, and other relevant factors, in order to forecast the possible outcomes of a legal case. It is an application of AI that uses advanced algorithms to analyze historical data and patterns in past cases. These systems can identify patterns in previous rulings and apply mathematical models to estimate the probability of an outcome, the duration of a trial, or even the decisions of specific courts.
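To make the mechanics concrete, a minimal sketch in Python follows, assuming the scikit-learn library; every feature name, value, and outcome in it is an invented, synthetic illustration rather than real case data.

    # Minimal sketch of judicial prediction over synthetic data.
    from sklearn.linear_model import LogisticRegression

    # Each row encodes a hypothetical past case:
    # [years_of_complaints, water_pressure_mca, defendant_is_utility]
    X = [
        [10, 0.8, 1],
        [2, 3.0, 1],
        [7, 0.5, 1],
        [1, 4.0, 0],
        [12, 0.9, 1],
        [3, 3.5, 0],
    ]
    # 1 = ruling favored the plaintiff, 0 = ruling favored the defendant
    y = [1, 0, 1, 0, 1, 0]

    model = LogisticRegression().fit(X, y)

    # Estimated probability that a new, similar claim succeeds
    new_case = [[9, 0.7, 1]]
    print(model.predict_proba(new_case)[0][1])

Real systems differ from this sketch mainly in scale and feature engineering, not in kind: they still reduce a case to numbers and return a probability, which is precisely where the risks discussed below arise.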
The issue with the development of systems to predict judicial decisions lies in the fact that judges handle mental assumptions, information, and experiences that differ from those of the parties involved. Without human judgment, the use of AI in judicial prediction presents several risks that need to be carefully evaluated. One of the most important is bias in the data. AI algorithms learn from the data they are trained on, and if this data contains historical biases, such as racial, gender, or socioeconomic discrimination, the system may replicate and amplify these biases. This could lead to unjust decisions that favor certain groups over others, perpetuating inequalities in the judicial system.
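How biased training data translates into biased predictions can be shown with a deliberately simple toy example, again in Python with scikit-learn and wholly synthetic data: the training labels below encode a hypothetical historical pattern in which one group was denied relief regardless of the merits of its claims.

    # Toy illustration of bias reproduction over synthetic data. The labels
    # encode a hypothetical history in which group B always lost.
    from sklearn.tree import DecisionTreeClassifier

    # Features: [merit_score (0-10), group (0 = A, 1 = B)]
    X = [[8, 0], [7, 0], [3, 0], [8, 1], [7, 1], [3, 1]]
    # Biased historical outcomes: group B is denied relief even on the merits
    y = [1, 1, 0, 0, 0, 0]

    model = DecisionTreeClassifier(random_state=0).fit(X, y)

    # Two claims identical on the merits, differing only in group membership
    print(model.predict([[9, 0]]))  # [1] -> relief granted
    print(model.predict([[9, 1]]))  # [0] -> relief denied: bias reproduced

Nothing in the model is malicious; it simply learns the pattern it is given, which is why auditing the training data matters as much as auditing the algorithm itself.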
Another significant risk is the lack of transparency in many systems, especially those using deep learning techniques. These models are often considered “black boxes,” as it is difficult to understand how they arrive at their conclusions. In the judicial sphere, this lack of clarity could generate distrust in the system, as the parties involved would not know why a specific prediction or decision was made, potentially affecting the perception of justice.
Moreover, the excessive use of these tools could lead to the dehumanization of justice. Without the ability to understand human dimensions, decisions might become colder and more automatic, failing to adequately reflect the particularities of each case, which could affect the fairness of the judicial process.
On the other hand, the risk of excessive dependence on technology is another important challenge. If judges and lawyers rely too heavily on the predictions generated by AI systems, they could come to base their decisions solely on these results, neglecting other crucial aspects of the case. This dependency might limit the ability of legal professionals to apply critical and nuanced judgment, potentially compromising the quality of justice.
IV.- Challenges of automation in judicial decision-making
The idea of an automated judge, a system capable of making decisions without direct human intervention, is beginning to gain ground. However, this evolution presents several challenges that must be carefully considered.
One of the major challenges posed by automation in judicial decision-making is the professional training of judicial employees in the use of advanced technologies. Specialized training becomes essential to ensure that officials, from judges to administrative staff, can adequately handle these tools without compromising the quality and fairness of the judicial process.
Training in artificial intelligence for judicial employees should not be limited solely to technical aspects. The transparency of algorithms and, above all, algorithmic biases are two of the most crucial topics to address when analyzing this aspect. It is extremely important that legal professionals not only understand how algorithms work but also their ethical, legal, and social implications. A judge or lawyer, when interacting with automated systems, must be able to identify when an algorithm is functioning correctly and when it may be biased or misinterpreting the data.
Professional training should include the analysis of potential risks and limitations of artificial intelligence, such as the reproduction of biases or the lack of contextualization in certain decisions. Furthermore, it is important for judicial employees to understand the role that AI plays within the judicial system, recognizing that while these systems may assist in decision-making, the final word must still come from a human.
To ensure effective implementation, clear and understandable user guides must be developed. These guides should provide detailed instructions on how to use automated tools, highlighting the most important aspects such as data input, result interpretation, and control mechanisms to verify the accuracy of automated decisions. User guides should not only focus on the technical functioning but also on the ethical and responsible use of the technology. This includes personal data protection and ensuring transparency in processes involving AI so that judicial decisions made with the support of these tools are understandable and verifiable by the parties involved.
A clear example is the approach adopted by the Court of Justice of the Province of San Juan, which recently approved the “Protocolo de Uso Aceptable de la IA Generativa (IAGen)” (Protocol for the Acceptable Use of Generative AI) governing the use of generative artificial intelligence in the judicial field, mandatory for all judicial agents in the province. This protocol emphasizes the importance of confidentiality and security, prioritizing the protection of the privacy, confidentiality, and integrity of information, and establishes that officials must anonymize data before entering it into AI systems, as sketched below.
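By way of illustration only, since the protocol’s actual anonymization rules are not reproduced here, a minimal redaction step of the kind such a duty contemplates might look as follows in Python; the patterns and the party name are hypothetical examples.

    # Sketch of a pre-submission anonymization step. The patterns below are
    # illustrative assumptions, not the San Juan protocol's actual rules.
    import re

    def anonymize(text: str) -> str:
        # Redact Argentine national ID (DNI) numbers, e.g. "12.345.678"
        text = re.sub(r"\b\d{1,2}\.\d{3}\.\d{3}\b", "[DNI]", text)
        # Redact party names collected beforehand by the operator
        for name in ("Nancy Bibiana Aguilera",):
            text = text.replace(name, "[PARTY]")
        return text

    print(anonymize("Claim filed by Nancy Bibiana Aguilera, DNI 12.345.678."))
    # -> "Claim filed by [PARTY], DNI [DNI]."

Simple pattern matching of this kind is fallible, which is one reason the protocol places the anonymization duty on the officials themselves rather than on any automated filter.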
Moreover, continuous training is also a key component in this process. Artificial intelligence and associated technologies are constantly evolving, so judicial employees must have regular update programs to stay informed about technological advances and their legal implications. It is essential to develop a comprehensive approach that combines technical, legal, and ethical aspects, providing not only tools for the practical use of these technologies but also for critical reflection on their implications within the judicial realm.
Finally, the implementation of automated systems should not be understood as a replacement for human work, but as a complementary tool. For this reason, training should promote a mindset of collaboration between humans and machines, ensuring that judicial employees have the necessary skills to interpret, supervise, and, when necessary, correct the results generated by artificial intelligence. Will the future of justice be a coexistence between technology and humans, or could automation jeopardize the fundamental principles of law?
V.- Conclusions
Dehumanization in justice, as a consequence of the automation of decisions through artificial intelligence, is one of the greatest risks we must face in our professional practice due to the integration of these technologies into the judicial system. While algorithms can process data quickly and provide predictions based on patterns, they lack the ability to interpret the complexity of human situations, where emotions, personal circumstances, and social contexts play a crucial role.
The lack of understanding of the nuances of each case could lead to decisions that do not adequately reflect the social reality and the specific needs of the individuals involved. In this sense, the use of artificial intelligence could undermine fundamental principles of law, such as equity and social justice, by failing to account for the subjective elements that can be decisive in a trial. Justice cannot be reduced to a simple application of rules; it must be capable of adapting to the complexity of human life, something that AI, no matter how advanced, cannot fully achieve.
Moreover, this risk of dehumanization also endangers trust in the judicial system. The lack of human interaction in judicial decisions may generate distrust and create the impression that decisions are being made by a “machine” that does not understand the realities of the people involved, which undermines the legitimacy of the judicial system.
In summary, dehumanization in justice not only compromises the quality of judicial decisions but also the very perception of justice itself. It is essential that, when integrating technologies like artificial intelligence into the judicial field, it is guaranteed that the role of the judge remains central—not only to apply the law but to ensure that every decision is made with a full understanding of the human circumstances underlying each case.
(*) Milagros Denise Tallarico. Lawyer, graduated from the National University of Lomas de Zamora (UNLZ). Legal Analytics, Torcuato Di Tella University (UTDT). Semi-Senior Associate at Alfaro Abogados.