Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Madhumitha C, Sandhiya Krishnan S
DOI Link: https://doi.org/10.22214/ijraset.2024.65041
In the current technological landscape, artificial intelligence is rapidly evolving and reshaping the way we operate. Nevertheless, AI also presents challenges that necessitate human involvement and unique problem-solving approaches. This study explores the potential future impact of replacing human judges with AI judges. It begins with a global survey of AI as an automated decision-making authority, considers the acceptance models that could help such advanced technology take root in India, and examines AI's legal personality. The submission of evidence is a critical aspect of any adjudication process, so it is essential to authenticate such evidence through a reliable mechanism that ensures its secure collection. Chain of custody and blockchain technology are widely used for this purpose, although countries such as China apply these technologies differently; India currently lacks comparable efficiency. Predictive analysis has been employed to gather data and develop algorithms, but an AI that relies solely on these algorithms lacks the cognitive ability to evaluate cases that turn on emotion. The paper therefore also examines the inability of AI judges to detect emotion in criminal and family cases. The appeals process is evaluated as well, since individuals who disagree with a judge's decision typically lodge an appeal. Accountability is a significant concern, prompting discussion of whether the programmer or the AI should bear responsibility. Finally, the paper explores the appealability of decisions made by AI judges and offers a viable solution to the research question.
I. INTRODUCTION
The way the legal system has developed shows how much has changed from the past to the present, and how much may change in the future. In the past, judges handled difficult issues entirely by themselves using manual resources such as physical law books and legal journals. This approach resulted in laborious research and possible prejudices rooted in individual judgement. Technological developments, especially in artificial intelligence, have transformed this environment. AI systems now help judges by gathering and evaluating large volumes of legal material quickly, making statutes and case law more easily accessible, and streamlining case management so that judges can make decisions more quickly and with better information. There is considerable room for further AI integration in the legal system. AI-driven automated decision-making might handle routine cases, clearing backlogs and speeding up case resolution while minimising human prejudice through algorithmic consistency. Improved predictive analytics may reveal patterns in case outcomes, aiding the strategic decision-making of both judges and attorneys. Furthermore, by assisting litigants with complicated legal procedures, technologies such as chatbots and virtual assistants may enhance access to justice. Ultimately, as AI continues to evolve, its integration into the judiciary promises a more efficient, speedy, and impartial legal system, though careful implementation and ethical oversight will be essential to uphold the integrity of justice.
It is important to understand the adoption of AI as an automated or assistive judge from a global perspective. The Ministry of Justice in Estonia has announced the use of AI judges to handle cases with claims of less than 7,000 euros. These trials take place exclusively online, with the parties to the suit submitting their documents through online platforms, and the AI judge deciding on the basis of the submitted documents. This method has been applied specifically to contract litigation, focusing on termination arrangements and unpaid claims, and has been in use since 2006. In China, smart courts and internet courts also use AI judges to adjudicate disputes: participants can register their cases online and resolve their matters through digital court hearings. In the US, AI systems such as COMPAS assist judges in making bail decisions and sentencing recommendations by predicting the likelihood of recidivism; these systems are not fully autonomous AI judges, but they play a supportive role in the decision-making process. In the UK, systems such as Luminance and RAVN help law firms and courts by analysing legal documents, conducting due diligence, and enhancing decision-making efficiency. Brazil has introduced AI systems in its courts, such as Victor, an AI tool developed by the Supreme Court to sift through thousands of legal cases and identify those with precedent-setting value.
The Dubai International Financial Centre (DIFC) Courts have integrated AI into their dispute resolution process, particularly in commercial disputes, but the final decisions still lie with human judges. In Ukraine, AI is used to analyse court decisions and identify patterns in sentencing, aiming for more consistency across the judicial system. However, AI has not yet reached a level of fully automating judicial decisions.
AI's role as an automated decision maker in India is uncertain, but the likelihood of its adoption is higher compared to other countries that are also rapidly attempting to embrace it. Social acceptance is crucial, especially considering that the judiciary is a key institution in India. Several models are used to understand the acceptance patterns of emerging technologies. The acceptance of AI as a decision-making authority can be facilitated by a combination of various models rather than relying on a single model. These models include the Technology Acceptance Model (TAM), Unified Theory of Acceptance and Use of Technology (UTAUT), Theory of Planned Behaviour (TPB), Consumer Acceptance Technology (CAT), and Value-Attitude Model (VAM).
TAM focuses on perceived usefulness and ease of use, which can be evaluated through user feedback and training sessions to enhance familiarity with the technology. UTAUT incorporates social influence and facilitating conditions, involving influencers or stakeholders to advocate for AI technology and highlighting the success of early adopters to improve social acceptance. TPB emphasizes attitudes, perceived norms, and perceived control over the application of AI, which can be addressed through workshops, discussions on ethical implications, and collecting consistent feedback to foster a sense of control. CAT emphasizes individual differences, suggesting that user segmentation and personalized support can contribute to user acceptance of AI advancements. VAM emphasizes identifying core values, developing positive attitudes, and encouraging behavior to promote technology acceptance. For example, the decline of checks due to fraudulent activity can be addressed by identifying core values such as lack of transparency and trust, promoting positive attitudes towards detecting fraud with enhanced technology, and encouraging behavior through easy access to the justice system.
The legal concept of AI personhood, as a derivative legal entity, suggests that AI may have limited legal rights based on its creators' recognition, utilizing legal fiction theory. This approach establishes accountability by attributing responsibility for AI actions to human operators, while also enabling legal interactions. Legal fiction permits AI to be considered a legal entity for practical purposes, addressing ethical and moral concerns without implying consciousness. This framework has the potential to influence legislative changes and aid in defining clear standards for liability, ultimately guiding the integration of AI into society while ensuring ethical and responsible technological use.
II. LITERATURE REVIEW
In this paper, the author discusses how AI will change the legal system and presents an empirical study surveying judges on this question, focusing especially on automated adjudication and legal principles. The survey found that judges are reluctant about modifications to their profession and insisted that AI requires training in legal literacy; it also noted that AI lacks legal-writing skills and the social-institutional role of a court. For fair outcomes, AI should be used in the earlier stages of adjudication. Judges also emphasised that AI involvement is a technological legal development through which easier access to justice can be achieved. In this study, judges strongly disagreed with AI analysing evidence, evaluating the arguments made by advocates, and rendering decisions in the cases before them. The author concludes that AI would best assist judges rather than adjudicate. (Andreia Martinho, 2024)[1]
The paper explores ethical concerns such as bias, equality, transparency, accuracy, and the shift in accountability. It specifically focuses on the shortcomings of predictive analysis in criminal risk assessments. The researchers utilized a doctrinal research approach, drawing upon various relevant articles. The primary emphasis was on the United States' perspective. (Harry Surden,2020)[2]
The author of the paper delved into the potential of digital court tools and systems to improve efficiency, participation, and accessibility, while also highlighting their capacity to exacerbate injustices and undermine fundamental legal principles. The paper discusses specific technologies that have been deployed in England and Wales. The author's conclusion emphasizes that technology alone cannot effectively address the complex issue of access to justice when policy frameworks fail to integrate measures to counteract the erosion of legal aid. (Jane Donoghue,2017)[3]
The topic of this paper is the integration of AI in the legal system and its alignment with the right to a fair trial. The author provides an in-depth analysis of AI's application in different legal procedures, including pre-trial processes, hearings, post-trial activities, and documentation. Additionally, the author thoroughly explores the various aspects of the right to a fair trial, such as access to court, timely proceedings, independence, impartiality, and equality of parties. The potential achievement of these aspects through the utilization of advanced AI technologies is also discussed. (Kalliopi Terzidou, 2022)[4]
The focus of this article is on the application of the Evidence Act to electronic records. It discusses various legal cases concerning the acceptance of electronic records as evidence. The author concludes that the term ‘original’ under Section 65B(1) does not limit its scope to primary or secondary data, which weakens the importance of evidence admissibility under this law. (G.V. Mahesh Nath, 2017)[5]
This paper raises the question of whether an advanced Digital Forensics Investigation Framework (DFIF) is necessary for effectively prosecuting digital crimes in a court of law. It emphasizes the importance of the framework in maintaining the integrity of evidence throughout the investigation process. The paper provides a descriptive analysis of recent trends in cybercrime attacks and delves into the associated field of Cyber Forensics. Furthermore, it outlines the processes and outputs of different phases in the DFIF, drawing from previously proposed frameworks and presenting a comparative analysis of all frameworks. (Kumaran Shanu Singh, Annie Irfan, Neelam Dayal,2019)[6]
III. RESEARCH PROBLEM
The particular problems raised by AI as an automated decision-making authority concern the evaluation of emotion-based legal issues, the evidentiary value of submitted evidence, accountability, and the appeal process.
IV. RESEARCH QUESTION
V. RESEARCH HYPOTHESIS
If there is a problem with the reliability of submitted evidence, then an advanced evidence-authentication system, such as an enhanced chain of custody, shall be utilised. If AI finds it difficult to adjudicate emotion-driven cases in criminal and family law, then it should be applied instead to white-collar crimes, financial offences, contractual issues, environmental conflicts, and tax matters, where emotions do not influence the decision-making process. If humanised adjudication is retained at the higher levels of courts or tribunals and the shift of accountability is addressed, then the problem of appealability from an AI automated judge is cured.
VI. RESEARCH METHODOLOGY
In this research study we used a doctrinal approach and comparative analysis to substantiate and validate our research problem. We collected data from various articles and used it to propose a feasible solution for the issues raised.
VII. RESEARCH METHOD
In this article the authors used both primary and secondary data to substantiate the research problem. The primary data was collected from survey results, interview transcripts, observation notes, etc. The secondary data was collected from various literature reviews, which provide a medium through which the problem is substantiated. Various models are referred to in addressing the problem of acceptance of such technology, and personhood theory is explained to mitigate the problem of assigning legal personality to AI automated technology. The predictive analysis model is used to identify patterns and relationships in criminal cases.
VIII. SCOPE AND LIMITATION
This study explores how AI functions as an automated decision-making authority within the Indian judicial system, with a focus on its capacity to verify evidence, the potential risks of blockchain technology and chain of custody, the challenges AI encounters when dealing with emotion-based cases in comparison to human judges, and the validity, dependability, and answerability of AI systems utilised in lower courts. The research is confined to India and covers developments from the previous five years, so it may omit historical perspectives. Moreover, it may face limitations in sample size and diversity due to limited data availability, and the depth of analysis may be restricted by access to proprietary AI technologies. Lastly, the findings may be affected by self-reported data from legal professionals, introducing potential biases that could limit applicability to other jurisdictions.
A. ISSUE 1
Whether AI faces any potential challenges while handling cases with regard to emotion, especially considering that even human judges face these difficulties in adjudicating cases in a virtual environment?
Emotions are complex psychological and physiological conditions, expressed as feelings and expressions. They are the outcome of stimuli arising both inside and outside a person's surroundings. Human behaviour, social interaction, and decisions are greatly influenced by emotion, and its gravity differs from person to person; different individuals express their emotions in different ways. Even humans struggle to understand the emotions of others, so it is difficult for AI to determine the emotional state of a human being. When AI plays the role of adjudicator, its decisions are based purely on the algorithms built from the samples and data fed to it. These data are human-made judgements and guidelines, and sometimes statutory law. Judges render judgement with regard to the state of mind of the people involved in those cases, and AI resolves cases only on the basis of such data. When the issue before the AI was not present in its data, it is unable to solve the case. A predictive analytical model is used here. Utilising AI for predictive analytics entails employing machine learning algorithms and models that learn from data over time. These models are trained on past data to discover correlations and patterns. Once trained, the models make predictions about future outcomes from new, unseen data. This process rests on well-informed estimations derived from reliable data-driven insights rather than guesswork. Incorporating AI into predictive analytics transforms raw data into actionable intelligence. For instance, by analysing past customer behaviour, a predictive model can forecast future purchasing trends. Similarly, in healthcare, AI-powered models can predict patient outcomes, assisting healthcare providers in devising proactive treatment plans[7].
Data, algorithms, and prediction are the main components of this model. For effective predictive analysis, the data uploaded must be proper, accurate, comprehensive, and relevant: data is the solid foundation of such a model. If imprecise data is fed in, the model collapses and becomes unreliable for further use. From the data, the model forms its algorithms, which are the brain of the AI; these algorithms change as new patterns are detected in the data, and predictions are then made on that basis.
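The data–algorithm–prediction loop described above can be illustrated with a minimal, purely hypothetical sketch. The case features, labels, and the nearest-neighbour rule below are invented for illustration only; real judicial prediction systems use far richer data and models, and as the paper argues, no such feature set can capture emotion or state of mind.

```python
import math
from collections import Counter

# Hypothetical past-case records: (feature vector, outcome label).
# Features are invented for illustration: [claim_amount_lakhs,
# prior_disputes, documents_complete (0 or 1)].
PAST_CASES = [
    ([5.0, 0, 1], "decided_for_plaintiff"),
    ([1.2, 2, 0], "decided_for_defendant"),
    ([4.5, 1, 1], "decided_for_plaintiff"),
    ([0.8, 3, 0], "decided_for_defendant"),
    ([6.1, 0, 1], "decided_for_plaintiff"),
]

def _distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_outcome(features, k=3):
    """Predict a new case's outcome by majority vote of its k most
    similar past cases: the 'pattern detection' step in miniature."""
    nearest = sorted(PAST_CASES, key=lambda rec: _distance(rec[0], features))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(predict_outcome([5.5, 0, 1]))  # prints: decided_for_plaintiff
```

Note how completely the prediction depends on the training records: feed the model imprecise or biased past judgements and every future prediction inherits that flaw, which is exactly the data-quality risk described above.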
1) Criminal Law
In the past and present, human judges have been the primary authorities responsible for making decisions and passing verdicts in the criminal justice system[8]. Despite various technological advancements, the use of AI in decision-making for criminal cases has not been widespread. While AI has been used as an assistant in judicial processes in many countries, only a few have employed it as an automated decision-maker, and even then, primarily for civil cases. The complexity and sensitivity of criminal cases, the need to assess witness and defendant credibility, and the gravity of the crimes are all reasons why in-person proceedings have been traditionally favoured.
Traditional courts focus mainly on actus reus, mens rea, and causation to prove the occurrence of a crime. These are the key elements of criminal law for determining the criminal liability of the accused[9]. The accused is liable only if these elements are proved; if they are not made out, the accused is acquitted. Proving these elements requires proving an emotional connection to the crime.
Here emotion includes one's motive for committing the crime: anger, revenge, fear, or jealousy. In establishing criminal liability, AI encounters difficulty in assessing the mental state of the accused and the emotional background behind the offence. Mental states differ from individual to individual and from case to case. The term "state of mind" refers to a person's cognitive and emotional condition, as well as the rationale or motivations behind their actions, particularly in the context of committing a crime[10]. These factors typically vary with the individual and the circumstances. It is this state of mind, or intention, that draws the line between criminal behaviour and a mere mistake by the accused, and it plays a major role in punishing the accused.[11] It is implausible that AI algorithms designed to resolve such disputes will be very effective at assessing such intention, since these factors change from case to case.
The field of criminal justice often involves intricate social and ethical issues. AI may struggle to fully comprehend the emotional and psychological complexities that human judges take into account, such as remorse, potential for rehabilitation, or individual circumstances which vary from case to case. Human judges also consider the circumstances surrounding the crime and the mental state of the accused. The severity and nature of the offence may lead to varying sentencing outcomes: if the judge deems the offence less serious, they may impose a minimal sentence. Additionally, judges may consider the rehabilitation of the accused, to prevent future offences and give the accused an opportunity to become a better person. AI lacks the capacity for empathy, which is often crucial in decisions related to sentencing and rehabilitation-oriented justice.
During the COVID period in India, the virtual court system played a dominant role and proved more effective than in previous decades. Court proceedings took place in a virtual setting, with judges, advocates, witnesses, attorneys, and accused individuals participating via video conferencing in a synchronised manner. Evidence was submitted electronically as e-evidence, and judges ruled on that basis. This system was more suitable for civil cases than criminal cases. Human judges often encounter challenges when resolving criminal disputes, particularly concerning the authenticity and credibility of the evidence presented by the parties. The virtual setting lacks the personal interaction used to assess the credibility of a witness or accused, and so deprives judges of the ability to assess a person's behaviour and emotion.
Some advocates have even argued that the behaviour of an accused or witness during cross-examination in a virtual setting may not be as revealing as in open court: the witness can be tutored by an advocate or another person, making the evidentiary value questionable. Only in physical mode can non-verbal cues such as facial expressions and body posture be used to detect the truthfulness or falsity of a person's statements; the same cannot be judged in virtual mode. [12]
2) Civil cases (Family cases)
Normally, proof in civil cases rests purely on documents and their credibility: adjudication proceeds by relying on the evidence submitted and the statutory provisions governing such suits, so the outcome is not normally affected by emotion. Family law, by contrast, is personal law: it differs from one religion to another, and its issues are complex and involve unique situations. In family cases, emotion plays a crucial role in understanding, handling, and resolving disputes. These cases are highly personal and require effective interpretation by the adjudicator to ensure an impartial outcome. Family disputes are delicate and extremely fact-specific.[13] They involve matters such as child custody, maintenance, divorce proceedings, and the allocation of property shares so that a weaker spouse can live a basic life. Divorce cases carry emotions such as grief, betrayal, and anger. Child-custody cases involve distress, possessiveness, and anxiety over losing the bond with the child. In domestic-violence cases, trauma and emotional vulnerability are the backbone of the abuse, and the victim's expressions are crucial to proving the culpability of the abuser. It is psychologically established that humans have the capacity to understand these things in a way that is beyond what AI can do, and yet even humans sometimes fail. How, then, can their verdicts serve as samples for formulating algorithms? Every case varies from every other, which makes it challenging for AI to rely on the data and form algorithms. The available data may vary in accuracy, and its quality is questionable. AI lacks the ability to interpret the data fed to it and to think critically.[14] When data is fed to AI, it may lead to data breaches, or there may be inherent bias resulting in a prejudiced verdict.[15]
From this it is clear that AI cannot be used as a judge in criminal cases generally, though it can be used in those criminal matters where emotion does not play a major role: white-collar crimes, financial crimes, contractual matters, environmental disputes, tax cases, and strict-liability cases. In these cases, emotion is not a main principle; guilt is established through a set procedure of proof rather than on an emotional basis.
White-collar crimes are non-violent crimes committed by individuals, businesses, corporations, or government officials for financial gain. Offences such as money laundering, fraud, bribery, and corruption fall within their ambit. These crimes involve deception, concealment, and breach of trust without the application of coercion, and proving them rests purely on material evidence: establishing liability does not require proving emotion. Such cases can therefore be handled by AI judges. In tax cases, liability rests purely on non-payment of assessed tax, so emotion again plays no major role; proof relies on documents supporting the alleged offence, and these cases can be adjudicated through an AI mechanism. Contract cases are initiated for non-compliance or breach of contract: proving the claim requires only the agreement between the parties and proof of how its terms and conditions were breached, with no emotional basis. Likewise, in environmental cases, proving the offence does not involve emotion; it requires only proof that the person caused damage to the environment.
B. ISSUE 2
Whether an AI judge can authenticate the evidence submitted before it and how blockchain technology as well as chain of custody pose a major risk?
Evidence is presented as proof by the parties involved and is accepted by the court to affirm or refute a fact during trial proceedings. If the court accepts the evidence as admissible, it can be considered valid. Electronic evidence refers to data generated from computers[16], including information stored and transmitted from such devices. In India, there are regulations regarding the admissibility of electronic evidence. According to Section 65B[17] of the Indian Evidence Act, electronic evidence is permissible only if accompanied by a certificate from an authorized officer. Furthermore, Section 79A[18] of the Information Technology Act, 2000 outlines the Central Government's authority to designate agencies as examiners of electronic evidence, who will offer expert analysis on the electronic records presented before the court or other officials. This brings up the question of how an AI adjudicator will verify and accept the electronic evidence submitted to support a claim. When it comes to adjudication by the AI, the process for submitting electronic evidence and determining who issues the certificates for the records it generates remains unclear. Typically, this type of electronic evidence can be susceptible to alteration, fabrication, and tampering, making it challenging to verify.[19]
1) Blockchain Technology
In China, a blockchain model is employed to validate the evidence that AI courts accept. This peer-to-peer network was established following the 2012 amendments to the PRC Civil Procedural Law. The amendment facilitated the appointment of technical experts to assist trial judges in evaluating admitted evidence. Although trial judges encountered challenges in verifying the authenticity of electronic evidence due to delays and expensive verification methods, they found a cost-effective solution in combining blockchain technology with the judiciary.[20]
Specific regulations govern the use of blockchain technology, allowing it to preserve digital files for civil cases.[21] In a notable 2018 case, Huatai Yimei vs. Daotong Technology, the Chinese court validated the online evidence presented by one party. In the 2019 case Huatai Yimei Ltd. v. Yangguang Feihua Ltd., the court used the same technology to authenticate the defendant's evidence.[22] Blockchain technology is governed by regulations under both Chinese law and EU legislation pertaining to AI and blockchain, which establish guidelines for the use and trustworthiness of these technologies when evidence is submitted.[23]
In India, the role of the authorizing officer is limited to humans. When authentication is performed by a human, the risk of tampering or fabricating evidence exists, even by the authorizing officer. In contrast, the Chinese blockchain model allows parties involved in a case to directly upload evidence to a server, creating a chain of hash functions to safeguard the data from hacking. Users are required to apply their keys to access the information, significantly reducing the potential for fabrication or tampering. The Chinese model consists of numerous interconnected computer units, providing added resilience, with hacking being virtually impossible thus far.
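The tamper-resistance described above comes from chaining hash functions: each uploaded record stores a digest of its predecessor, so altering any earlier entry invalidates every later one. A minimal sketch follows; the `EvidenceChain` class, field names, and sample records are hypothetical illustrations, not any actual court system's design.

```python
import hashlib
import json

def _hash(record: dict) -> str:
    """SHA-256 digest of a canonically serialised record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class EvidenceChain:
    """Append-only ledger: each entry stores the hash of its predecessor,
    so altering any earlier entry invalidates every later hash."""

    def __init__(self):
        self.entries = []

    def add(self, party: str, description: str, content_hash: str):
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        record = {
            "party": party,
            "description": description,
            "content_hash": content_hash,  # digest of the uploaded file itself
            "prev_hash": prev,
        }
        record["entry_hash"] = _hash(record)
        self.entries.append(record)

    def verify(self) -> bool:
        """Recompute every hash; any edit anywhere breaks the chain."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            if e["prev_hash"] != prev or _hash(body) != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True

chain = EvidenceChain()
chain.add("plaintiff", "signed contract scan",
          hashlib.sha256(b"contract.pdf bytes").hexdigest())
chain.add("defendant", "email thread export",
          hashlib.sha256(b"emails.mbox bytes").hexdigest())
print(chain.verify())  # prints: True  (chain intact)
chain.entries[0]["description"] = "altered description"
print(chain.verify())  # prints: False (tampering detected)
```

The point of the sketch is the contrast with a single human certifying officer: here no one participant can quietly alter a record, because verification is a mechanical recomputation anyone can run.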
2) Chain of Custody
The challenge that arises is that when AI is given authority for automated decision-making, the certification provided by human certifying authorities poses a risk to the authentication process. Furthermore, digital data manipulated by offenders gives rise to various cyberattacks. The chain of custody established under Section 65B(4) of the Evidence Act is connected to the cyberattacks addressed in the IT Act of 2000. According to Section 65B(4), certification of electronic records is mandatory for their admissibility. The certification must be provided by an individual holding an official position responsible for operating the relevant device, or by someone overseeing the relevant devices. This section establishes the workability of the chain of custody[24], providing written evidence of the delivery of electronic evidence, including who seized the electronic device and who transferred the evidence from the place of occurrence to the place of preservation, to the forensic lab, and then to the court[25].
Digital evidence is handled by a number of individuals, starting from the data feeder, user, and the officer collecting and handling digital evidence. When electronic data is submitted as evidence in court, it is susceptible to tampering, leading to cybercrimes[26]. Proper documentation of such evidence is essential. In traditional paperwork, it is easier to gather all papers, but with electronic data, various procedures must be followed to obtain specific data, and the problem of the black box model also poses a major risk in evidence collection. The IT Act of 2000 addresses forensics related to cyberattacks, cyber investigations, and the evidentiary value of the chain of custody. Digital evidence from computer systems, primary or secondary memory, documents, emails, locally stored files, and social media accounts are collected to address cyberattacks such as malware, phishing, cyberbullying, harassment, financial fraud, password attacks, and potentially unwanted programs. For all these cyberattacks, evidence is collected and authenticated according to Section 65B.
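The custody trail described above is, at its core, a log of hand-offs that must be continuous: each transfer must begin with the custodian who held the item at the end of the previous transfer. A minimal sketch of such a continuity check follows; the `Transfer` record, its fields, and the sample log are hypothetical illustrations, not a prescribed Section 65B(4) format.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Transfer:
    """One documented hand-off of a piece of digital evidence."""
    item_id: str
    from_holder: str
    to_holder: str
    timestamp: datetime
    purpose: str

def custody_unbroken(transfers) -> bool:
    """Check continuity: in time order, each transfer must start with the
    custodian who received the item in the previous transfer."""
    ordered = sorted(transfers, key=lambda t: t.timestamp)
    for prev, nxt in zip(ordered, ordered[1:]):
        if nxt.from_holder != prev.to_holder:
            return False  # gap in custody: unexplained change of hands
    return True

log = [
    Transfer("HDD-01", "scene officer", "forensic lab",
             datetime(2024, 1, 5, 10, 0), "imaging and analysis"),
    Transfer("HDD-01", "forensic lab", "court registry",
             datetime(2024, 1, 9, 14, 30), "production as trial exhibit"),
]
print(custody_unbroken(log))  # prints: True (no gap between custodians)
```

A gap detected by such a check is exactly the kind of break in the chain of custody that undermines the admissibility of the evidence, whether the adjudicator is human or automated.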
C. Issue 3
Whether AI-based automated decision-making systems used at lower levels of the judiciary in India meet the requirements of validity and reliability as well as how accountability is maintained?
As of now, AI in adjudication in India is still a developing concept. While there is no specific legal framework solely dedicated to AI-driven errors, general principles of law regarding appeals, revisions, and judicial review would likely apply. The future could see more comprehensive regulations to address AI's role in legal and administrative decisions. AI-driven adjudication, while promising increased efficiency and consistency in decision-making, is likely to face significant challenges.
One of the most pressing concerns is bias in decision-making. AI systems learn from historical data, and if that data contains biases—whether based on gender, caste, race, or socioeconomic status—the AI may perpetuate or even amplify these biases. Moreover, even neutral data can lead to biased outcomes depending on how the algorithms are designed and applied[27].
Another complication arises from the inherent lack of transparency in AI decision-making processes[28]. Often referred to as a "black box," many AI models operate in ways that make it difficult to fully understand or explain their reasoning, which could conceal biases or errors that may not be immediately obvious.

A critical problem AI faces is maintaining due process and fairness. While human judges and officers can bring intuition, empathy, and discretion into their decision-making, AI lacks these human qualities. It may be unable to fully account for emotional, social, or contextual factors that are often crucial in rendering fair decisions. Procedural fairness could also be called into question if individuals feel they are being judged by a machine that does not allow for the flexibility and understanding that a human judge might bring. The absence of natural justice—such as the right to be heard in one’s own voice—could make people feel disenfranchised by AI-driven judgments.

Errors and inaccuracies remain a fundamental challenge for AI adjudication. Even the most advanced systems are prone to mistakes, such as misinterpreting facts or failing to apply legal principles correctly. These errors could lead to wrongful conclusions or missed opportunities to identify violations. Moreover, legal systems often require nuanced interpretation of statutes, precedents, and complex legal language. AI, at its current stage, may struggle with such intricate legal reasoning, which could lead to flawed judgments or misapplications of the law, particularly in cases that lack clear precedent.

Legal and ethical challenges also loom large. Delegating judicial or administrative decision-making to AI raises deep ethical concerns. Can a machine truly understand the concepts of fairness, justice, or morality in the way humans do? If not, can we trust it to make decisions that profoundly impact people’s lives?
Furthermore, existing laws in India and other countries are not equipped to govern AI’s use in the legal system. Without AI-specific legislation, it remains unclear how courts and legal authorities will regulate its use or manage the risks involved. Privacy and data security are other critical issues. AI systems rely on vast amounts of data, including personal and sensitive information, which makes ensuring their security and privacy a significant challenge.
IX. RECOMMENDATION
To ensure the authenticity and acceptability of the electronic evidence presented by the parties to the AI adjudicator, it is necessary to amend the laws and regulations that oversee blockchain technologies in India, akin to the approach taken by China. It would be more effective to shift from the current method of authenticating electronic evidence to a model similar to that of China.
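The core idea behind blockchain-backed evidence authentication can be sketched in a few lines: each custody event carries a hash of the previous event, so any later alteration of the record breaks the chain and is immediately detectable on verification. The sketch below is a simplified illustration, not a description of any deployed system; the field names and entries are hypothetical, and a production system would anchor such hashes on an actual distributed ledger.

```python
import hashlib
import json

def chain_entry(prev_hash, evidence_id, handler, action):
    """Append one custody event, linking it to the previous entry's hash."""
    record = {
        "prev_hash": prev_hash,
        "evidence_id": evidence_id,
        "handler": handler,
        "action": action,
    }
    # Canonical serialization (sorted keys) so the hash is reproducible.
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    record["hash"] = digest
    return record

def verify_chain(chain):
    """Recompute every hash; any tampering breaks a link and fails verification."""
    prev = "0" * 64  # genesis sentinel
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

genesis = chain_entry("0" * 64, "EXH-01", "Officer X", "collected")
transfer = chain_entry(genesis["hash"], "EXH-01", "Forensic Lab", "received")
chain = [genesis, transfer]
print(verify_chain(chain))   # True: intact chain of custody

chain[0]["handler"] = "Someone Else"   # tamper with a past record
print(verify_chain(chain))   # False: alteration is detected
```

This tamper-evidence property is what makes such mechanisms attractive for authenticating electronic evidence: a court can mechanically verify that the custody record has not been altered since collection, rather than relying solely on testimony.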
From this, it is clear that AI cannot serve as a judge in all criminal cases. It may, however, be used in categories of cases where emotion does not play a major role, such as white-collar crimes, financial crimes, contractual matters, environmental disputes, tax cases, and strict liability cases. In such matters, adjudication rests on an established set of procedures for proving guilt rather than on emotional considerations.
In the future in India, if an AI system serving as an adjudicating officer makes a legal or administrative error, the key consideration will be how legal recourse, such as appeals or revisions, will adapt to address AI-made mistakes. If an AI-based adjudication decision is deemed flawed, an appeal process similar to that available for decisions by human officers will probably apply. The involved parties may contest the decision before a higher authority or court, where human judges will scrutinize the AI's decision. In the event of an error related to procedural or factual matters, a petition for revision might be an option, allowing a higher court or tribunal to inspect and rectify the AI's mistake. Both processes might entail evaluating whether the AI system adhered to legal standards and the principles of natural justice, and whether it was correctly programmed or trained.

For more intricate cases involving AI errors, specialized AI review boards or expert panels could be established to determine whether the error was due to a systemic issue, faulty programming, or another cause. In the future, India could implement legislation tailored to errors in AI-driven decision-making systems, potentially incorporating specialized dispute resolution mechanisms, checks and balances, or predefined guidelines on handling AI errors, especially in sectors such as taxation, consumer protection, or corporate law where AI could see widespread use.
The authors of this paper believe that incorporating AI into the adjudication process could be highly beneficial. Currently, millions of cases are pending in the judicial system, and prolonged delays in justice do not serve the needs of the parties involved. By implementing such mechanisms, the workload on human judges could be alleviated, allowing them to manage other cases more effectively and in a timely manner. The inclusion of AI as judges in the legal field brings forth both challenges and opportunities that necessitate a collaborative approach across various disciplines. This paper has examined the potential future integration of AI adjudication systems, and the concept of personhood theory is addressed to attribute personality to the AI mechanism for assigning liability.
[1] https://doi.org/10.1093/ijlit/eaw005
[2] https://jajharkhand.in/wp/wp-content/uploads/2019/10/02_sop_english.pdf
[3] https://ieeexplore.ieee.org/abstract/document/9032693/
[4] https://shelf.io/blog/ai-for-predictive-analytics/
[5] https://doi.org/10.30574/wjarr.2024.23.1.2108
[6] https://browse.arxiv.org/pdf/2408.10701v1
[7] https://shelf.io/blog/ai-for-predictive-analytics/
[8] https://www.penal.org/sites/default/files/Concept%20Paper_AI%20and%20Criminal%20Justice_Ligeti.pdf
[9] https://link.springer.com/article/10.1007/s13347-019-00362-x
[10] https://www.mccartylarson.com/the-role-of-intent-in-white-collar-crime-cases-in-texas/
[11] https://www.newindianexpress.com/states/tamil-nadu/2020/Aug/16/virtual-reality-pros-and-cons-of-running-online-courts-2183977.html
[12] https://www.shellyingramlaw.com/artificial-intelligence-in-law/2024/05/21/artificial-intelligence-and-family-law-a-cautionary-note/
[13] https://ailawyer.pro/blog/the-role-of-legal-ai-in-family-law
[14] https://link.springer.com/article/10.1007/s44163-024-00121-8
[15] https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-blockchain#:~:text=The%20EU%20strongly%20supports%20a,ensure%20consumer%20and%20investor%20protection.
[16] https://www.sciencedirect.com/science/article/pii/S0267364920300066?ref=pdf_download&fr=RR-2&rr=8d591f080af69aa5
[17] https://doi.org/10.1080/0731129X.2023.2275967
[18] https://link.springer.com/article/10.1007/s13347-019-00362-x
[19] https://www.penal.org/sites/default/files/Concept%20Paper_AI%20and%20Criminal%20Justice_Ligeti.pdf
[20] https://www.chinalawtranslate.com/en/the-supreme-peoples-courts-provisions-on-several-issues-related-to-trial-of-cases-by-the-internet-courts/
Copyright © 2024 Madhumitha C, Sandhiya Krishnan S. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET65041
Publish Date : 2024-11-06
ISSN : 2321-9653
Publisher Name : IJRASET