
AI Receives Cautious Approval for Legal Opinions from UK Judges

09:50 08.01.2024

England's legal system, known for its deep-rooted traditions such as wearing wigs and robes, has taken a cautious step into the future by allowing judges to use artificial intelligence (AI) to assist in producing rulings. The Courts and Tribunals Judiciary recently announced that AI could be used to help write opinions, but emphasized that it should not be employed for research or legal analysis because of its potential to fabricate information and provide misleading, inaccurate, and biased data. Master of the Rolls Geoffrey Vos, the second-highest ranking judge in England and Wales, said that judges should not shy away from the careful use of AI, but must protect confidentiality and take full personal responsibility for anything they produce.

While scholars and legal experts contemplate a future in which AI might replace lawyers, aid in juror selection, or even decide cases, the approach outlined by the judiciary on December 11th is restrained. However, considering the legal profession's historical reluctance to embrace technological advances, the move is seen as proactive at a time when government, industry, and society as a whole are grappling with a rapidly advancing technology that is often depicted as both a panacea and a menace. Ryan Abbott, a law professor at the University of Surrey and author of "The Reasonable Robot: Artificial Intelligence and the Law," acknowledged the ongoing public debate surrounding the regulation of AI. He noted that AI's intersection with the judiciary raises unique concerns, prompting a cautious approach that keeps humans involved, and suggested that AI may disrupt judicial work more slowly than other sectors, which justifies that extra care.

Abbott and other legal experts commended the judiciary for addressing the latest developments in AI, stating that the guidance would be widely embraced by courts and jurists worldwide who are either eager to utilize AI or apprehensive about its potential implications. England and Wales have thus positioned themselves at the forefront of courts addressing AI, although they are not the first to issue such guidance. Five years ago, the European Commission for the Efficiency of Justice of the Council of Europe released an ethical charter on the use of AI in court systems. While this document may not be up to date with the latest technology, it did establish core principles, including accountability and risk mitigation, which judges should adhere to, according to Giulia Gentile, a lecturer at Essex Law School specializing in AI's application in legal and justice systems.

In contrast, the United States Supreme Court has not yet established guidance on AI, and the fragmented nature of state and county courts prevents a universal approach. However, individual courts and judges at the federal and local levels have implemented their own rules, as explained by Cary Coglianese, a law professor at the University of Pennsylvania. He described the guidance for England and Wales as the first published set of AI-related guidelines in the English language that applies broadly to judges and their staff. Coglianese added that many judges have likely already cautioned their staff internally that existing policies on confidentiality and internet use apply when working with public-facing AI services such as ChatGPT.

The guidance issued by England and Wales demonstrates the courts' acceptance of AI technology, albeit without a complete embrace, according to Gentile. She expressed criticism regarding the section that allows judges to refrain from disclosing their use of AI and questioned the absence of an accountability mechanism. Gentile highlighted the need for clarity on how the document would be enforced, including oversight and potential sanctions.

In an effort to maintain the integrity of the court while progressing technologically, the guidance emphasizes the limitations of AI and the potential problems that may arise if users are unaware of its workings. One notable warning pertains to chatbots, specifically mentioning ChatGPT, a conversational tool that gained significant attention last year due to its ability to quickly generate various forms of content, including legal briefs. The pitfalls of this technology in court were exemplified when two New York lawyers relied on ChatGPT to write a legal brief that included references to fictional cases. The judge, outraged by the submission he referred to as "legal gibberish," subsequently fined the lawyers.

To mitigate potential risks, judges in England and Wales were advised not to disclose any private or confidential information to chatbots, as these systems can retain the data provided to them. The guidance explicitly states that any information entered into a public AI chatbot should be considered published to the entire world. Judges were also reminded that AI systems are predominantly trained on legal material sourced from the internet, which is often based on U.S. law. Still, according to the courts, AI can serve as a secondary tool for jurists with heavy caseloads, particularly when writing background material or summarizing information they are already familiar with.



