
The Risks of Employing AI in Canadian Courts

This blog post highlights the risks created by the use of Artificial Intelligence (“AI”) in the Canadian court system and provides strategies to mitigate those risks. While this post focuses on the use of AI by Judges in Canadian Courts, its information is widely applicable to lawyers and law firms considering the use of AI in their practices.

In September, the Canadian Judicial Council released the first “Guidelines for the Use of Artificial Intelligence in Canadian Courts” (the “Guidelines”). The publication was prepared by Martin Felsky, Ph.D., J.D. and Professor Karen Eltis for the Canadian Judicial Council. The purpose of this blog is to highlight the key points from the Guidelines.

The overarching message of the Guidelines is that Judges should use support systems to help with their responsibilities, but those support systems cannot overstep into the process of judicial decision-making. The Guidelines promote the appropriate use of AI as a tool, not as a replacement for judicial decision-making by a Judge.

The Risks of Employing AI in Canadian Courts

Employing AI in Canadian Courts can lead Judges into ethical pitfalls, inadvertently perpetuate bias, and result in accidental plagiarism.

Ethical Breaches

AI is capable of processing vast amounts of information and of providing users with outputs that appear thorough and intelligent in a matter of seconds. In a system facing pressure from the volume of cases, the temptation to engage AI in judicial decision-making is real. However, Judges are bound by strict codes of ethics. For example, the only person who can engage in judicial decision-making is a Judge. Not a legal assistant, not a law clerk, and not AI. If a Judge allows AI to make a judicial decision, they are acting unethically.

Judges know this. They are well acquainted with their ethical obligations. But with AI becoming more entrenched in today’s society, users of AI are often unaware that they are using it. The Guidelines warn that AI is becoming increasingly integrated into everyday tools such as mobile applications and search engines. Judges risk accidentally delegating judicial decision-making to such tools. The Guidelines therefore seek to raise awareness of the integration of AI into day-to-day life so that Judges can avoid accidental ethical pitfalls.

Bias

Generative AI can process, understand, and generate written language. Much of today’s generative AI is built on large language models, which are “trained” on vast datasets. This means that the AI processes and draws on information from a large dataset to create output. The AI will produce output flavoured by the data it was trained on, which creates a risk of bias. For example, an AI system trained on the whole internet may produce results that reinforce and even promote harmful stereotypes.

Consider the potential use of AI in the court system. If an AI system is trained on a broad set of data and then employed in courts, its results may be flavoured by politics and could undermine the judicial independence of the courts. On the other hand, if an AI system is trained on data that is too narrow, its results may be inadequate and inefficient. Thus, any AI used by the courts must be carefully scrutinized to ensure it is both unbiased and useful.

Plagiarism

The training data and output of AI must be scrutinized to avoid copyright infringement and plagiarism. Different AI systems are trained on different sources, ranging anywhere from a single page of data to the entire internet. If a Judge uses AI without understanding the source of the AI’s data, they may accidentally plagiarize someone else’s work.

There is also a risk that the training data is not attuned to the laws applied in Canadian courts. For example, content that could be legally published in the U.S. might be considered hate speech in Canada. The Guidelines highlight that different countries have their own copyright and privacy laws, and it is essential for Judges to understand where an AI system sources its information before using it to guide their responsibilities.

How to Safely Use AI

Follow the Core Values of the Court

The Guidelines encourage judicial leadership in the process of AI adoption. They emphasize that the use of AI must be consistent with the core values of the court:

  1. independence
  2. integrity
  3. respect
  4. diligence
  5. competence
  6. equality
  7. impartiality
  8. fairness
  9. transparency
  10. accessibility
  11. timeliness
  12. certainty

Judges must scrutinize AI systems to ensure that they do not violate their ethical and professional obligations.

Strong Security Standards

The Guidelines oblige the courts to have security standards in place for AI. The concern is that AI may inadvertently expose sealed court data, or that an AI algorithm could be tampered with to skew outcomes. It is essential to have a cybersecurity program specific to the AI being used. The Guidelines also warn Judges not to upload sensitive information to a free AI service, as doing so could breach their professional obligation to keep information confidential.

Explainability

The Guidelines provide that any AI system used by Judges must be able to explain its output in an understandable way. Canadian courts are currently considering the use of AI to improve efficiency by managing files. It is important for any AI system to be able to explain its output so that users can decide whether or not they trust that output. The Guidelines emphasize the importance of scrutinizing output from AI and not accepting any statements generated by AI as true without further research. Explainability also facilitates access to justice by empowering users who may not be familiar with legal concepts to question AI.

Continuous Testing

The Guidelines oblige the courts to test their AI systems regularly. As part of regular formal reviews, the courts should consider how external AI providers are able to access and use confidential information.

Education

Lastly, the Guidelines encourage education on the use of AI to ensure Judges can navigate AI effectively and understand the risks.

AI can process data much faster than a human, creating an illusion of effortless efficiency. Users of AI should be disabused of that illusion after considering the serious risks of ethical breaches, bias, and plagiarism. The Guidelines encourage the use of AI systems as a tool to promote efficiency without sacrificing the integrity of the Canadian Court system.