Feedback on the Mechelen international conference on October 23/24, 2020: Artificial intelligence in judicial expertise

by Béatrice Deshayes

Not only did we learn a great deal about Artificial Intelligence from the various prominent speakers over these two days; they also opened our eyes to the vast prospects AI offers for the coming years – in justice as well as in numerous other fields.

Introduction by Etienne Claes

The different round tables dealt with as yet unexplored questions such as:

  • What can AI bring to judicial expertise?
  • How can justice be made more efficient thanks to AI?
  • What are the challenges of the use of AI in justice?
  • Why should judicial experts care about AI?
  • What kind of expertise can be encapsulated in AI systems?
  • What are the ethical aspects of the use of AI?
  • Will expert evidence disappear because of AI?

Uses of AI

Over the two days of the conference, through high-level and very impressive presentations on a range of topics, we learned that AI is already used today for:

  • Image recognition and style imitation (AI is good at detecting patterns and processing huge volumes of data – but bad at considering the specificities of a situation – it is still easy for a human to fool!)
  • Gaming (chess and Go)
  • Chatbots – including advanced applications such as medical assistance to detect depression or autism
  • Autonomous vehicles
  • Automated translation, even for complex legal texts
  • Valuation of real estate
  • Medical diagnosis and healthcare – where, in the opinion of one of the speakers, there are already a lot of gadgets but few really helpful tools (except for smart surgery, which is not AI but rather a robot-driven surgical technique)
  • Route mapping according to traffic and user preferences
  • Hospital management during the Covid crisis
  • Predicting public health scenarios (on the efficacy of vaccines, for example, the databases are still too small for AI, as research only started a few months ago)
  • Predictive justice (according to studies carried out in the US and on ECHR decisions, AI is already capable of predicting court decisions with 70% to 80% accuracy – see the sketch after this list)
  • Fake news and deep fakes…
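To make the predictive-justice item above more concrete, here is a minimal sketch of how such outcome-prediction studies typically proceed: a linear classifier trained on word frequencies from past decisions. The toy corpus and labels below are invented for illustration; the studies mentioned by the speakers trained comparable models on thousands of real decisions.

```python
# Sketch of court-outcome prediction with a bag-of-words classifier.
# The six "facts" texts and their outcomes are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

texts = [
    "applicant held in pre-trial detention for four years without review",
    "hearing conducted publicly with full access to the case file",
    "confession obtained during interrogation without counsel present",
    "domestic courts examined all evidence and gave detailed reasons",
    "journalist convicted for publishing leaked government documents",
    "claim dismissed after adversarial proceedings within a reasonable time",
]
outcomes = [1, 0, 1, 0, 1, 0]  # 1 = violation found, 0 = no violation

X_train, X_test, y_train, y_test = train_test_split(
    texts, outcomes, test_size=2, random_state=0, stratify=outcomes)

# Represent each decision by its word and word-pair frequencies...
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
# ...and fit a linear classifier on the training decisions.
model = LogisticRegression(max_iter=1000)
model.fit(vectorizer.fit_transform(X_train), y_train)

predictions = model.predict(vectorizer.transform(X_test))
print(f"accuracy: {accuracy_score(y_test, predictions):.0%}")
```

On real corpora, measurements of this kind are where the reported 70% to 80% figures come from; on the toy data above, the number is of course meaningless.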

AI can, for example, help medical experts analyse medical imaging, e.g. to avoid missing something in an image – but the result of an AI analysis will always have to be validated by scientific studies, which will only be possible when there is enough data, and when it is guaranteed that the data are complete and not (intentionally or accidentally) corrupted. For forensic uses of clinical data, this is not yet the case. Collecting medical data is a real challenge, not least because of data protection requirements.

AI and justice

In any case, AI must be guided and controlled: rules will have to be defined on what is and is not allowed when using AI.

Regarding the use of AI in court, a first issue concerns the admissibility of evidence produced by AI. As for other aspects of e-evidence, as long as there is no European legislation, national law on electronic evidence applies. This also holds for experts when they intend to use electronic means of investigation – even in transnational cases, for which there is still no specific legislation besides Regulation (EC) 1206/2001. There are significant challenges to overcome before AI can be used in expert evidence, as the expert will have to explain how the expertise was conducted and, in particular, how the rights of the defence were observed at each stage.

When defining what a more efficient justice thanks to AI would look like, several questions must be asked: What is a “good decision”? What is the “acceptable quality” of such a decision, and how should it be evaluated? What is a “reasonable time” for the preparation and delivery of a decision, or of an expert opinion, and how can it be quantified? What are “reasonable costs”? Such questions can only be answered by (i) applying a legal rule and (ii) resolving a value conflict.

Criteria for the evaluation of these questions can be:

  • transparency of the process,
  • respect for the adversarial principle / the principles of fair trial,
  • indication of the reasons on which the opinion is based and thorough answers to the parties’ arguments,
  • unbiased decision-making process,
  • careful listening to the parties: the parties’ impression that their arguments have been heard is one of the keys to the feeling that justice has been done,
  • the “catharsis” provided by the process, giving the feeling that justice has been rendered,
  • consideration of non-verbal aspects and of topical issues.

New technologies such as AI can help the judge with some of these aspects, for instance monitoring and analysing case law, which can be automated with algorithms processing large datasets – but not with resolving a value conflict, and particularly not with the above-mentioned “human” aspects of rendering justice: in justice, AI can only be a tool, and it should only be used where it helps to make more efficient decisions. It can be used, for example, to calculate an accurate fine or maintenance allowance in standard cases (a sketch of such a computation follows below), but it will certainly not be helpful on issues like freedom of speech. In any case, AI can only be a support to the expert or the judge – it will not be able to replace them. And even if – as may happen in the future – a judgment is rendered by AI, recourse to a human judge, specifically trained for this, must always remain possible.
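As an illustration of the kind of “standard case” computation meant here, the following minimal sketch suggests a maintenance allowance from a simple scale and hands non-standard cases back to the judge. The formula, thresholds and field names are invented for illustration, not taken from any real guideline.

```python
# Sketch of a rule-based maintenance-allowance suggestion for standard cases.
# The percentage scale below is invented; real national maintenance tables
# would be encoded in the same way.
from dataclasses import dataclass

@dataclass
class Case:
    payer_net_income: float  # monthly net income, EUR
    children: int
    shared_custody: bool

def maintenance(case: Case) -> float | None:
    """Return a suggested monthly allowance, or None if the case needs a judge."""
    # Non-standard situations are flagged for human decision, not computed.
    if case.shared_custody or case.children == 0:
        return None
    rate = 0.10 + 0.04 * min(case.children, 4)  # invented percentage scale
    return round(case.payer_net_income * rate, 2)

suggestion = maintenance(Case(payer_net_income=2500.0, children=2, shared_custody=False))
print(suggestion)  # 450.0 – a starting point the judge can adjust, not a verdict
```

The design point is the early return: the tool computes only where a standard scale applies, and explicitly declines everything else.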

Additionally, to keep control over robot-made decisions, such judgments should be published as open data (after anonymization) and regularly audited by independent bodies. Such bodies must keep in mind all the possible biases of AI: which factors and which data were fed into the algorithm, how were they weighted, are there “blind spots” or other forms of bias, can the reasoning that led to the decision be retraced, … A sketch of one such audit check follows below.
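One simple check an independent auditor could run on published, anonymized decisions is to compare favourable-outcome rates across groups. The records below are invented for illustration; a real audit would of course cover many more factors than this single disparity measure.

```python
# Sketch of a bias check on published, anonymized AI decisions:
# compare favourable-outcome rates across a grouping attribute.
from collections import defaultdict

# Invented audit log: (grouping attribute kept for auditing, 1 = favourable outcome).
decisions = [
    ("region_a", 1), ("region_a", 1), ("region_a", 0), ("region_a", 1),
    ("region_b", 0), ("region_b", 0), ("region_b", 1), ("region_b", 0),
]

counts: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # [favourable, total]
for group, outcome in decisions:
    counts[group][0] += outcome
    counts[group][1] += 1

rates = {group: fav / total for group, (fav, total) in counts.items()}
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap = {gap:.0%}")  # a large gap is a red flag that calls for review
```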

One of the issues to be dealt with will also be the liability of the supplier of the AI system – who will probably insert limitation-of-liability clauses into its general terms and conditions…

Another aspect will be the shift of the lawyers’ and judges’ role towards that of an analyst of legal risks and a “compliance officer” able to advise on whether an AI-made decision has been correctly rendered!

In any case, existing fundamental rights must be observed at each stage of the process, and one might think about creating additional rights for cases where robots are used in the justice field. In particular, there should be “room for error”, especially in predictive justice: no one should be deterred from initiating proceedings because a robot has calculated only a 49% chance of winning them!

Alain Pilette reminded the audience that, for judicial experts, it will be important to have databases enabling courts to check whether an expert is recognized or officially registered as a judicial expert; the database(s) should also contain relevant information on their specialities and qualifications as well as on their location. Exchanges should also be dematerialized, at least in civil and administrative proceedings, via a platform also accessible to the judge, the parties and their counsel, and experts should be able to submit their opinions on this platform.
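As a rough illustration of what such a database record could contain, here is a minimal sketch; the field names and the example entry are assumptions for illustration, not an existing schema.

```python
# Sketch of a record for a judicial-expert database and a simple court-side
# lookup. Field names and the example entry are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ExpertRecord:
    name: str
    registered: bool  # officially registered as a judicial expert?
    specialities: list[str] = field(default_factory=list)
    qualifications: list[str] = field(default_factory=list)
    location: str = ""

def find_experts(records: list[ExpertRecord],
                 speciality: str, location: str) -> list[ExpertRecord]:
    """Filter registered experts by speciality and location, as a court would."""
    return [r for r in records
            if r.registered and speciality in r.specialities and r.location == location]

registry = [
    ExpertRecord("A. Example", True, ["medical imaging"], ["MD, radiology"], "Brussels"),
]
print(find_experts(registry, "medical imaging", "Brussels"))
```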

However, working with AI will differ from a “traditional” analysis: an algorithm will be working on a certain amount of data, and, in the opinion of the conference speakers, there should always be human control of the results at different steps of the analysis.

To deal with ethical issues, there should be a legal basis for working with AI; at the current stage, many entities operate without any legal framework, even in the medical or military sectors, which should not continue. There have already been interesting reflections at European level: the CEPEJ has issued an Ethical Charter on the use of artificial intelligence in judicial systems and their environment, which is a first basis for such a framework. The CCBE has also published considerations on the legal aspects of AI, which will provide additional support.

But beyond such ethical rules, the real challenge will be to find technical solutions that ensure these rules are observed. One of the major issues will be keeping human control over the use of AI – while it is often very difficult to know how an AI system works: this lack of transparency is one of the most difficult aspects of working with AI (the “black box effect”).
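One partial, technical answer to the black-box problem is to prefer models whose reasoning can be retraced. As a minimal sketch (the toy texts and labels are invented for illustration), a linear text classifier exposes a signed weight for each word, which an auditor can read directly:

```python
# Sketch of retraceable reasoning: with a linear model, each word carries a
# signed weight that an auditor can inspect. Toy texts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "detention without judicial review",
    "fair and public hearing",
    "no counsel during interrogation",
    "detailed reasons given by the court",
]
outcomes = [1, 0, 1, 0]  # 1 = violation found, 0 = no violation

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(texts), outcomes)

# List the words that pushed decisions most strongly in either direction.
weights = zip(vectorizer.get_feature_names_out(), model.coef_[0])
for word, weight in sorted(weights, key=lambda w: abs(w[1]), reverse=True)[:5]:
    print(f"{word:15s} {weight:+.3f}")
```

Deep models offer no such direct readout, which is precisely why the “black box effect” is a governance problem and not only a technical one.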

Lastly, the speakers agreed on the conclusion that – like any scientific development – AI can be used for better or for worse, and that human control will be the key to a reasonable use of AI.