Artificial intelligence in the courts: Myths and reality
By Eleftherios Chelioudakis
Many people think that the term “artificial intelligence” is synonymous with technological progress, and it is frequently presented as the hope for resolving the serious problems that plague our societies.
Through its articles, our team has tried to explain to our readers what the term “artificial intelligence” means, as well as why we should be cautious about developments in this sector.
In this article we will focus on the frenzy surrounding artificial intelligence, examine whether the idea of using it in the field of justice constitutes a new and innovative approach, and finally highlight issues that merit particular attention. This is not, however, the first time Homo Digitalis has turned its attention to the use of artificial intelligence in the field of justice: we have already hosted an article on the philosophical issues raised by the replacement of the judiciary with machines and Machine Learning.
In the information society we live in, the increased use of computers and the internet results in the rampant growth of the modern human's digital footprint. Smart devices, such as smartphones and wearable systems that record our sporting activity and health status, and even simple household appliances such as coffee machines, fridges and toothbrushes, collect and process a flood of information about their users, revealing aspects of their daily life and their personality.
The volume of information produced is so enormous that the gathered information itself acquires value. Large firms base their business model on the exploitation of this information. The goods and services of these firms are provided “for free”, and users “pay” with their personal data, which are analysed and shared with third parties in order to create the targeted advertisements that generate profit.
As societies, we are moving toward the belief that collecting information will bring us closer to acquiring knowledge. As human beings we lack the intellectual capacity to process the vast amounts of information produced; thus, we place our hopes in the computing power of machines. Data analysis, the identification of correlations within the data, and the production of forecasts and comparisons are presented as the key that opens the door to controlling diseases, combating crime, better administration of our cities and our personal well-being. At least, this is the idea we are called to embrace.
Thus, fields of artificial intelligence that two decades ago were considered outdated and ineffective, such as machine learning, suddenly attracted renewed attention. The modern smartphone, the widespread use of the internet and computers, the improvement of processors, and the increased capacity of data-storage media gave machine-learning algorithms the requisite fuel: large amounts of data.
Serious problems in various sectors, such as health, local governance, self-improvement, transport and policing, can supposedly be resolved as if by magic through the analysis of the information gathered. Naturally, the judicial system could not be absent from this list.
At this point, we should distinguish between the use of artificial intelligence mechanisms and tools to support court administration (such as Natural Language Processing tools, which aim at automating bureaucratic procedures and at the rapid drafting, registration and analysis of judgments and other documents), and the use of such mechanisms and tools in the administration of justice and in influencing the decision-making of judicial authorities. This article does not address the first category, as the challenges and limitations arising there are the same as in other application areas of Natural Language Processing technology. Instead, the article focuses on the second category and on the idea that artificial intelligence mechanisms could assist judicial authorities in the decision-making procedure.
The truth is that the idea of using technology in judicial decision-making is neither pioneering nor innovative. On the contrary, it is an old idea that has appeared and been used extensively in foreign judicial systems, such as those of Canada, Australia and the U.S.A., since the end of the last century, and it is well known in the related field of forensic psychology and psychiatry. Systems such as the “Level of Service Inventory-Revised (LSI-R)” and the “Correctional Offender Management Profiling for Alternative Sanctions (COMPAS)” have been used as risk assessment tools to assist the judge at various stages of criminal proceedings, such as the suspect's detention, conviction, sentencing, and the decision on early release on parole before the full sentence has been served. Frequently, technologies that merely implement mathematical and statistical methods of risk assessment are baptised “artificial intelligence” by their creators and the media. These systems have been analysed repeatedly over the last few decades, and critical reviews of their reliability and effectiveness differ depending on which agency finances the relevant research.
Meanwhile, although strong reservations have been expressed about the use of such technologies in the field of justice, both the Council of Europe (1. Parliamentary Assembly (PACE): Recommendation 2101 (2017), Technological convergence, artificial intelligence and human rights; 2. Parliamentary Assembly: Motion for a recommendation on Justice by algorithm – the role of artificial intelligence in policing and criminal justice systems; and 3. European Commission for the Efficiency of Justice (CEPEJ): European Ethical Charter on the use of artificial intelligence in judicial systems) and the European Union (1. European Commission: Communication on Artificial Intelligence for Europe; and 2. European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG): Draft Ethics Guidelines for Trustworthy AI) have, through their actions over the past two years, been exploring the possible use of artificial intelligence mechanisms in the justice systems of their Member States.
Although we are against the introduction of artificial intelligence mechanisms into the decision-making procedure and believe that this approach is not the solution to any of the problems besetting the judiciary, the frenzy that has prevailed over the last few years makes it important to mention briefly the main issues raised by the envisaged use of artificial intelligence in the field of justice, especially in criminal proceedings. The following enumeration is indicative:
- Decision-making based solely on automated processing: As provided by Article 11 of Directive 2016/680, taking a decision based solely on automated processing which produces adverse legal effects concerning the data subject, or significantly affects him or her, is prohibited, except where such a decision is authorised by law and appropriate safeguards for the data subject's rights and freedoms are provided, including at least the right to human intervention. It is therefore recognised that the human factor is an indispensable component of the decision-making procedure.
- Risk of discrimination and quality of the data used: Artificial intelligence mechanisms that are trained on processed data are dependent on the quality of those data. In simple terms, the predictions about an individual's future behaviour are based on the data of other people on which the algorithm has been trained. If the quality of the data used during training is low, or if any bias underlies those data, the predictions are condemned to be unreliable. They may also be unlawful if they are based on personal data which are by their very nature particularly sensitive, as defined in Articles 10 and 11 of Directive 2016/680.
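How biased training data propagates into biased predictions can be sketched in a few lines of Python. The scenario, the neighbourhood labels and the records below are entirely invented for illustration; real risk assessment tools are far more complex, but the dependence on their inputs is the same:

```python
from collections import defaultdict

# Invented historical records: (neighbourhood, reoffended?). Over-policing
# of neighbourhood "A" means its residents dominate the recorded data.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False),
]

# "Training": estimate a reoffending rate per neighbourhood.
counts = defaultdict(lambda: [0, 0])  # neighbourhood -> [reoffended, total]
for area, reoffended in history:
    counts[area][0] += int(reoffended)
    counts[area][1] += 1

def predicted_risk(area):
    reoffended, total = counts[area]
    return reoffended / total

# The model simply reproduces the skew of its inputs: any resident of "A"
# is scored high-risk regardless of individual circumstances.
print(predicted_risk("A"))  # 0.75
print(predicted_risk("B"))  # 0.0
```

No amount of computing power corrects this: the mechanism has no access to the policing practices that shaped its training set, so the bias passes straight through to the prediction.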
- Technical training of judges and lawyers: Before any artificial intelligence mechanism is used in hearings and in the decision-making process, it is a reasonable prerequisite that the professionals who would use it on a daily basis are familiar with the technology. Unfortunately, most judges and lawyers have limited technical knowledge and lack programming skills or even basic knowledge of the capabilities and functioning of the various artificial intelligence mechanisms. In light of the above, basic training of legal practitioners is necessary, starting already at the level of bachelor studies, along with retraining of judges during their judicial education.
- Clear rules on the ownership of the data used by artificial intelligence mechanisms: Under no circumstances should the companies that created an artificial intelligence mechanism have access to the personal data of those tried and convicted, nor use such data for commercial or research purposes. Justice cannot be a profit-making sector.
- A clear and explainable mode of operation of the artificial intelligence mechanism: The field of justice is interwoven with the principles of transparency and impartiality. Therefore, if a judge bases a decision, even partly, on the prediction of an artificial intelligence mechanism, it must be possible to explain why the mechanism arrived at that prediction. If such an explanation is not possible, the judge's decision based on it does not comply with the principle of transparency and cannot be considered impartial. At this point, it should be underlined that popular yet complex mechanisms, such as neural networks, make it particularly difficult to meet this requirement.
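To make the contrast concrete, here is a minimal sketch of an inherently explainable model: a linear score whose prediction can be decomposed feature by feature, which a neural network does not permit. The weights and features are invented purely for illustration and do not correspond to any real tool:

```python
# Invented weights of a transparent linear risk score (illustration only).
WEIGHTS = {"prior_convictions": 0.4, "age_under_25": 0.2, "employed": -0.3}

def risk_score(features):
    """Weighted sum: the prediction is a simple, auditable formula."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

def explain(features):
    """Per-feature contributions -- the 'reason' behind the prediction,
    which a judge could cite and a defendant could contest."""
    return {name: WEIGHTS[name] * value for name, value in features.items()}

defendant = {"prior_convictions": 2, "age_under_25": 1, "employed": 0}
print(risk_score(defendant))  # 1.0
print(explain(defendant))     # contribution of each feature to the score
```

With a deep neural network the prediction emerges from millions of weights with no such per-feature decomposition, which is precisely why meeting the transparency requirement becomes so difficult.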
- Review of artificial intelligence mechanisms' effectiveness by independent authorities: A scheduled, regular assessment of the validity of the predictions made by artificial intelligence mechanisms should be conducted. An independent supervisory authority with sufficient financial resources and personnel with a high level of knowledge and experience is the ideal body for this task. The authority's assessments should be based both on quantitative data, such as statistics on the accuracy of the mechanisms' predictions, and on qualitative factors, such as the conclusions drawn from case-by-case examination.
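The quantitative side of such an audit can be sketched as follows: comparing a mechanism's past predictions against recorded outcomes, both overall and per group, to surface unequal error rates. The records and group labels are invented for illustration:

```python
def audit(records):
    """records: list of (group, predicted_high_risk, actually_reoffended).
    Returns overall accuracy and the false-positive rate per group."""
    total = correct = 0
    fp = {}  # group -> [false positives, actual negatives]
    for group, predicted, actual in records:
        total += 1
        correct += predicted == actual
        fp.setdefault(group, [0, 0])
        if not actual:                    # person did not reoffend
            fp[group][1] += 1
            fp[group][0] += int(predicted)  # ...but was flagged high-risk
    fpr = {g: (f / n if n else 0.0) for g, (f, n) in fp.items()}
    return correct / total, fpr

# Invented audit sample: group A is wrongly flagged more often than group B.
records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", False, False), ("B", False, True), ("B", False, False),
]
accuracy, false_positive_rates = audit(records)
print(accuracy)              # 4 of 6 predictions correct
print(false_positive_rates)  # {'A': 0.5, 'B': 0.0}
```

A headline accuracy figure alone would hide that innocent members of group A are flagged at a far higher rate, which is why the assessment must be disaggregated and complemented by qualitative, case-by-case review.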
Undoubtedly, the implementation of artificial intelligence mechanisms in any sector of modern life is a complex issue requiring a multidisciplinary approach. In any event, it is not merely a legal issue; on the contrary, it has intense ethical and social aspects that require serious contemplation. Rapid technological development is of the utmost importance for improving our quality of life, and we should certainly integrate it into our society. However, its inclusion should take place with thorough preparation and planning. Only then will we experience the benefits of new technologies while severely limiting the challenges and dangers to the protection of Human Rights.