Friday, September 17, 2021

    AI on Criminal Justice by Dr Jassim Haji

    With the advent of big data analytics, machine learning and artificial intelligence (AI) systems, both the assessment of the risk of crime and the operation of criminal justice systems are becoming increasingly technologically sophisticated. People disagree over whether these technologies are a panacea for criminal justice systems, for example by reducing case backlogs, or whether they will further exacerbate social divisions and endanger fundamental liberties; the two camps nevertheless agree that such new technologies have important consequences for criminal justice systems. The automation brought about by AI systems challenges us to take a step back and reconsider fundamental questions of criminal justice: What does explaining the grounds of a judgment mean? When is the process of adopting a judicial decision transparent? Who should be accountable for (semi-)automated decisions, and how should responsibility be allocated along the chain of actors when the final decision is facilitated by the use of AI? What is a fair trial? And is the accused denied due process of law when AI systems are used at some stage of the criminal procedure?

    The technical sophistication of the new AI systems used in decision-making processes in criminal justice settings often leads to a ‘black box’ effect: the intermediate steps in reaching a decision are hidden from human oversight by the technical complexity involved. Several areas of applied machine learning illustrate how new methods, such as unsupervised learning and active learning, operate in ways that reduce or remove human intervention. In the active learning approaches used for natural language processing, for instance, the learning algorithm has access to a large corpus of unlabelled samples and, over a series of iterations, selects some of those samples and asks a human annotator for the appropriate labels. The approach is called ‘active’ because the algorithm itself decides, based on its current hypothesis, which samples the human should annotate; the aim is to minimise the human labelling effort required. Artificial neural networks (ANNs), in turn, learn to perform tasks by considering examples, generally without being programmed with task-specific rules. This makes them extremely useful in many areas, such as computer vision, natural language processing, ocean modelling in geoscience, or distinguishing legitimate from malicious activities in cybersecurity. In unsupervised settings they do not even demand labelled samples, e.g. in order to recognise cats in images or pedestrians in traffic, but can build up a representation of what a cat looks like on their own.

    The operations of such systems are often not transparent even to the researchers who built them. While this may not be problematic in many areas of applied machine learning, AI systems must be transparent when used in judicial settings, where the explainability of decisions and the transparency of the reasoning are of significant, even civilizational, value. A decision-making process that lacks transparency and comprehensibility risks being perceived as illegitimate and autocratic. Due to the inherently opaque nature of these AI systems, the new tools used in criminal justice settings may thus be at variance with fundamental liberties.
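    The active learning loop described above can be sketched in a few lines. This is a minimal, self-contained illustration under simplifying assumptions: the names (`oracle`, `ThresholdModel`) are invented for the example, the "human annotator" is simulated by a function, and the model is a toy one-dimensional classifier rather than a real NLP system.

```python
import random

def oracle(x):
    """Stands in for the human annotator: the true label is 1 for x >= 5."""
    return 1 if x >= 5 else 0

class ThresholdModel:
    """Toy one-dimensional classifier: predicts 1 when x >= threshold."""
    def __init__(self):
        self.threshold = 0.0

    def fit(self, labelled):
        # Place the threshold midway between the largest sample
        # labelled 0 and the smallest sample labelled 1.
        zeros = [x for x, y in labelled if y == 0]
        ones = [x for x, y in labelled if y == 1]
        if zeros and ones:
            self.threshold = (max(zeros) + min(ones)) / 2

    def uncertainty(self, x):
        # The model is least certain about points near its threshold.
        return -abs(x - self.threshold)

random.seed(0)
pool = [random.uniform(0, 10) for _ in range(200)]       # unlabelled corpus
labelled = [(0.0, oracle(0.0)), (10.0, oracle(10.0))]    # two seed labels

model = ThresholdModel()
for _ in range(10):  # ten query iterations
    model.fit(labelled)
    # The algorithm, not the human, picks which sample gets annotated:
    query = max(pool, key=model.uncertainty)
    pool.remove(query)
    labelled.append((query, oracle(query)))  # ask the 'human' for a label

model.fit(labelled)
print(round(model.threshold, 2))  # converges toward the true boundary at 5
```

    With only ten queries out of 200 samples, the model homes in on the true decision boundary, which is exactly the labelling economy that makes the human annotator's role minimal and the algorithm's sample selection decisive.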
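    The claim that neural networks "learn from examples without task-specific rules" can likewise be made concrete. The sketch below uses a single perceptron (one artificial neuron, the simplest building block of an ANN) that learns the logical OR function purely from labelled examples; no OR-specific rule is coded anywhere, only a generic weight-update rule.

```python
# Training data: inputs and their OR labels. Nothing about OR itself
# is encoded in the learning procedure below.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # connection weights, starting with no knowledge
b = 0.0         # bias term
lr = 0.1        # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):  # repeatedly show the neuron its examples
    for x, target in examples:
        error = target - predict(x)
        # Generic perceptron learning rule: nudge weights toward
        # the correct output whenever a prediction is wrong.
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])  # prints [0, 1, 1, 1]
```

    The same update rule would learn AND, or any other linearly separable function, from a different set of examples; the "knowledge" lives entirely in the learned weights.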
