Robot Judgements In The Courtroom: What Do You Think?

This blog is a response to findings reported by the Digital Technologies Power and Control (DTPC) project. DTPC is a SPRITE+ funded research project led by a cross-disciplinary team of researchers drawn from psychology, social science, human computer interaction, and computer science. Its team is currently investigating how digital technologies can empower and/or disempower communities and organisations. Two of its guiding research questions are:

  • Who will have a say in how technology is used to inform decisions, and how will marginalised voices be heard?
  • What could prevent future technologies deepening the digital divide, worsening existing power asymmetries, and creating new ones?

To answer these questions, DTPC researchers led four focus groups with marginalised community members based in the Midlands and northern regions of the UK. For DTPC researchers, ‘marginalised’ participants were a) those with constrained access to digital technology and/or b) those with limited or no skills to use digital technology. The project also interviewed stakeholders from a range of charitable, corporate and government organisations, many of whom held a vested interest in digital technology. Several common themes emerged from that data, including:

  • The extent to which artificial intelligence technology is beginning to influence community behaviour and ways of life. 
  • Concerns about digital-by-default initiatives, led by local councils and other organisations, which move access to local services online.
  • The persistent issue of generating consumer trust in digital systems.
  • The accelerated infringement of digital privacy in the medical sector and beyond. 

 

The first of these themes, namely the extent to which artificial intelligence technology is beginning to influence community behaviour and ways of life, is the focus for this blog entry. Every DTPC participant discussed how they thought power and control relationships are configured through digital technology. Two participants representing digital corporations voluntarily raised an issue which concerned them: the emergent relationship between artificial intelligence (AI) and the courtroom. This blog will now share and discuss their perspectives alongside selected literature.

 

AI can be defined as the capacity of a machine to imitate human behaviour (p.139). As a rule, courtroom AI systems are designed around ‘narrow AI’ principles, which refer to:

‘…the process by which a program learns how to perform a specific, or narrow, task. The program gathers many data points—all of which relate to the relevant task—and processes the data to more accurately perform the task’ (p.1087).

Advocates of this technology argue that these systems make use of machine learning to improve their accuracy over time. Fiechuk explains that ‘machine learning is the science behind computer programs and their ability to learn from experience and therefore, performance improves over time, increasing the effectiveness of AI’ (p.140).
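To make the idea of a narrow, task-specific learner concrete, here is a minimal sketch in Python. It is purely illustrative: the data is synthetic, the scikit-learn library is assumed to be available, and it bears no relation to any real courtroom system. It simply shows a program whose accuracy on one specific task tends to improve as it processes more data points.

```python
# Illustrative sketch of 'narrow AI' as described above: a model trained on
# many data points for one specific task, whose accuracy tends to improve as
# it sees more examples. The data is synthetic; this is not a courtroom system.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data points, all relating to a single (hypothetical) narrow task.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 'Learning from experience': held-out accuracy as the training data grows.
for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>4} examples -> test accuracy {model.score(X_test, y_test):.2f}")
```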

 

Given the expanding scale of populations worldwide, and knowing that human judgements can be slow and arduous, machine learning AI might help courtrooms scale to meet the needs of the contemporary age. One way this could be accelerated further is by implementing ‘robot judges’. Ulenaers explains that instead of using ‘AI assistants’ to support judges in their decision-making, robot judges can replace humans altogether and decide cases autonomously, in fully automated court hearings. While this appears controversial, Estonia is currently exploring how to use a robot judge (an AI-based program) to decide small claims disputes. Estonia’s aim is to address the backlog of small claims cases, so that human judges have more time to deal with demanding cases. DTPC Participant A adds to the argument for courtroom AI by noting that human judgement is not necessarily a good thing; it is fallible:

A: People like judges making decisions. You’ve heard if you have the judge after lunch, you’ll get more leniency than if it’s before lunch.

Elrod agrees, suggesting that, ‘…even when dealing with the same facts and the same unambiguous (we are told) statute, judges will still reach differing conclusions’ (p.1089).

 

Concerns also remain about biases inherent in police interrogations. Noriega (2020) explains that there is an ongoing racial and gender divide in statistical data regarding false confessions. To address this, Noriega proposes that AI systems embedded into interrogation room settings could safeguard against the discrimination and bias associated with false confessions. The ultimate aim would be to consistently achieve empirical fairness.

 

DTPC Participant A continues to explain that courtroom AI has the potential to achieve fairness between one judgement and the next:

A: If you’re looking at AI to do what they hope, which is to take out the human influence, that can be very empowering and can actually mean there is more fairness, than there is with human judgement. This idea that there is something great about human judgement and something bad about digital judgement is wrong. Human judgement is incredibly fallible and there is a possibility that digital judgement could possibly be better.

AI judgements might also be used in sentencing. DTPC Participant B adds that AI systems are capable of making higher-quality judgements, including ones which would be impossible for humans to compute:

B: …there are a huge number of data points which could impact on the likelihood of someone committing a crime after they have returned to society. Too many data points for any one human being to consume realistically and then generate an answer to.

DTPC Participant B then explains that narrow AI systems can assess whether a person has fraternised with gang members, which location they are likely to return to, and what the crime rate in that area is likely to be. Elrod terms these systems ‘evidence-based risk assessment programs’ and explains that they already assist criminal prosecution in the US states of Pennsylvania, Kentucky, and Wisconsin. These systems accept criminal data, including the crime, the sentence, and subsequent offences. They also factor in demographic data such as age and gender, among other attributes. The system then compares this data to an individual awaiting sentencing in order to determine:

(1) those areas in which the individual will need special assistance — like employment, housing, or substance abuse; and

(2) the individual’s pretrial, general, and violent recidivism risk. (pp. 1089-1090)

This summary gives a good basic account of how narrow AI courtroom systems operate. Evidently, such systems bring together many data points to perform a specific judgement.
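For illustration only, the comparison step described above might be sketched as follows. This is a deliberately simplified, hypothetical toy written in Python: the record fields, the similarity measure, and the data are all invented for the example, and real evidence-based risk assessment programs are considerably more complex and not publicly specified.

```python
# Hypothetical toy sketch of the comparison step described above: an individual's
# record is matched against historical cases, and the outcomes of the most similar
# cases are used to estimate a recidivism risk. Every field, threshold, and value
# here is an illustrative assumption, not a description of any real system.
from dataclasses import dataclass

@dataclass
class CaseRecord:
    age: int
    prior_offences: int
    reoffended: bool  # known outcome in the historical data

def estimate_risk(individual_age: int, individual_priors: int,
                  history: list[CaseRecord], k: int = 3) -> float:
    """Share of the k most similar historical cases that reoffended."""
    ranked = sorted(
        history,
        key=lambda c: abs(c.age - individual_age) + abs(c.prior_offences - individual_priors),
    )
    nearest = ranked[:k]
    return sum(c.reoffended for c in nearest) / len(nearest)

history = [CaseRecord(22, 3, True), CaseRecord(45, 0, False),
           CaseRecord(30, 1, False), CaseRecord(24, 2, True),
           CaseRecord(52, 1, False)]
print(f"estimated recidivism risk: {estimate_risk(25, 2, history):.2f}")
```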

 

Despite the virtues of courtroom AI cited above, concerns remain about the explainability of the programs behind it. Gless remarks that ‘triers of fact will have to decide whether to trust an AI-generated statement that can only partially be explained by experts’ (p.207). Trusting courtroom AI remains challenging because, typically, the ‘how’ and ‘why’ behind AI-generated statements cannot be fully explained. Consequently, using AI to inform court judgements can be fraught with danger for those whose lives become defined by its decision-making.

 

Whether judgements are determined by narrow AI programs or simply assisted by them, questions remain about the agency of the subjects involved. Are subjects empowered by a fairer, faster judgement, one devoid of human bias? Or conversely, is the subjective nature of humanness becoming an unfashionable entity, better eliminated from societal discourse?

 

Ben Evans - Postdoctoral Research Associate, Human Computer Interaction - The Open University