In 2019 and 2020 I contributed to a CSIRO report (Ibarra et al., 2020) on how machine learning systems can be designed so that expert users in law enforcement, such as investigators, place an appropriate level of trust in them. The report is available here: Machine Learning and Responsibility in Criminal Investigation.
The official project page can be found here: Trust in Machine Learning and Law Enforcement.
This report was also featured on CSIRO’s Data61 website: The Journey Towards Understanding User Trust and How to Design Trustworthy Systems (Data61).
References
Machine Learning and Responsibility in Criminal Investigation
Georgina Ibarra, David Douglas, and Meena Tharmarajah
Sep 2020
Most of the literature on using machine learning (ML) systems in criminal investigations concerns whether using these systems undermines public trust in law enforcement. A related concern is whether investigators themselves should trust these systems. But what do we really mean by ‘trust’? Many methods have been developed for promoting fairness, transparency, and accountability in the predictions made by ML systems; however, a technical approach to these problems needs to be accompanied by a human-centred approach to user trust. To address social, ethical, and practical issues, these systems need to present information in such a way that the people who use them can make balanced decisions about whether or not to trust them. In this report we use the lens of user experience (UX) and social science to examine how the role responsibilities and accountability of criminal justice experts may be affected by the use of ML systems in criminal investigations. How will these experts use such systems, and what effect might that have in this regulated and legislated environment? To understand this, we explore the concepts of responsibility, accountability, and transparency in the context of criminal investigations alongside the various levels of automation and AI assistance, ranging from full human control to full automation. We discuss why ML systems used in criminal investigations can be considered ‘human-in-the-loop’ forms of automation, where the systems offer decision support to users. We also explore the issues involved in calibrating trust in an ML system, describing the characteristics of automated systems that affect the ability of users to determine whether their trust in a system is legitimate, and the risks of misusing, rejecting, and abusing automation by experts operating in criminal justice settings. We highlight the risks connected with using ML systems, and how these risks might affect the use of these systems in criminal justice contexts. The report also summarises additional areas of responsibility across the ecosystem of machine learning systems in a criminal justice context, including: the responsibility of law enforcement institutions to train their workforce in the skills needed to use data and insights from ML systems; the responsibility of specific departments in these organisations to examine the intended use of such systems, adjusting their internal policies accordingly and ensuring their employees are alert to how this affects their responsibility or accountability; and the responsibility of technologists to be transparent about the system’s trustworthiness, and to allow experts to accurately calibrate their trust in the system by closely observing and responding to how they interpret and use different types of predictive data in investigative processes.