Algorithmic Prediction in Policing: 
Assumptions, Evaluation, and Accountability (with Janet Chan)

Lyria Bennett Moses
Short bio: 

Lyria Bennett Moses is Associate Professor and Director of Learning and Teaching in the Faculty of Law at UNSW Australia, Chair of the Australia Chapter of the IEEE Society on the Social Implications of Technology (SSIT) and Academic Co-director of the Cyberspace Law and Policy Community. Lyria’s research explores issues around the relationship between technology and law, including the types of legal issues that arise as technology changes, how these issues are addressed in Australia and other jurisdictions, the application of standard legal categories such as property in new socio-technical contexts, the use of technologically specific and sui generis legal rules, and the problems of treating “technology” as an object of regulation. She is currently a key researcher on the Data to Decisions Cooperative Research Centre, where she is involved in a comparative project exploring the technological frames and approaches of different stakeholders relating to “Big Data” technologies. Recent publications include ‘Using Big Data for Legal and Law Enforcement Decisions: Testing the New Tools’ (2014) 37(2) University of New South Wales Law Journal 643 and ‘Is Big Data challenging criminology?’ (2015) Theoretical Criminology (forthcoming) (both co-authored with Janet Chan), as well as ‘How to Think about Law, Regulation and Technology: Problems with “Technology” as a Regulatory Target’ (2013) 5(1) Law, Innovation and Technology 1. Lyria is a graduate of UNSW (BSc (Hons), LLB) and Columbia University (LLM, JSD); she holds the University Medal in Pure Mathematics and was an Associate in Law at Columbia University.

Abstract: 

The goal of predictive policing is to forecast where and when crimes will take place in the future. In less than a decade since its inception, the idea has captured the imagination of police agencies around the world. An increasing number of agencies are purchasing software tools that claim to help reduce crime by mapping the likely locations of future offences to guide the deployment of police resources. Yet the claims and promises of predictive policing have not been subject to critical examination. This paper will provide a long-overdue review of the available literature on the theories, techniques and assumptions embedded in various predictive tools. Specifically, it highlights three key issues about the use of algorithmic prediction in policing of which researchers and practitioners should be aware:

Assumptions: The historical data mined by algorithms used to predict crime do not reveal the future by themselves. The algorithms used to gain predictive insights rest on assumptions about accuracy, continuity, the irrelevance of omitted variables, and the primacy of particular kinds of information (such as location) over others. In making decisions based on these algorithms, police are also steered towards particular kinds of decisions and responses to the exclusion of others. Understanding the assumptions inherent in predictive policing is crucial in critiquing the notion of data-based decision-making as “scientific” in the sense of “value-free”.
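To make these assumptions concrete, consider a deliberately naive sketch (written in Python; it is not any vendor’s actual algorithm, and every name and figure in it is hypothetical): a predictor that ranks grid cells purely by their historical incident counts.

from collections import Counter

def predict_hotspots(incidents, top_n=3):
    # incidents: iterable of (x, y) grid cells of past recorded crimes.
    # Ranking cells by raw historical counts silently assumes continuity
    # (past locations predict future ones), accuracy (records reflect
    # crime rather than recording or enforcement practices), the
    # irrelevance of omitted variables (nothing but the cell matters),
    # and the primacy of location over all other information.
    counts = Counter(incidents)
    return [cell for cell, _ in counts.most_common(top_n)]

# Hypothetical recorded incidents on a coarse grid:
past_incidents = [(2, 3), (2, 3), (2, 3), (5, 1), (5, 1), (0, 7)]
print(predict_hotspots(past_incidents))  # [(2, 3), (5, 1), (0, 7)]

Even in this toy version, the output inherits whatever shaped the input records: if past enforcement concentrated recording in particular cells, those cells will dominate the ranking, which is one reason the result cannot be treated as “value-free”.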

Evaluation: Figures quoted by vendors of these technologies in the media imply that the tools are successful in reducing crime. However, these figures are not based on published evaluations, the methodologies behind them are unclear, and their relevance to notions of success is assumed rather than analysed. While some evaluations have been conducted, and a high-quality evaluation is underway, there is currently insufficient evidence of the effectiveness of predictive policing programs.
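To illustrate why such figures cannot be taken at face value, the following sketch (with entirely hypothetical numbers) shows the minimum comparison a credible evaluation would need: the change in recorded crime where a tool was deployed, set against the change in comparable areas where it was not.

def naive_effect(treated_before, treated_after, control_before, control_after):
    # Difference-in-differences in percentage points: the change in the
    # area where the tool was deployed, minus the change in a comparable
    # control area over the same period.
    treated_change = (treated_after - treated_before) / treated_before
    control_change = (control_after - control_before) / control_before
    return (treated_change - control_change) * 100

# Hypothetical: a 12% drop where the tool was used looks impressive
# until a 10% drop in comparable areas is taken into account.
print(round(naive_effect(100, 88, 100, 90), 1))  # -2.0 percentage points

Even this comparison falls well short of a proper evaluation, since it says nothing about how deployment sites were selected, possible displacement of crime to neighbouring areas, or changes in recording practices; hence the need for published, methodologically transparent studies.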

Accountability: Finally, the paper will explore the extent to which current practices align with traditional standards of accountability in policing. It will argue that, in the case of algorithmic tools, accountability can only be maintained where there is transparency about the data deployed, the tools used, the assumptions implicit in the process, and the effectiveness of predictions. Such transparency would need to operate both within the organisation (so that those making deployment decisions based on predictive algorithms understand the limitations of the tools they are using) and externally (either to the general public or to an independent oversight body). Given the current state of play in multiple jurisdictions, the lack of transparency in both respects undermines accountability. The paper also explores the extent to which greater transparency would affect the effectiveness of predictive tools.