You are a Political Junkie and Felon Who Loves Blenders: Recovering Motives from Machine Learning

Christian Sandvig
Short bio: 

Christian Sandvig is Associate Professor in the School of Information, the Center for Political Studies, and the Department of Communication Studies at the University of Michigan. Sandvig is a social scientist and a computer programmer. His current research investigates the negative consequences of algorithmic decision-making by computer systems that curate information. He recently proposed a system for auditing algorithms that was discussed in Slate and The Washington Post, was part of the team that examined Facebook’s news feed curation algorithm, maintains the reading list on auditing algorithms, and is currently working on a book on this subject under contract to Yale University Press. Sandvig's research has appeared in The Economist, Le Monde, The New York Times, The Sydney Morning Herald, National Public Radio (US), and elsewhere. Sandvig has written for Wired and The Huffington Post. Before moving to Michigan, Sandvig taught at the University of Illinois and Oxford University. His work has been funded by the US National Science Foundation, the MacArthur Foundation, and the Social Science Research Council. He has consulted for Intel, Microsoft, and the San Francisco Public Library. He is a graduate of the University of California, Davis (BA summa cum laude) and Stanford University (MA and PhD).

Abstract: 

It has often been useful to speak of technological systems as having designers. Indeed, to say that a system was “designed” in the first place implies a motive behind its operation, and a “design” implies a process to implement that motive. In this paper, I recall the idea of the algorithm as a kind of plan (after Suchman). I then argue that making some decision-making algorithms more accountable by attributing a design and motive to them may never work. I start by distinguishing what Weizenbaum called “long programs” in the 1970s from today’s machine learning (ML) algorithms, dismissing the idea that complexity alone creates a problem of accountability. After discussing the interpretability vs. accuracy trade-off in ML, I consider a series of cases where an ML system may produce quixotic and unexpected results because of the tendency toward “promiscuous association.” Examples include racial discrimination in advertising and serendipity in music recommendations. I argue that promiscuous associations are now a fundamental feature of algorithmic decision-making, and that characteristics of ML make some kinds of plans more difficult. I bring the legal concept of the “protected class” to a discussion of ML and find an awkwardness in understanding previous categories of any kind within new algorithmic systems. I argue that ML in online platforms is a practice of what Hacking called “kind-making” and explore the consequences of this. While some have portrayed ML as inherently more mystical or incomprehensible than non-ML algorithms, I instead conclude that the optimization goals employed in ML offer important access to motive that can make some algorithmic systems easier to understand than earlier systems, if we abandon the hope of understanding process in a traditional way. I observe that in some situations motive, intent, and/or process are not recoverable, leaving only results.
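
To make the abstract's closing contrast concrete, the following is a minimal sketch (not drawn from the paper; the library, feature names, and data are all invented for illustration) of how an ML system's optimization goal can be explicit and inspectable even when its learned decision process is not:

# Minimal sketch, assuming scikit-learn and NumPy are available. All features,
# labels, and numbers are hypothetical, invented only to illustrate the point
# that the stated objective ("motive") is legible while the fitted process is not.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical behavioral features a platform might log:
# column 0: visits to political news sites, column 1: blender purchases,
# column 2: late-night browsing sessions.
X = rng.poisson(lam=[3.0, 0.5, 2.0], size=(1000, 3))

# Hypothetical target: whether a user clicked a particular ad. In practice the
# analyst never sees the rule that generated the labels.
y = (0.4 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(size=1000) > 2.5).astype(int)

# The "motive" is explicit: the model is fit to minimize a stated loss on click
# prediction. This optimization goal is written down and can be audited.
model = GradientBoostingClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# The "process" is not: hundreds of small trees encode whatever associations
# reduced the loss, including promiscuous ones (e.g., blender purchases standing
# in for some unobserved trait). We can read off relative feature importances,
# but not a designer's plan.
print("optimization goal:", model.loss)
print("learned feature importances:", model.feature_importances_)

In this sketch, interrogating the fitted model yields only aggregate traces of what it learned, while the objective it was trained to optimize remains fully readable: roughly the asymmetry between motive and process the abstract describes.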