People

Presenters

Kath Albury
Bio:

Kath Albury is an Associate Professor in the School of Arts and Media at UNSW. Her current research projects focus on young people’s practices of digital self-representation, and the role of user-generated media (including social networking platforms) in young people’s formal and informal sexual learning. Kath currently leads the Australian Research Council (ARC) Centre of Excellence for Creative Industries and Innovation’s ‘Health Narratives’ node. Since 2001, she has been a Chief Investigator on four ARC Discovery projects, and has led an ARC Centre of Excellence project and an ARC Linkage project. Her research has involved collaborations with a range of government and non-government organisations, including the NSW Health Department, the AIDS Council of NSW, Family Planning NSW and Queensland, and Rape and Domestic Violence Services Australia. Kath is a co-author of The Porn Report (MUP, 2008).


Hook-up apps: regulation, resistance and re-use in big data cultures (with Jean Burgess, Kane Race, Ben Light & Rowen Wilken)

Abstract:

With the rise of smartphone use, it has been argued that ‘unlocated information will cease to be the norm’ (Gordon & de Souza e Silva 2011: 19) and that location will become a ‘near universal search string for the world’s data’ (20). Dating and hook-up apps are significant in this context in that geo-locational information is crucial to user interface design, the software sorting that occurs within apps, and the follow-up actions of app users. Despite their wide adoption and economic importance, dating apps have received less attention in communication, media, and cultural studies than other facets of mobile location-based communications. Yet dating apps offer rich insights for the study of communications, cultural practice, media economics, and media, communications and public health policy. Further, the ethics and politics of apps such as Tinder and Grindr are regular topics of discussion in popular digital media forums, and sexual-health-related policy guidance in relation to hook-up app culture is already emerging. This paper offers a research agenda for inquiry into this evolving field by exploring three thematic questions. Firstly, how are people, places and things made visible/defined by the internal and external regulation of dating and hook-up apps? How do in-app Terms of Service (relating, for example, to age limits or permitted content) define practices of use, and how do app developers use data analytics in dialogue with regulatory systems (Ridder 2014)? Secondly, what are the sociotechnical aspects of the use of dating apps? How do design features and embedded ‘decision support’ functionality interweave with, and shape, user activities? How do developers work with user-generated data to create ‘premium’ (subscription) services within ‘free’ apps? Finally, how do users engage with apps? How do users deploy data analytics when seeking intimate partners? What cultures of vernacular etiquette and ethics are emerging with app use? How are users ‘gaming’ apps’ data-gathering features (for example, by creating new Facebook profiles to link to Tinder accounts, or deploying third-party apps such as ‘Fake My Location’ to evade geo-locative tracking)? It is clear that the social and economic implications of locative media are significant (Wilken 2013, 2014), but they are yet to be explored in relation to location-based dating apps. The agenda we put forward in this paper represents a step towards a deeper understanding of this important aspect of contemporary digital culture.


Lyria Bennett Moses
Bio:

Lyria Bennett Moses is Associate Professor and Director of Learning and Teaching in the Faculty of Law at UNSW Australia, Chair of the Australia Chapter of the IEEE Society on the Social Implications of Technology (SSIT) and Academic Co-director of the Cyberspace Law and Policy Community. Lyria’s research explores issues around the relationship between technology and law, including the types of legal issues that arise as technology changes, how these issues are addressed in Australia and other jurisdictions, the application of standard legal categories such as property in new socio-technical contexts, the use of technologically-specific and sui generis legal rules, and the problems of treating “technology” as an object of regulation. She is currently a key researcher on the Data to Decisions Cooperative Research Centre, where she is involved in a comparative project exploring the technological frames and approaches of different stakeholders relating to “Big Data” technologies. Recent publications include ‘Using Big Data for Legal and Law Enforcement Decisions: Testing the New Tools’ (2014) 37(2) University of New South Wales Law Journal 643 and ‘Is Big Data challenging criminology?’ (2015) Theoretical Criminology (forthcoming) (both co-authored with Janet Chan) and ‘How to Think about Law, Regulation and Technology: Problems with "Technology" as a Regulatory Target’ (2013) 5(1) Law, Innovation and Technology 1. Lyria is a graduate of UNSW (BSc (Hons), LLB) and Columbia University (LLM, JSD); she holds the University Medal in Pure Mathematics and was an Associate in Law at Columbia University.


Algorithmic Prediction in Policing: 
Assumptions, Evaluation, and Accountability (with Janet Chan)

Abstract:

The goal of predictive policing is to forecast where and when crimes will take place in the future. In less than a decade since its inception, the idea has captured the imagination of police agencies around the world. An increasing number of agencies are purchasing software tools that claim to help reduce crime by mapping the likely locations of future crime to guide the deployment of police resources. Yet the claims and promises of predictive policing have not been subject to critical examination. This paper will provide a long overdue review of the available literature on the theories, techniques and assumptions embedded in various predictive tools. Specifically, it highlights three key issues about the use of algorithmic prediction in policing that researchers and practitioners should be aware of:

Assumptions: The historic data mined by algorithms used to predict crime do not reveal the future by themselves. The algorithms used to gain predictive insights build on assumptions about accuracy, continuity, the irrelevance of omitted variables, and the primary importance of particular information (such as location) over others. In making decisions based on these algorithms, police are also directed towards particular kinds of decisions and responses to the exclusion of others. Understanding the assumptions inherent in predictive policing is crucial in critiquing the notion of data-based decision making as “scientific” in the sense of “value-free”.

Evaluation: Figures quoted by vendors of these technologies in the media imply that they are successful in reducing crime. However, these figures are not based on published evaluations, their methodologies are unclear, and their relevance to notions of success is assumed rather than analysed. While some evaluations have been conducted, and a high-quality evaluation is underway, there is currently insufficient evidence of the effectiveness of predictive policing programs.

Accountability: Finally, the paper will explore the extent to which current practices align with traditional standards of accountability in policing. It will argue that, in the case of algorithmic tools, accountability can only be maintained where there is transparency about the data deployed, the tools used, the assumptions implicit in the process and the effectiveness of predictions. Such transparency would need to be both within the organisation (so that those making deployment decisions based on predictive algorithms understand the limitations of the tools they are using) and externally (either to the general public or to an independent oversight body). Given the current state of play in multiple jurisdictions, the lack of transparency in both respects undermines accountability. The paper also explores the extent to which greater transparency would impact effectiveness.


Kathy Bowrey
Bio:

Kathy’s expertise primarily relates to intellectual property, media and information technology regulation, reflecting a broad range of interests in socio-legal history, media and cultural studies, and legal theory. She also researches western laws affecting Indigenous cultural and intellectual property.


Speaking of us, about us and for us: Data, Identity Politics, Law & Cultural Practice

Abstract:

This paper looks at a little-explored area of the state we are in: what is it like to be the subject of an archive in which information is taken to circumscribe your identity, and what obstacles come into play when the subject seeks to challenge or disrupt the narrative developed from that information?

Reflecting upon personal experience with the management of records pertaining to Aboriginal and Torres Strait Islander Peoples in Australia, I plot the collision of disciplinary intuitions and intellectual property concepts that free up information flows and authorise others to speak of, about and for the subject. Whilst colonial archives are in some ways exceptional, my discussion will highlight how data management processes, archival practices and intellectual property concepts combine to support a teleology of the copy. In so doing I make a preliminary sketch of a much larger political problem created by the right to document, copy and disseminate information about others. My examples show the inherent difficulties in disrupting these dynamics — to change the dialogue, to ask different questions, to show fuller respect to the subjects of these texts — once an identity has already been framed by information, data and texts that speak of them, about them, and for them.


Jean Burgess
Bio:

Jean Burgess is Director of the QUT Digital Media Research Centre (DMRC) and Associate Professor of Digital Media in the Creative Industries Faculty at Queensland University of Technology, Australia. She is an expert in digital media, with a focus on the everyday uses and politics of social and mobile media platforms, as well as new digital methods for studying them. She was awarded an Australian Research Council (ARC) Postdoctoral Fellowship for the ARC Discovery Project ‘New Media and Public Communication’ (2010-2013) and is a Chief Investigator on the ARC Linkage Projects ‘Digital Storytelling and Co-Creative Media’ (2011-2014) and ‘Social Media in Times of Crisis’ (2012-2015). Her books are YouTube: Online Video and Participatory Culture (Polity Press, 2009), Studying Mobile Media: Cultural Technologies, Mobile Communication, and the iPhone (Routledge, 2012), A Companion to New Media Dynamics (Wiley-Blackwell, 2013), and Twitter and Society (Peter Lang, 2014). Over the past decade she has worked with a large number of government, industry and community-based organisations, helping them address the practical opportunities and challenges of social and participatory media. She collaborates widely with international research partners in Germany, Brazil, Sweden, the UK, Canada, the USA, and Taiwan, and in 2013 she spent four months as a Visiting Researcher at Microsoft Research New England’s Social Media Collective.


Hook-up apps: regulation, resistance and re-use in big data cultures (with Kath Albury, Kane Race, Ben Light & Rowen Wilken)

Abstract:

With the rise of smartphone use, it has been argued that ‘unlocated information will cease to be the norm’ (Gordon & de Souza e Silva 2011: 19) and that location will become a ‘near universal search string for the world’s data’ (20). Dating and hook-up apps are significant in this context in that geo-locational information is crucial to user interface design, the software sorting that occurs within apps, and the follow-up actions of app users. Despite their wide adoption and economic importance, dating apps have received less attention in communication, media, and cultural studies than other facets of mobile location-based communications. Yet dating apps offer rich insights for the study of communications, cultural practice, media economics, and media, communications and public health policy. Further, the ethics and politics of apps such as Tinder and Grindr are regular topics of discussion in popular digital media forums, and sexual-health-related policy guidance in relation to hook-up app culture is already emerging. This paper offers a research agenda for inquiry into this evolving field by exploring three thematic questions. Firstly, how are people, places and things made visible/defined by the internal and external regulation of dating and hook-up apps? How do in-app Terms of Service (relating, for example, to age limits or permitted content) define practices of use, and how do app developers use data analytics in dialogue with regulatory systems (Ridder 2014)? Secondly, what are the sociotechnical aspects of the use of dating apps? How do design features and embedded ‘decision support’ functionality interweave with, and shape, user activities? How do developers work with user-generated data to create ‘premium’ (subscription) services within ‘free’ apps? Finally, how do users engage with apps? How do users deploy data analytics when seeking intimate partners? What cultures of vernacular etiquette and ethics are emerging with app use? How are users ‘gaming’ apps’ data-gathering features (for example, by creating new Facebook profiles to link to Tinder accounts, or deploying third-party apps such as ‘Fake My Location’ to evade geo-locative tracking)? It is clear that the social and economic implications of locative media are significant (Wilken 2013, 2014), but they are yet to be explored in relation to location-based dating apps. The agenda we put forward in this paper represents a step towards a deeper understanding of this important aspect of contemporary digital culture.


Janet Chan
Bio:

Janet is a multidisciplinary scholar with research interests in criminal justice policy and practice, sociology of organisation and occupation, and the social organisation of creativity. She is internationally recognised for her contributions to policing research, especially her work on police culture and socialisation, police reform, and the use of information technology in policing. Her major publications in this field include Changing Police Culture (Cambridge University Press 1997) and Fair Cop: Learning the Art of Policing (University of Toronto Press 2003). Janet has been awarded a number of major grants for criminological and sociolegal research, ranging from policing, juvenile justice, restorative justice, and work stress and wellbeing of lawyers, to projects on Big Data analytics for national security and law enforcement. Since 2004 she has established a major research program on creativity and innovation, studying the creative practices of visual artists, research scientists and art-technology collaborations. She is co-editor of Creativity and Innovation in Business and Beyond (Routledge 2011) and Handbook of Research on Creativity (Edward Elgar 2013). Janet was elected Fellow of the Academy of Social Sciences in Australia in 2002 for distinction in research achievements.


Algorithmic Prediction in Policing: Assumptions, Evaluation and Accountability (with Lyria Bennett Moses)

Abstract:

The goal of predictive policing is to forecast where and when crimes will take place in the future. In less than a decade since its inception, the idea has captured the imagination of police agencies around the world. An increasing number of agencies are purchasing software tools that claim to help reduce crime by mapping the likely locations of future crime to guide the deployment of police resources. Yet the claims and promises of predictive policing have not been subject to critical examination. This paper will provide a long overdue review of the available literature on the theories, techniques and assumptions embedded in various predictive tools. Specifically, it highlights three key issues about the use of algorithmic prediction in policing that researchers and practitioners should be aware of:

Assumptions: The historic data mined by algorithms used to predict crime do not reveal the future by themselves. The algorithms used to gain predictive insights build on assumptions about accuracy, continuity, the irrelevance of omitted variables, and the primary importance of particular information (such as location) over others. In making decisions based on these algorithms, police are also directed towards particular kinds of decisions and responses to the exclusion of others. Understanding the assumptions inherent in predictive policing is crucial in critiquing the notion of data-based decision making as “scientific” in the sense of “value-free”.

Evaluation: Figures quoted by vendors of these technologies in the media imply that they are successful in reducing crime. However, these figures are not based on published evaluations, their methodologies are unclear, and their relevance to notions of success is assumed rather than analysed. While some evaluations have been conducted, and a high-quality evaluation is underway, there is currently insufficient evidence of the effectiveness of predictive policing programs.

Accountability: Finally, the paper will explore the extent to which current practices align with traditional standards of accountability in policing. It will argue that, in the case of algorithmic tools, accountability can only be maintained where there is transparency about the data deployed, the tools used, the assumptions implicit in the process and the effectiveness of predictions. Such transparency would need to be both within the organisation (so that those making deployment decisions based on predictive algorithms understand the limitations of the tools they are using) and externally (either to the general public or to an independent oversight body). Given the current state of play in multiple jurisdictions, the lack of transparency in both respects undermines accountability. The paper also explores the extent to which greater transparency would impact effectiveness.


Louis De Koker
Bio:

Bank customer data: The impact of dataveillance in support of national security

Abstract:

Prior to the 1980s, bank-customer relationships were confidential, private commercial relationships. Banks collected and retained customer information to the extent that such information was operationally or commercially relevant. To combat serious crime, and later also terrorism and other high-profile crimes, the global community, via the Financial Action Task Force and the Basel Committee on Banking Supervision, adopted standards that turned banks into surveillance agents of the state. In this role they have to collect customer details, monitor customer behaviour (so-called “dataveillance”), refuse relationships to customers posing an unacceptably high risk of money laundering and terrorist financing, and confidentially report suspicious transactions to national financial intelligence units. Analytical programs are therefore trawling through the customer and transactional records of large institutions, reporting millions of transactions automatically to authorities and flagging others for closer inspection and potential reporting by compliance officers. Viewed through the lens of Shoshana Zuboff’s concept of “surveillance capitalism”, this appropriation of customer data, private relationships and surveillance labour for national security surveillance is ironic.

Risk assessment and risk mitigation lie at the heart of the current public/private bank-customer relationship. Innocent customers whose data profiles are incomplete, and low-volume businesses that are incorrectly assessed as posing a higher risk, can find themselves excluded from the formal financial system. Such exclusion has caused concern, attracted the attention of the United Nations, G20 and World Bank, and led to changes in the international standards. Recognising the impact that a lack of reliable identity verification data may have on data-poor customers, many developing countries are embracing new biometric national identification programs. In India this program is linked directly to accessing bank accounts, and in Nigeria the national identity card is issued with Mastercard branding. Crime risk assessment models are, however, not well developed, and simply generating, collecting and retaining more identity verification data on customers may solve fewer problems than it creates if risk modelling remains flawed.

This paper will reflect on the complexities surrounding the appropriation of an inherently private relationship and commercial data for the agency surveillance model that underpins the global anti-money laundering and counter-terrorist financing system. It will consider the impact of such a model on the nature of the banking system as a public good and investigate the effect of uneven data on low risk user groups (such as women) and high risk customers (such as money remitters). Against that backdrop the paper will consider whether digital financial services will provide solutions or more reason for concern. Some of these services are data-rich and lend themselves to Big Data analytics while others, such as crypto-currencies, provide alternative mechanisms for criminals and those who rebel against the pervasive financial dataveillance.


Ben Hurlbut
Bio:

Ben Hurlbut is Assistant Professor in the School of Life Sciences at Arizona State University in the United States. He is trained in the history of modern biomedical and life sciences. His research lies at the intersection of science and technology studies, bioethics and political theory. He studies the changing relationships between science, politics and law in the governance of biomedical research and innovation in the 20th and 21st centuries. Focusing on controversy around morally and technically complex problems in areas such as human embryonic stem cell research and genomics, Hurlbut examines the interplay of science and technology with shifting notions of democracy, of religious and moral pluralism, and of public reason. He is currently completing a book on human embryo research and public bioethics in the United States. He holds an AB from Stanford University, and a PhD in the history of science from Harvard University.


Assembling Knowledge Without Borders: The Science of Pandemic Preparedness

Abstract:

“Like science, emerging viruses know no country.” This is the first line of a book that twenty years ago helped to put emerging infectious disease on the global health map. Two decades later, the potentially catastrophic futures it imagined have come to animate a regime of global viral surveillance that aspires to anticipate and detect emerging viruses with pandemic potential, in effect approaching pandemic control as a science of global data analytics that, like viruses, “knows no country.” This regime aims to produce an informatic picture that renders conditions of risk and response epistemically commensurable across global space: heterogeneous social, economic and political forms of life are epistemic noise, obstacles to be overcome in producing a scientifically legible global biogeography. Yet a world that is governable through data requires a world that cooperates in data collection and participates in the imaginary of governance that it represents. This paper looks beneath the (aspirational) epistemic regime of viral surveillance to examine the political norms and relationships that its imperatives of data collection are calling into being. I argue that the imagination of the arena of pandemic risk as global in nature—as knowing no country—and the corollary that risk can only be known and governed in the universalist idiom of a science that likewise “knows no country” have been advanced as a kind of global constitutionalism. I examine controversy around scientific access to H5N1 flu virus genomic data, showing how the norms of scientific practice have displaced equity claims rooted in national interest and sovereignty. I argue that the normative and jurisdictional contours of global risk are not merely the political expression of risk knowledge—what Ulrich Beck has described as “forced cosmopolitanization”—but are taking form as global constitutional realignments around scientifically authorized imaginations of future risk and corollary tensions between scientific and political sovereignty.


Sheila Jasanoff
Bio:

Sheila Jasanoff is Pforzheimer Professor of Science and Technology Studies at the Harvard Kennedy School. A pioneer in her field, she has authored more than 100 articles and chapters and is author or editor of a dozen books, including Controlling Chemicals, The Fifth Branch, Science at the Bar, and Designs on Nature. Her work explores the role of science and technology in the law, politics, and policy of modern democracies, with particular attention to the nature of public reason. She was founding chair of the STS Department at Cornell University and has held numerous distinguished visiting appointments in the US, Europe, and Japan. Jasanoff served on the Board of Directors of the American Association for the Advancement of Science and as President of the Society for Social Studies of Science. Her grants and awards include a 2010 Guggenheim Fellowship and an Ehrenkreuz from the Government of Austria. She holds AB, JD, and PhD degrees from Harvard, and an honorary doctorate from the University of Twente.


Virtual, Visible, and Actionable: Data Assemblages and the Axiology of Justice

Abstract:

Numbers and justice have long kept company, as the paired words counting and accounting attest. If you can count something, you can also account for it. Enumerating is an instrument of holding accountable, whether for financial transactions in a company, police brutality in a community, health disparities in a country, or the world’s changing climate. Inevitably, then, today’s explosion of data, an offshoot of the digital revolution, has created new conjunctions between numbers and norms, enabling phenomena to be counted and acted on, especially at the global level, that once would have escaped notice because they were too dispersed, too jurisdictionally discrete, too intangible, and hence not anyone’s business to call to anyone else’s attention.

Early commentary on the social justice implications of the data age has oscillated between celebratory, focused on bringing formerly invisible clusters of injustice to light, and cautionary, focused on the loss of control that results from an individual being turned into data points, whether through unwitting transfers of personal information to big corporations or through invasions of privacy and errors of classification by institutions of governmentality. In this paper, I follow instead the way the visual representation of data affects the balance between the collectively seen and the collectively not seen, and constitutes divergent positions of privileged seeing, as captured in the organizing question: “What is made discernible in these practices and what becomes imperceptible?” In effect, this is a paper about the politics and expert practices of using data as a basis for collective witnessing at the global level.

The paper begins with a theoretical exposition of three well-established modes of collective seeing, each represented in the political cultures of sovereign states and each associated with its own legitimation practices, including discourses of valid seeing and forms of authorized expertise. These three positions are the view from nowhere (based on discourses of objectivity and enumeration), the view from everywhere (based on discourses of empiricism and experimentalism), and the view from somewhere (based on discourses of authenticity and witnessing). What counts as good data in each of these regimes depends on prior normative choices about such things as what is worth counting, who has authority to collect and compile data, and what forms of analysis and demonstration are found persuasive.

Building on this base, I trace the ways in which the “global environment” emerged as an actionable object for law and policy in the last quarter of the 20th century, giving rise to new modes of accounting and accountability. I look at climate change as a specific example of a global environmental phenomenon constituted through data, and the proliferation of data institutions and instruments underwriting this issue of global common concern. Using contrasting examples from US, European, and Indian environmentalism, I explore what is at stake when different modes of “seeing” the data on climate change come into contact and conflict. Whose seeing counts, under which rules of the game, and what gets relegated to the margins of invisibility and inaction?


Fleur Johns
Bio:

Fleur Johns is Professor in the Faculty of Law at UNSW Australia, working in public international law and legal theory. She is the author of Non-Legality in International Law: Unruly Law (Cambridge University Press, 2013; paperback 2015) and The Mekong: A Socio-legal Approach to River Basin Development (co-authored with Ben Boer, Philip Hirsch, Ben Saul and Natalia Scurrah, Earthscan/Routledge, forthcoming Dec. 2015). Fleur is also the editor of Events: The Force of International Law (Routledge-Cavendish 2011; co-edited with Sundhya Pahuja and Richard Joyce) and International Legal Personality (Ashgate 2010). Currently, Fleur is in the early stages of a new project exploring the use of data analytics in global governance and related configurations of lawful association, for which she will commence fieldwork in August 2015; a representative early-stage publication is ‘Global Governance through the Pairing of List and Algorithm’ 33 Environment and Planning D: Society and Space. Before joining UNSW, Fleur was Co-Director of the Sydney Centre for International Law and an Associate Professor of Law at the University of Sydney. Fleur has been a Distinguished Visiting Professor at the University of Toronto Faculty of Law, a Visiting Fellow at the European University Institute and a Leverhulme Visiting Fellow at Birkbeck College, University of London. Fleur is a graduate of the University of Melbourne (BA, LLB (Hons)) and Harvard Law School (LLM, SJD); at Harvard, she was a Menzies Scholar and Laylin Prize recipient. Before commencing her academic career, Fleur practised as a corporate lawyer for six years with Sullivan & Cromwell LLP in New York, specialising in international project finance.


Data, Detection and the Redistribution of the Sensible in International Law

Abstract:

International legal institutions and doctrines have long relied on certain claims and capacities of perception. The authority of law on the global plane depends upon the prospect of legal agents, occupying designated sensory and attestive roles, detecting and acting on worldly phenomena. Increased recourse to remote sensing and automated mining and analysis of data is, however, effecting a redistribution of the sensible in international legal order, introducing new pathways of mediation and patterns of assembly. This poses challenges for the authority of law and the conduct of relations globally. This paper sets out to explore those challenges by examining, in particular, changing juridical techniques surrounding the global movement of certain weapons and weapon-grade material and the mass-movement of persons. How, it asks, might these techniques be affecting the global sensorium that international legal work has sought to maintain? And what may be their ramifications for the exercise and politics of governance globally?


Daniel Joyce
Bio:

Daniel Joyce is a Lecturer in the Faculty of Law at UNSW Australia, having previously worked as a solicitor for the Office of the Director of Public Prosecutions in NSW and volunteered for human rights NGOs. Daniel is a graduate of the ANU (BA (Hons), LLB (Hons)) and the University of Cambridge (LLM, PhD); at Cambridge, he was the Whewell Scholar in international law and a Senior Rouse Ball Student at Trinity College. Daniel later undertook postdoctoral research as the Erik Castrén Fellow in international law and human rights at the University of Helsinki, where he remains an Affiliated Research Fellow. His main research and teaching interests are in international law and in media law - specifically the development of international media law and the mediatization of international law. He also continues to research and publish in human rights and international legal theory. He is especially interested in the connections between media and human rights and in the digital rights movement. He is working on a longer term project with Dr Jessie Hohmann of Queen Mary, London on 'International Law's Objects'. Daniel has been a visiting research fellow at the Lauterpacht Centre for International Law at the University of Cambridge and at Columbia Law School and is a Laureate of the Junior Faculty Forum for International Law in 2014. Daniel is admitted and practises as a barrister in New South Wales. Daniel is the author (with David Rolph, Matt Vitins and Judith Bannister) of Media Law: Cases, Materials and Commentary, Second Edition (Oxford University Press, Forthcoming 2015). Recent articles include ‘Media Witnesses: Human Rights in an Age of Digital Media’ (2013) 8 Intercultural Human Rights Law Review 232.


Data Associations and the Protection of Reputation

Abstract:

Defamation law seeks to protect reputation, and to balance that interest with freedom of expression. Defamation law in Australia has traditionally cast a wide net in terms of the elements of its cause of action: publication, identification and defamatory meaning. In recent years, cases involving online defamation have reached the courts. There is widespread public uncertainty about the potential risks in terms of liability for defamation in digital media contexts. But the answer from the courts appears to be fairly clear - online publication will generally be treated in similar fashion to traditional forms of publication. At face value this seems uncontroversial, even if not yet widely understood. What is more contentious is the question of who, if anyone, should bear the responsibility for digital forms of defamatory publication which result not from an individual author’s activity online, but rather from algorithmic associations – most commonly in the form of a search engine result or hit.

This paper seeks to analyse the case law on this question and the literature which has emerged. Rather than focusing on the question of whether online publication is publication, or on associated questions regarding jurisdiction and private international law, I will focus on the challenge posed by defamatory data associations to underlying rationales for the protection of reputation. Defamation law is said to be grounded in a variety of rationales – from honour to dignity to sociality. Does the case of defamatory data associations further unsettle these rationales, or provide a new way of thinking about the protection of reputation and the traditional principles of defamation law? Is automaticity a conceptual hurdle for defamation law, and does it herald the need for thinking differently about how we protect reputation and why we choose to do so?


Emmanuel Letouzé
Bio:

Emmanuel Letouzé is the Director and co-Founder of Data-Pop Alliance on Big Data and Development, co-created by the Harvard Humanitarian Initiative (HHI), MIT Media Lab, and the Overseas Development Institute (ODI), where he is respectively a Fellow, a Visiting Scholar, and a Research Associate. He is also a PhD Candidate in Demography at UC Berkeley (ABD, finishing this year) and a Non-Resident Adviser at the International Peace Institute (IPI). In 2011-12 Emmanuel worked as a Development Economist at UN Global Pulse, where he wrote UN Global Pulse's white paper “Big Data for Development". Before that he worked for UNDP in New York (2006-09) and in Hanoi, Vietnam, for the French Ministry of Finance (2000-04). Emmanuel is a graduate of Sciences Po Paris (BA, Political Science 1999, MA Economic Demography, 2000) and Columbia University SIPA (MA Economic Development, 2006) where he was a Fulbright Fellow. As a political cartoonist ("in the making since 1975") Emmanuel contributes cartoons to Rue89, a French news website, to the satirical blog Stuffexpataidworkerslike as well as illustrations for development reports and campaigns.


Applications and Implications of Algorithmic Decision-Making for Just Societies: The Case of Crime Prediction through Big Data

Abstract:

Crime is arguably one of the most salient symptoms and drivers of social fragmentation, exclusion, and disenfranchisement. Violent crimes such as homicides and rapes constitute infringements on human rights and all crimes are impediments to social progress. Given the high individual and societal benefits of reducing crime levels—including possibly ‘before they happen’—the opportunity cost of not using reliable predictive tools as they may become available and more sophisticated is significant.

The increased availability of fine-grained behavioral data is making crime prediction possible with growing accuracy. ‘Predictive policing’ is currently being used by an increasing number of police departments in large metropolises in the US, UK and a few other countries. Additionally, a nascent body of academic literature involving cell-phone and transportation data analysis has emerged, investigating whether, and how effectively, calling and mobility patterns can help forecast crime hotspots. These predictive models may be completely atheoretical or based on hypotheses about causal processes that may help devise public policies (e.g. the effect of features of public transportation systems, such as frequency and coverage, on crime patterns and trends).

At the same time, the development and deployment of these methods and tools raise a number of hard ethical questions, notably around individual and group privacy—another fundamental human right. One ‘intrinsic’ facet of the problem is the fact that the informed consent of the emitters of data is typically not clearly established in such initiatives. An ‘instrumental’ aspect is the risk of profiling, harassment, and reinforcement of existing inequities and prejudices that may result from blind or overreliance on algorithmic predictions.

Against this general background, the paper will (1) take stock of the state of research and practice in the field, (2) discuss whether the use of aggregated and supposedly ‘anonymized’ datasets is a sufficient safeguard against possible harms, including unintended ones, (3) assess whether the profiling risks and reinforcement of inequality that may result from such data-driven approaches can be mitigated, and the costs of error balanced against the benefits of prediction in light of model accuracy, and (4) interrogate and suggest ethical principles and requirements, and associated legal, regulatory and institutional frameworks, that should inform future applications and developments in the field—with a focus on the need to foster opportunities for members and representatives of at-risk communities to have a voice and be actors in related dialogues, decisions and initiatives.


Sarah Logan
Bio:

Sarah Logan is a Research Fellow working in the State, Society and Governance in Melanesia Program of the Coral Bell School of Asia Pacific Affairs at the Australian National University. Sarah’s research explores the political and social impact of the Internet and mobile phones on domestic and international politics. Sarah is currently engaged in research focused on the political and social impact of such technologies in Melanesia. Sarah was a Fellow in the 2014 Milton Wolf Seminar on media and diplomacy at the Diplomatic Academy Vienna and is a member of the International Relations and Digital Technology Project of the Canadian International Council. She has been a visiting scholar in the Department of Government at the London School of Economics and in the School of International and Public Affairs at Columbia University. Sarah is co-editor of Circuit, a blog looking at international relations and information technology.


The Needle and the Damage Done: of Haystacks and Anxious Panopticons

Abstract:

A 2014 analysis by the New America Foundation of 225 terrorism cases inside the US since 9/11 concluded that the bulk collection of phone records by the National Security Agency (NSA) has had no discernible impact on preventing acts of terrorism. Indeed, notable failures exist where data has been collected by surveillance mechanisms but appropriate action has failed to take place because relevant information has been lost in the ever-growing haystack of (big) data which accompanies such surveillance. This paper asks: how should we characterise these failures of analysis in the context of big data generated by global surveillance? Drawing on official reviews of the intelligence failures in the Boston bombings and the failed Detroit airline plot of 2009, and on debates in intelligence and security studies, the paper argues that these failures demand a fundamental reconfiguration of the concept of powerful, all-knowing state-led surveillance which dominates discussion of surveillance-generated big data in the post-Snowden age. The paper analyses the phenomenon through the metaphor of the panopticon, showing that current realities conform neither to the Foucauldian account nor to more recent scholarly attempts to move the concept forward into an age of big data. It argues that such data and the security apparatus associated with it exist in an age of infinite rather than simply ‘big’ data, given the nature of the risk environment in which they operate. This generates an anxious rather than all-powerful panopticon, paralysed by the very nature of the data it collects, with associated effects on decision-making in security bureaucracies.


Alana Maurushat
Bio:

Alana Maurushat is Senior Lecturer in the Faculty of Law at UNSW Australia and Co-Academic Director of UNSW Law's Cyberspace Law and Policy Community. Dr. Maurushat has spent more than a decade working in Hong Kong, France, the United States, Canada and Australia in the fields of intellectual property, information technology law and cybercrime/cybersecurity. Dr. Maurushat is on the Board of Directors of the Internet Fraud Watchdog and a key researcher in the Cooperative Research Centre (CRC) Data to Decision – Big Data and National Security, 2014-2019.



Annelise Riles
Bio:

Annelise Riles is the Jack G. Clarke Professor of Law in Far East Legal Studies and Professor of Anthropology at Cornell, and she serves as Director of the Clarke Program in East Asian Law and Culture. Her work focuses on the transnational dimensions of laws, markets and culture across the fields of comparative law, conflict of laws, the anthropology of law, public international law and international financial regulation. Her most recent book, Collateral Knowledge: Legal Reasoning in the Global Financial Markets (University of Chicago Press, 2011), is based on ten years of fieldwork among regulators and lawyers in the global derivatives markets. Her article Managing Regulatory Arbitrage: A Conflict of Laws Approach, published in the Cornell International Law Journal in March 2014, explores what conflict of laws can contribute to global financial regulation. Her first book, The Network Inside Out, won the American Society of International Law's Certificate of Merit for 2000-2002. Her second book, Rethinking the Masters of Comparative Law, is a cultural history of Comparative Law presented through its canonical figures. Her third book, Documents: Artifacts of Modern Knowledge, brings together lawyers, anthropologists, sociologists and historians of science. Professor Riles has conducted legal and anthropological research in China, Japan and the Pacific and speaks Chinese, Japanese, French, and Fijian. She has served as a visiting Professor at Yale, the University of Tokyo, the London School of Economics and the University of Melbourne, and as a visiting researcher at the Bank of Japan. She is the founder and director of Meridian 180, a virtual think tank on Pacific Rim issues. She also writes about financial markets regulation on her blog, http://blogs.cornell.edu/collateralknowledge/.



Christian Sandvig
Bio:

Christian Sandvig is Associate Professor in the School of Information, the Center for Political Studies, and the Department of Communication Studies at the University of Michigan. Sandvig is a social scientist and a computer programmer. His current research investigates the negative consequences of algorithmic decision-making by computer systems that curate information. He recently proposed a system for auditing algorithms discussed in Slate and The Washington Post, was part of the team that examined Facebook’s news feed curation algorithm, keeps the reading list on auditing algorithms, and he is currently working on a book on this subject under contract to Yale University Press. Sandvig's research has appeared in The Economist, Le Monde, The New York Times, The Sydney Morning Herald, National Public Radio (US), and elsewhere. Sandvig has written for Wired and The Huffington Post. Before moving to Michigan, Sandvig taught at the University of Illinois and Oxford University. His work has been funded by the US National Science Foundation, the MacArthur Foundation, and the Social Science Research Council. He has consulted for Intel, Microsoft, and the San Francisco Public Library. He is a graduate of University of California, Davis (BA summa cum laude) and Stanford University (MA and PhD).


You are a Political Junkie and Felon Who Loves Blenders: Recovering Motives from Machine Learning

Abstract:

It has often been useful to speak of technological systems as having designers. Indeed, to say that a system was “designed” in the first place implies a motive behind its operation, and a “design” implies a process to implement that motive. In this paper, I recall the idea of the algorithm as a kind of plan (after Suchman). I then argue that making some decision-making algorithms more accountable by attributing a design and motive to them may never work. I start by distinguishing what Weizenbaum called “long programs” in the 1970s from today’s machine learning (ML) algorithms, dismissing the idea that complexity alone creates a problem of accountability. After discussing the interpretability vs. accuracy trade-off in ML, I consider a series of cases where an ML system may produce quixotic and unexpected results because of the tendency toward “promiscuous association.” Examples include racial discrimination in advertising and serendipity in music recommendations. I argue that promiscuous associations are now a fundamental feature of algorithmic decision-making, and that characteristics of ML make some kinds of plans more difficult. I bring the legal concept of the “protected class” to a discussion of ML and find that new algorithmic systems sit awkwardly with pre-existing categories of any kind. I argue that ML in online platforms is a practice of what Hacking called “kind-making” and explore the consequences of this. While some have portrayed ML as inherently more mystical or incomprehensible than non-ML algorithms, I instead conclude that the optimization goals employed in ML offer important access to motive that can make some algorithmic systems easier to understand than earlier systems, if we abandon the hope of understanding process in a traditional way. I observe that in some situations motive, intent, and/or process are not recoverable, leaving only results.


Naveen Thayyil
Bio:

Naveen Thayyil is a member of the Humanities and Social Sciences Department at the Indian Institute of Technology, where he is an Assistant Professor in Law and Public Policy with a special emphasis on Science, Technology and Society (STS) issues. Prior to this, he taught at the National Law School of India, Bangalore. He holds a PhD from the Tilburg Institute for Law, Technology, and Society at the University of Tilburg, the Netherlands. He was a Felix Scholar in 2006-07, when he pursued his Master’s (LLM) at the University of London – jointly at SOAS, University College London and King’s College London. Subsequent to his graduation from the National Law School of India, Bangalore in 2002, he practised public law in the Supreme Court and the High Court at Delhi. Naveen’s research interests lie at the intersection of three fields: legal and political theory, environmental law and technology regulation. His interests lie not only at the level of public policy, viz., issues of the regulation of technology for the protection of public health, the environment and related rights that seek to democratise society, but also in theorising and understanding how the categories of law, technology and society shape each other. His publications include the 2014 book Law, Technology and Public Contestations in Europe: Biotechnology Regulation and GMOs (Edward Elgar, Cheltenham, U.K.).


Shifts in Public Reason: Dangers and Data Associations

Abstract:

The increasing aggregation of data and its analysis in risk regulation, across disciplines and continents, poses a peculiar problem for the liberal claim to sovereignty, particularly when transparency, openness, accountability and public participation are implicitly offered as palliatives to augment inadequacies in laws’ representational claims. Protection of society from dangers - be they moral, affective or real - has often been identified as a fundamental justification for liberal legal authority (Foucault 1975; Devlin 1965). In the realm of environmental law, for instance, the expectation that the modern state protect human health, the environment and vulnerable groups can be identified as a primary justification in liberal accounts of legal authority and citizenship (Lupton 1999; Kemshall 2002). The distinctive shifts in the performative modes of such accounts of sovereignty, for instance from an earlier emphasis on the principle of protection towards a stated preference for precautionary approaches in certain jurisdictions of environmental law, sharpen a particular difficulty for liberal sovereignty posed by large-scale aggregation and data association. Once (environmental) risk regulation is recognized as a very abstract probabilistic tool of governance, and not merely as something intrinsically real (O’Malley 2004; Douglas 1992), the use of large-scale aggregations of data in risk regulation complicates the normative legitimations of liberal law through its classical claim of expert construction of risk as representation. The truth claims of causality in risk regulation are achieved through a high level of aggregation of data across disciplines by a ‘simplification of multivalent complexities to simple parameters of likelihood and magnitude, and subsequent aggregation across highly diverse dimensions, contexts and etiologies’ (Stirling 2008).
The representational claim of law is based on this ensuing production of an apparently transcendent quantitative idiom by techno-scientific experts, labelled as objective risk analysis. In contrast, scientists in risk assessment can be seen as entering the public arena as ‘experts who are part of a complex rhetoric and political system, as opposed to experts on scientific truths, as truth speaking to power in a traditional picture’ (Funtowicz and Ravetz 1993; Hagendijk 2004; Kastenhofer 2011). Despite the avowed potential of the precautionary principle to challenge monopolistic epistemic claims in risk frames through more reflexive and wider deliberative practices, attempts at its legal implementation in various jurisdictions have continued to be deeply steeped in expert cultures of risk (Peele 2007; Thayyil 2014). Be it in a classical paradigm of risk, or in its precautionary avatar, a large number of events can be seen to be sorted into a distribution towards making expert probabilistic predictions, the particular details of each case submerged or stripped away within a complex assemblage of elements (Dean 1999). The use of large-scale data within this assemblage has only accentuated multiple problems: the permeability between the categories of knowledge, information and data; the meaning and role of expertise; the significance of various kinds of divides; and the normativity of the data set from which the risk analysis is done, including the ethics and reliability of techniques of gathering (Boyd and Crawford 2012). All these issues may be submerged in an ever-increasing abstraction through newer techniques of bioinformatics (Jones et al. 2006), which may be in contradiction with liberal claims of transparency, openness, accountability and public participation. These emerging situations within environmental regulation require description and further interrogation, as does the question of what additional modes of public reason law may need.


Etienne Turpin
Bio:

Etienne Turpin is a philosopher studying, designing, curating, and writing about complex urban systems, political economies of data and infrastructure, aesthetics and visual culture, and Southeast Asian colonial-scientific history. In Jakarta, Indonesia, he is the director of anexact office, and the co-principal investigator, with Dr Tomas Holderness, of PetaJakarta.org. At the University of Wollongong, Australia, he is the Vice-Chancellor’s Postdoctoral Research Fellow with the SMART Infrastructure Facility, Faculty of Engineering and Information Science, and an Associate Research Fellow with the Australian Centre for Cultural Environmental Research. He is a member of the SYNAPSE International Curators’ Network of the Haus der Kulturen der Welt in Berlin, where he is the co-editor, with Anna-Sophie Springer, of the intercalations: paginated exhibition series as part of Das Anthropozän-Projekt. He is also the co-editor of Art in the Anthropocene: Encounters Among Aesthetics, Politics, Environments and Epistemologies (Open Humanities Press, 2015) and Jakarta: Architecture + Adaptation (Universitas Indonesia Press, 2013), and editor of Architecture in the Anthropocene: Encounters Among Design, Deep Time, Science and Philosophy (Open Humanities Press, 2013). Prior to his work in Asia, he taught design research at the University of California, Berkeley, the University of Michigan, and the University of Toronto.


Surviving the Laboratory City: GeoSocial Information, Data Polities and the Next 1 Billion Users (with Tomas Holderness)

Abstract:

The role of data within the context of urban governance has become as ubiquitous as the mobile devices which daily generate the petabytes of geospatial information consumed for novel urban analyses; yet, the value and purpose of data as a critical component of democratic practices has never been less clear. To address this widening gap between the laudatory role of geospatial data and its lack of democratic purpose, the paper examines four aspects of contemporary data urbanism. The authors situate their concerns within the context of an increasingly commercialized, highly-competitive sector—led by “smart city” firms, data management services, the NGOcracy, and philanthropic “open data” organizations—which drives the data rush for the monetization of urban geospatial information. The authors contend, first, that data urbanism increasingly occurs in the context of “laboratory cities.” Second, the authors claim that data are meaningless without platforms of assembly, but that the construction and maintenance of such platforms create significant challenges for community organizations, institutions, and government agencies. Third, within the context of web 2.0 modes of production and volunteered geographic information (VGI), the authors contend that data producers require techno-organizational structures for the coordination and defense of their data sets and the collective equipment, especially open source software (OSS), upon which these data depend. Fourth, the authors argue that if a meaningful coordination among and defense of VGI data and OSS tools can occur, the democratic potential of geosocial information can be more fully realized; however, such an outcome will require not only surviving, but overcoming, the latent colonial and authoritarian tendencies currently operating in the laboratory city.


Rapporteurs

Daniel Cater is a PhD research student at the University of New South Wales and research assistant working with the Data2Decisions Cooperative Research Centre. His research and work are concentrated on the nexus of technology, Big Data, surveillance and privacy issues. Daniel graduated with a Juris Doctor (Hon2:1) from UNSW, represented the university on the Concours Jean-Pictet International Law Team and also achieved Dean’s List awards in IHL and Criminal Threats from Cyberspace courses. During his study, Daniel interned with the Cyberspace Law and Policy Community and contributed to submissions on the ALRC Privacy Tort proposal and the State Parliament’s proposed Classification legislation reforms. Prior to commencing legal study, Daniel worked as a Registered Nurse for 6 years in Intensive Care Departments for major metropolitan adult and children’s hospitals. Daniel has a Bachelor’s Degree in Nursing Science (Distinction) and served for 4 years as a Nursing Officer in the Royal Australian Air Force.    
 

Stanley Shanapinda is a Ph.D. candidate with the Australian Centre for Cyber Security, UNSW, ADFA. He is studying the tripartite relationship between the powers of law enforcement agencies to access and use metadata; the development of communications technologies; and the role of oversight. He holds a Master’s degree in ICT Policy and Regulation from the University of the Witwatersrand in Johannesburg, South Africa. He is a Legal Practitioner of the High Court of Namibia. He is the inaugural CEO of the Communications Regulatory Authority of Namibia and was the Head for Legal Advice at Telecom Namibia Limited.
 

Scarlet Wilcock is a doctoral candidate and sessional lecturer at the University of New South Wales’ School of Law. Scarlet is currently engaged in research projects on policing and criminalisation, and on gender and social security fraud. Her doctoral research critically explores Centrelink surveillance and investigation practices and their effects on welfare recipients and the welfare state more broadly. Previously, she practised as a solicitor in the area of social security law.