There were three panel sessions, each consisting of four short 12-minute presentations followed by extensive Q&A. For a live-blog of the event, see Prof Muki Haklay’s blog post. For a more reflective take on the workshop, see Anthony Behan’s post. The following is my own (JD), no doubt opinionated, summary of the talks. I believe it reflects some of the key themes and issues raised by the talks, but it should not be taken to represent perfectly the priorities and emphases of the speakers.
Panel 1 – ‘Perspectives on Algorithmic Governance’
Professor Willie Golden (NUIG) ‘Information Systems Perspectives on Algorithmic Governance’
Professor Golden’s talk provided an excellent introduction to the day. He explained how issues associated with algorithmic governance and artificial intelligence are nothing new. Algorithmic governance is arguably at the basis of all modern governance structures and problems associated with automation and AI have been discussed in information systems theory for more than 50 years. What is different now is the underlying technology: the rise of big data and the renewed interest in AI over the past four or five years.
Dr Kalpana Shankar (UCD) ‘Algorithmic Governance – and the Death of Governance’
Dr Shankar’s talk raised important questions surrounding data curation. The assumption underlying much of the big data debate is that the data being mined for useful information is ‘just there’. But this is not the case. Decisions are made about which data gets included and, perhaps more importantly, whose data gets included. This affects what people think is important and who they think is important. We should be careful not to further marginalise particular communities through data curation decisions.
Professor John Morison (QUB) ‘Algorithmic Governmentality: Techno-optimism and the move toward the dark side’
Professor Morison gave a very provocative ‘work-in-progress’ talk. He highlighted the computational turn in modern society and queried whether it represented a turn towards the dark side. At the heart of this computational turn lies a myth: that more data, and more open data, equals more accountability. This myth rests upon a fantasy of abundance. He also noted how algorithmic governmentality changes the nature of the governed subject. We are no longer natural persons but rather temporal aggregates of data. What implications does this have for the future of politics? Does it spell the death of politics?
Niall O’Brolchain (Insight Centre) ‘The Open Government Initiative’
Niall’s talk reminded us of the important distinction between governance and government. Governance is everywhere; government is more limited (i.e. confined to legally recognised institutions of government). He then focused on the work of the Open Government Partnership. This is an international initiative that brings together people and government. It tries to use technology to improve the accessibility of government. Governments have to develop action plans to address five grand challenges. He then showed how data could be used to call governments to account on those action plans, identifying the gap between rhetoric and reality.
Panel 2 – ‘Algorithmic Governance and the State’
Dr Brendan Flynn (NUIG) – ‘When Big Data Meets Artificial Intelligence, will Governance by Algorithm be More or Less Likely to Go to War?’
Dr Flynn’s talk focused on the increasing role of algorithms and AI in military systems and considered the lessons we could learn from these systems for civilian equivalents (e.g. self-driving cars). He felt we could learn something particularly important from the administrative ideal of ‘command responsibility’ in the military context. The commander of a naval vessel has ultimate responsibility for what happens on that vessel. The buck stops with him/her. As automated ‘black box’ systems become more prevalent in society, it will be essential to have similarly strong accountability measures in place.
Dr Maria Murphy (Maynooth) – ‘Algorithmic Surveillance: True Negatives’
Dr Murphy’s talk examined the role of surveillance algorithms in the modern state. This has obviously been an area of considerable social concern, particularly in light of the Snowden revelations. Many governments have adopted a ‘collect everything’ attitude and have then used the data collected to make important decisions (e.g. who gets put on a no-fly list). Dr Murphy challenged this approach and then considered the role of the European Convention on Human Rights, in particular Article 8 (the right to respect for private and family life), in protecting against potential abuses of algorithmic surveillance.
Professor Dag Wiese Schartum (Oslo) – ‘Transformation of Law into Algorithm’
Professor Schartum offered some critical commentary on the context statement for the workshop, suggesting that algorithms don’t ‘replace’ public decision-making. Rather, they are a form of public decision-making. He then presented a simple model for transforming legal regulations into algorithms. The model is illustrated below. It is an iterative model and highlights different roles for different social actors in the process.
Dr Heike Felzmann (NUIG) – ‘The Imputation of Mental Health from Social Media Contributions’
Dr Felzmann’s talk looked at attempts to develop algorithms that can impute mental health status from one’s social media profile and contributions. She highlighted a number of studies in recent years that have tried to infer affective disorders like depression and PTSD from both the timing and content of posts to sites like Facebook or Twitter. She noted some disturbing ethical gaps in the development and potential deployment of these algorithms.
Panel 3 – ‘Algorithmic Governance in Practice’
Professor Burkhard Schafer (Edinburgh) – ‘Exhibit A – Algorithms as Evidence in Legal Fact-Finding’
Professor Schafer’s talk focused on the use of algorithm-derived evidence in legal trials. Courts often display an ambivalent attitude toward such evidence, sometimes accepting it with relatively little question (e.g. DNA evidence), at other times being more resistant. Either way, it is essential for the rules of fair procedure and cross-examination to be upheld. How can this be the case if the algorithms are opaque for technical or legal reasons? Professor Schafer suggested three solutions to this problem: (i) increasing use of open source algorithms; (ii) auditing by trusted third parties; and (iii) allowing the algorithm to explain itself to us through natural language interpretations of its own ruleset.
Dr Aisling de Paor (DCU) – ‘Algorithmic Governance and Genetic Information’
Dr de Paor’s talk used the case study of genetic information to explore issues relating to algorithmic governance. Genetic information is now commonplace in society, with much hype and misperception surrounding its utility and effectiveness. Many people do not realise that all genetic information is mined from the human genome using algorithms of some sort. Dr de Paor focused on potential uses of this information in social decision-making, e.g. by insurance companies or employers. She argued that such uses raise important human rights issues, centring on: (i) the right to privacy; (ii) the right to non-discrimination; and (iii) the right to dignity and integrity.
Anthony Behan (IBM – speaking in a personal capacity) – ‘Ad Tech, Big Data and Prediction Markets: The Value of Probability’
Anthony joined us from IBM but spoke at the conference in a personal capacity. He used a particular case study involving online advertising to illustrate the new forms of algorithmic governance. He provided a detailed and fascinating explanation of how real-time bids work in online advertising. These are auctions that try to link consumers to advertisers using data-mining techniques. The auctions take place in their millions every day, operating at 200 millisecond timescales. The goal is to use data derived from consumers to facilitate more tailored advertising. But it is not the consumers themselves who are being bought and sold; rather, it is a probability estimate concerning their likely intent.
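The mechanism Anthony described can be illustrated with a toy sketch of a second-price auction, a design commonly used in real-time bidding exchanges. The advertiser names, bid values, and the `bid_for_user` helper below are purely illustrative assumptions, not details from the talk; the point is simply that what each bidder prices is a probability estimate of the user’s intent, not the user.

```python
def run_auction(bids):
    """Run a simplified second-price auction: the highest
    bidder wins the impression but pays the second-highest
    bid, a common design in real-time bidding exchanges."""
    if len(bids) < 2:
        raise ValueError("need at least two bids")
    # Rank advertisers from highest to lowest bid.
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1]  # second-highest bid
    return winner, clearing_price

def bid_for_user(base_cpm, intent_probability):
    """Scale a base bid (cost per thousand impressions) by the
    estimated probability that this user will act on the ad --
    the probability estimate that is actually being traded."""
    return base_cpm * intent_probability

# Hypothetical advertisers bidding on a single impression.
bids = {
    "advertiser_a": bid_for_user(2.00, 0.8),  # effective bid 1.60
    "advertiser_b": bid_for_user(3.00, 0.4),  # effective bid 1.20
    "advertiser_c": bid_for_user(1.50, 0.9),  # effective bid 1.35
}
winner, price = run_auction(bids)
# advertiser_a wins, paying the second-highest bid of 1.35
```

In a real exchange this logic runs server-side within the ~200 millisecond window mentioned above, with each bidder’s intent probability produced by its own data-mining models.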
Professor Muki Haklay (UCL) – ‘Algorithmic Governance in Environmental Information (or How Technophilia Shapes Environmental Democracy)’
Professor Haklay’s talk looked at the role of data in the environmental movement. It took a historical approach, highlighting some problems in the use of technology to facilitate better environmental decision-making. It noted how open data and access to environmental information have long been cornerstones of the environmental movement. But open data by itself is relatively useless. Very few people know how to make sense of the information that is being collated and released to the public. To truly empower citizens we need to make the data useful. The talk closed by using the recent controversy surrounding the water supply in Flint, Michigan as a cautionary tale.