Algorithmic States to Algorithmic Brains Workshop

The modern age is characterised by two technological trends. The first is the increased outsourcing of human agency to intelligent algorithms; the second is the increased integration between mind and machine. Held at NUI Galway in September 2016, the ‘From Algorithmic States to Algorithmic Brains’ workshop aimed to explore the political, legal and ethical implications of both trends, asking pressing questions such as:

  • If we outsource political and bureaucratic decision-making authority to machines, what effect does this have on key political values such as transparency, democratic participation, fairness, and efficiency?
  • And if we integrate more with machines, what effects might this have on our personal virtues, autonomy, and our sense of self?

Held over two days, the workshop brought together a diverse group of international scholars. Short summaries of their presentations are included below. See here for the schedule and more details on participants.

The workshop took place as part of the Algocracy and Transhumanism project which is funded under the IRC New Horizons scheme.

SESSION ONE – Building Algorithmic Governance Systems

Good decisions by proxy – Tal Zarsky (Haifa)

Tal’s paper explored the problems and possibilities of algorithmic decision-making, especially when such decisions are made by proxy. He recognises that such systems, far from being static, are dynamic – constantly readjusting their inputs and outputs – and are therefore open to manipulation by proxy, or what he calls gaming, by a variety of actors and means. The paper used examples ranging from the manipulation of credit ratings to student attendance monitoring.

There is plenty of time at the bottom: The economics, risk and ethics of time compression – Anders Sandberg (Oxford)

Anders’ paper was an eye-opening exploration of the potential consequences of time compression – or the speeding up of computational tasks to quantum levels – and the effects this might have on human values and society. Raising frightening prospects of the social and economic inequalities, policy vacuums and loss of control that can occur when some processes are speeded up and others are not, he identifies an ethical gap between computation and humans, and a cybernetic gap between computation and regulation. Anders illustrated his paper with examples ranging from enhanced state surveillance apparatus to hypersensitive high-frequency trading algorithms causing flash crashes.

SESSION TWO – Algorithmic Governance in Practice

Fleeing from Frankenstein and meeting Kafka on the way: algorithmic decision-making in higher education – Paul Prinsloo (UNISA)

Paul’s paper raised concerns that increasing bureaucratic and financial pressures are leading to an over-reliance on the algorithmic analysis of student data in higher education – an ‘audit society’. While he sees the potential for good in this, there are inherent dangers in fetishising data relating to students and treating it as a resource to be mined, or as a means of surveillance, without thorough ethical consideration. He posits that technology must have ethics, and tries to answer the question of how to use algorithmic decision-making in higher education to provide a caring, appropriate and affordable learning experience in a way that is also transparent, accountable and ethical.

{poem}.py: a critique of linguistic capitalism – Pip Thornton (RHUL / NUI Galway)

Pip’s paper considered the power embedded in digital and algorithmically mediated language, examples of which become manifest in search engine results or autopredictions. In particular, Pip is concerned with the way Google sells words through its advertising platform, a form of linguistic capitalism which she argues is important on a discursive and epistemic level as well as a political one. The paper concluded with a demonstration of Pip’s {poem}.py project, in which she critiques the exchange value of language to Google by feeding poetry through the AdWords system and printing out the results as receipts.
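
To make the mechanics of the demonstration concrete, here is a minimal toy sketch of the idea only: it does not use Pip’s actual code or call the AdWords API, and the HYPOTHETICAL_CPC table and its per-word prices are invented placeholders standing in for the keyword valuations AdWords would supply.

```python
# Toy illustration of the {poem}.py idea: price a line of poetry word by word
# and print the result as a receipt. The prices below are invented placeholders,
# not real AdWords keyword values.
HYPOTHETICAL_CPC = {
    "season": 0.42, "of": 0.05, "mists": 0.11,
    "and": 0.04, "mellow": 0.18, "fruitfulness": 0.07,
}

def price_words(line):
    """Pair each word in the line with its (assumed) cost-per-click."""
    return [(word, HYPOTHETICAL_CPC.get(word.lower(), 0.0)) for word in line.split()]

def print_receipt(line):
    """Format the priced words as a shop-style receipt."""
    items = price_words(line)
    for word, price in items:
        print(f"{word:<16}£{price:>5.2f}")
    print("-" * 22)
    print(f"{'TOTAL':<16}£{sum(p for _, p in items):>5.2f}")

print_receipt("Season of mists and mellow fruitfulness")
```

The receipt format is doing the rhetorical work here: rendering a line of poetry as an itemised bill makes the exchange value that linguistic capitalism assigns to words immediately visible.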

SESSION THREE – Autonomous Systems, Rights and Responsibilities

Ethics behind the Wheel and under the Hood: Vehicle-Automation and Responsibility-Loci – Sven Nyholm (Eindhoven)

Sven’s paper brought an ethical approach to the burgeoning literature on self-driving cars. Exploring the issue from the perspective of the ‘normal functioning’ mode rather than the accident-based scenarios which tend to dominate the debate, Sven questions whether such driving systems are genuinely autonomous in practice and looks to ideas of collaborative agency and collective responsibility as a way of furthering the debate. Using the differing examples of the Google and Tesla models, he suggests that it is only when we accept that these systems – far from being truly autonomous – are actually machine-human collaborations that we can begin to sort out the ethics and legalities of agency and responsibility.

Other Problems: Rethinking Ethics in the Face of Autonomous Machines – David Gunkel (Northern Illinois)

David’s paper was a fascinating enquiry into the implications of ‘de-othering’ the idea of the autonomous machine. Once we stop imagining technological assemblages such as algorithms, bots or computers as an ‘other’, we can more effectively approach the necessary practicalities such as rights, responsibilities and other ethical considerations of the so-called ‘robot invasion’. Using a moral philosophy framework, David discussed Microsoft’s Tay chatbot and Google’s new domestic personal assistant, Google Home, to illustrate who and what might be the moral subjects and moral patients of the future.

SESSION FOUR – Technological Control and Political Power

Do Blockchains have politics? – Rachel O’Dwyer (Maynooth)

Rachel’s paper provided a fascinating historical context for blockchain as a system of distributed consensus and trust. She highlights the tension within the technology: it is hailed simultaneously from an anarcho-communist angle as a means of non-hierarchical organisation, and by the state as a way of improving bureaucracy. Evoking traces of colonial and feminist critique in the way blockchains govern through numbers, she suggests that, far from being a radical break from traditional forms of regulation and a potential site of political intervention, the blockchain is a post-ideological instrument of quantification which simply replaces politics with economics.

The control of life and everything living. Biohacking as a Technology of Cybernetic Biopolitics – Laura Hille (Leuphana)

Laura presented her Foucauldian critique of biopolitics in the age of bodyhacking techniques such as implantation technologies, which she illustrated with some alarming and uncomfortable examples of ‘body invasive’ enhancements. Drawing parallels between cybernetics and modern-day biopolitics, Laura’s paper raised important questions as to what the ‘bio’ in biopolitics means today and, leading on from this, who has – or should have – control of a biology which has become programmable, and therefore as hackable as software.

SESSION FIVE – Building Better People

Neuroenhancement and Human Values: How they Affect Each Other – Laura Cabrera (Michigan State)

Laura’s paper raised interesting cause-and-effect questions around how neuroenhancement practices not only aim to affect values but cannot themselves be disentangled from their own value judgements. She suggests that a social responsibility framework might be a way to examine and dissipate the tensions underlying the interplay between values and neuroenhancement practices.

Moral Bio-Enhancement, Freedom, Value, And The Parity Principle – Jonathan Pugh (Oxford)

Jonathan’s paper explored the fascinating debate around John Harris’s concept of the ‘freedom to fall’, and how non-cognitive moral bio-enhancements (NCMBEs) might interfere with this freedom. Examining the concept against objections such as Neil Levy’s Parity Principle and Savulescu and Persson’s God Machine, the paper used examples such as the fictional Ludovico technique of aversion therapy in A Clockwork Orange and the use of Baker-Miller pink in prison environments to illustrate the important differences between the freedom to do immoral acts and the freedom to choose to act immorally. These differences have a significant effect on what we can or cannot call moral enhancements or environmental interventions, and highlight the important dilemmas faced by freedom-based objections to NCMBEs.

SESSION SIX – The Future of Technological Governance

AI and the Artificial Moral Advisor – Alberto Giubilini (Oxford)

Alberto’s paper discussed the benefits of having an ‘artificial moral advisor’ to enhance and compensate for what he perceives as the limitations of human moral psychology. He argues that such a system would use artificial intelligence to enhance the positive aspects of human moral functioning while suppressing negatives such as biases and prejudices, in effect creating a ‘moral compass’ that would enable humans to realise their full potential as moral agents.

A Research Agenda for Algorithmic Outsourcing – Chris Noone & John Danaher (NUI Galway)

John and Chris shared results from the collective intelligence workshop that was run as part of the previous project event on Algorithmic Governance, held at NUI Galway in March 2016. Highlighting the problems of studying algorithmic processes, the study proposes the collective intelligence model of Interactive Management as a viable and informative method for studying the conflicting ethical and disciplinary considerations of algorithmic governance.