Acknowledging AI’s dark side

This is the text of a recent letter published in Science, for those who can’t get behind the paywall.
Science 4 September 2015:
Vol. 349 no. 6252 p. 1064
DOI: 10.1126/science.349.6252.1064-c

The 17 July special section on Artificial Intelligence (AI) (p. 248), although replete with solid information and ethical concern, was biased toward optimism about the technology.

The articles concentrated on the roles that the military and government play in “advancing” AI, but did not include the opinions of any political scientists or technology policy scholars trained to think about the unintended (and negative) consequences of governmental steering of technology. The interview with Stuart Russell touches on these concerns (“Fears of an AI pioneer,” J. Bohannon, News, p. 252), but Russell, as a computer scientist, focuses his solutions on improved training. Yet even the best training will not protect against market or military incentives to stay ahead of competitors.

Likewise double-edged was M. I. Jordan and T. M. Mitchell’s desire “that society begin now to consider how to maximize” the benefits of AI as a transformative technology (“Machine learning: Trends, perspectives, and prospects,” Reviews, p. 255). Given the grievous shortcomings of national governance and the even weaker capacities of the international system, it is dangerous to invest heavily in AI without political processes in place that allow those who support and oppose the technology to engage in a fair debate.

The section implied that we are all engaged in a common endeavor, when in fact AI is dominated by a relative handful of mostly male, mostly white and East Asian, mostly young, mostly affluent, highly educated technoscientists and entrepreneurs and their affluent customers. A majority of humanity is on the outside looking in, and it is past time for those working on AI to be frank about it.

The rhetoric was also loaded with positive terms. AI presents a risk of real harm, and any serious analysis of its potential future would do well to unflinchingly acknowledge that fact.

The question posed in the collection’s introduction—“How will we ensure that the rise of the machines is entirely under human control?” (“Rise of the machines,” J. Stajic et al., p. 248)—is the wrong question to ask. There are no institutions adequate to “ensure” it. There are no procedures by which all humans can take part in the decision process. The more important question is this: Should we slow the pace of AI research and applications until a majority of people, representing the world’s diversity, can play a meaningful role in the deliberations? Until that question is part of the debate, there is no debate worth having.

Christelle Didier (1), Weiwen Duan (2), Jean-Pierre Dupuy (3), David H. Guston (4), Yongmou Liu (5), José Antonio López Cerezo (6), Diane Michelfelder (7), Carl Mitcham (8), Daniel Sarewitz (9), Jack Stilgoe (10), Andrew Stirling (11), Shannon Vallor (12), Guoyu Wang (13), James Wilsdon (11), Edward J. Woodhouse (14,*)

  1. Lille University, Education, Lille, 59653, France.
  2. Institute of Philosophy, Chinese Academy of Social Sciences, Beijing, 100732, China.
  3. Department of Philosophy, Ecole Polytechnique, Paris, 75005, France.
  4. School for the Future of Innovation in Society, Arizona State University, Tempe, AZ 85287-5603, USA.
  5. Department of Philosophy, Renmin University of China, Beijing, 100872, China.
  6. Department of Philosophy, University of Oviedo, Oviedo, Asturias, 33003, Spain.
  7. Department of Philosophy, Macalester College, Saint Paul, MN 55105, USA.
  8. Liberal Arts and International Studies, Colorado School of Mines, Golden, CO 80401, USA.
  9. Consortium for Science, Policy, and Outcomes, Arizona State University, Washington, DC 20009, USA.
  10. Department of Science and Technology Studies, University College London, London, WC1E 6BT, UK.
  11. Science Policy Research Unit, University of Sussex, Falmer, Brighton, BN1 9SL, UK.
  12. Department of Philosophy, Santa Clara University, Santa Clara, CA 95053, USA.
  13. Department of Philosophy, Dalian University of Technology, Dalian, 116024, China.
  14. Department of Science and Technology Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA.

*Corresponding author. E-mail:

A tale of two trials

This story is, I think, an interesting example of Responsible Research and Innovation in action. It is a story of an institute and its researchers learning from controversy and from their own experiments in public dialogue. 

In the summer of 2012, a group of protesters assembled at Rothamsted Research, north of London, for what turned out to be an extremely polite protest. Earlier that year, the group had threatened to destroy a young crop of experimental wheat, genetically modified with the aim of repelling aphids. Rothamsted had, to their credit, seen the planting of this crop not as an experiment to be hidden from public view (they were in any case legally bound to reveal its location and timing), but rather as an opportunity to open up a dialogue about the pros and cons of genetic modification as a tool to help improve food security.

Rothamsted were keen to begin a formal public dialogue process focussed on the trial. But, as I and others argued at the time, to do so would have been disingenuous given that there was, understandably, no intention of changing direction or pulling up the crop. Instead, Rothamsted, in conversation with Sciencewise, sensibly chose a strategic, forward-looking dialogue exercise centred on the building of some principles for engagement with industry (a key condensation point for public concerns about GM crops). The exercise and its report informed a new approach, giving the institution and its researchers new confidence in public discussions about contentious agricultural research.

At the same time, Rothamsted found a new use for their high-security test field (the original GM wheat trial cost less than a million pounds, but security around the experiment cost more than two million). In the spring of 2014, a crop of Camelina that had been modified to produce omega-3 fatty acids for nutritional purposes was planted. As with the GM wheat trial two summers earlier, approval had been granted by the UK’s Department for Environment, Food and Rural Affairs, but this time the institute had decided to go public before the crop was in the ground. As soon as the application to run the trial was made, Rothamsted launched a public consultation and invited various stakeholders, including local organic farmers and beekeepers, along to meetings. A concern from the beekeepers about the possible spread of pollen meant that Rothamsted went beyond the demands of regulators, putting a net over the crop during its pollination.

The trial itself took place in the glare of public scrutiny. The BBC filmed the sowing, the flowering and the harvesting of the plant. This time around, however, there was relatively little antagonism. In the summer of this year, Rothamsted published the results of the wheat trial. As with many experiments, the results were negative: the crop didn’t repel its pests as hoped. Unusually for a negative result, the paper received huge coverage and allowed Rothamsted to communicate the message that not only were they doing genuine frontier research, but that they were doing it in public, in the open.


New Paper: Geoengineering as collective experimentation

I’ve just published a paper in the journal Science and Engineering Ethics which gives a summary of one of the ideas in the book – technology as a social experiment – and develops it to discuss how we might think about the politics of conducting experiments in controversial areas of science. The paper began life at a fascinating conference hosted by Ibo van de Poel, a philosopher at Delft running a large project looking at a range of technologies-as-experiments.

The paper is Open Access and it’s available here.


Geoengineering is defined as the ‘deliberate and large-scale intervention in the Earth’s climatic system with the aim of reducing global warming’. The technological proposals for doing this are highly speculative. Research is at an early stage, but there is a strong consensus that technologies would, if realisable, have profound and surprising ramifications. Geoengineering would seem to be an archetype of technology as social experiment, blurring lines that separate research from deployment and scientific knowledge from technological artefacts. Looking into the experimental systems of geoengineering, we can see the negotiation of what is known and unknown. The paper argues that, in renegotiating such systems, we can approach a new mode of governance—collective experimentation. This has important ramifications not just for how we imagine future geoengineering technologies, but also for how we govern geoengineering experiments currently under discussion.


Why so quiet?

This blog has entered a fallow period. I have migrated my blog writing to two other places. The first is the Guardian Political Science blog, which has been going better than I and the other editors feared, given how niche debates about science policy can be.

The second place is the blog of my new book, Experiment Earth. This book organises some of the ideas I have drafted here and goes deeper into the case study of geoengineering. I would, of course, love to know what readers think.

I may resuscitate this blog in due course but, for now, don’t expect much…


Responsible Research and Innovation in action

As policy interest in Responsible Research and Innovation grows, those who are new to the discussion rightly ask what it might mean in practice. How do we know it when we see it? What does irresponsible research and innovation look like? My response is perhaps a bit unsatisfying. RRI is a work-in-progress, as are science, politics and society more broadly. This means that RRI is necessarily experimental and open-ended. Responsible science and responsible technologies will not just show themselves. But we can look at experiments that are taking place at various levels in various places, to see how the rules of research and innovation may be rewritten in more responsible ways. Here is my starting list of 13 RRI things I have found interesting. This is not to say that all of these things are unequivocally good. An important part of RRI is the surfacing of differences and political clashes around all of these things. But they seem to me to be interesting developments.

  1. CAMBIA – Open source biotechnology
  2. The Bermuda Principles of the Human Genome Project
  3. Jonas Salk refusing to patent the Polio Vaccine
  4. Joseph Rotblat leaving the Manhattan Project and starting Pugwash
  5. The MHRA’s Yellow Card Scheme – now opened up to members of the public as what Sheila Jasanoff would call a ‘technology of humility’  
  6. Berkeley Earth – an independent re-analysis of surface temperature records, set up in response to criticisms of climate science
  7. The UK Biobank Ethics and Governance Council
  8. The Sciencewise Expert Resource Centre – running public dialogue for the UK Government
  9. CSynBi and Flowers – multidisciplinary Synthetic Biology Research
  10. EPSRC’s framework for responsible innovation
  11. GSK’s involvement in patent pools for neglected diseases and open innovation
  12. The Alzheimer’s Society QRD network – involving carers and patients in research funding and management
  13. The SPICE project – an early set of technical and social experiments in the world of geoengineering research (see my, ahem, book)

This list is inspired by a project in which I and colleagues at UCL are involved, called RRI TOOLS. It aims to develop a toolkit for responsible research and innovation that can be taken up by scientists, policymakers, companies and others. The inevitable imperfection with such a project is that it will tend to emphasise processes rather than outcomes. This is why, as with the SPICE project, the Polio Vaccine, Pugwash and the Bermuda Principles, we should also pay attention to situations in which people have responsibility thrust upon them. Systems of research and innovation are as likely to be responsibly shaped by accidents as by intentional efforts to increase public engagement and force disciplines to work together.

If you are reading this and have suggestions for more, please add them. I hope this unscientific sample also prompts questions about the criteria for selection, beyond my main one, which is ‘interestingness’.


Governing emerging technologies 2013 Blog Award Winners

(This post is reprinted from the Guardian Political Science blog)

Every year, I teach a course for UCL undergraduates on Governing Emerging Technologies. Students from our department – Science and Technology Studies – join students from science degrees around UCL to think about technologies before they are set in stone. The course is an exercise in navigating uncertainty. There are few definitive statements to rely upon, and I ask students to be sceptical of claims that scientists, inventors, ethicists, policymakers or anyone else make about the future. As well as doing the usual essays, I also get them to blog about whatever aspects of emerging technologies they like.

Some of the results were brilliant, blending difficult sociological ideas with cutting-edge science and first-class writing. As universities go quiet for August, I thought now would be a good time to highlight, and link to, my three favourite examples.

First up, Brandon Gleken, who was visiting for a term from the University of Pennsylvania. Brandon began with an interest in venture capital and innovative start-ups. Over the term, he developed a critical angle and put some politics back into a debate that is often breathlessly enthusiastic. His post about “solipsistic startups” is a great case of using one strong idea to hold some important and difficult messages.

Secondly, Rosie Walters. Her blog did a brilliant job of retelling and updating some stories that are often repeated in STS. In particular, her take on feminism and technology, looking at the washing machine, is a far better introduction to that debate than you would find in most dry academic texts.

Finally, Philipp Boeing, who is already involved in the young science of synthetic biology, and came to the Governing Emerging Technologies course from a computer science degree. His blog is autobiographical, including reflections on social and ethical questions as part of his journey towards scientific research. His post on scientific and artistic freedom is an honest account of a perennial tension that a lot of practicing scientists feel.

These students have agreed to let me point people to their work. But, if you visit their blogs, remember that they are not experienced bloggers. They are blogging as part of a course requirement. Their work deserves a wider audience, and it deserves praise. Encouraging comments only, please.


Letter to Nature

(Led by Sam Evans from Berkeley, a few of us in the Science and Technology Studies community have written a letter to Nature in response to a recent comment piece on Synthetic Biology. As the paper is paywalled, I have pasted a version of it here). 


Synthetic biology: missing the point

Volker ter Meulen warns that if environmental groups and others exaggerate the risks of synthetic biology it could promote over-regulation, which he says happened for genetically modified organisms (see here). But the point of supporting synthetic biology is not about making sure that science can go wherever it wants: it is about making the type of society people want to live in.

In the United States, for example, the rapid and uncritical introduction of genetically modified organisms prevented debate on issues such as alternative innovation pathways, and the impact on biodiversity and pest resistance. Many believe that these issues would have been better addressed through earlier and broader public discussion of the uncertainties surrounding transgenic organisms (see, for example, S. Jasanoff, Designs on Nature, Princeton Univ. Press; 2005).

In our view, ter Meulen trivializes the role of social scientists in suggesting that they could help the synthetic-biology debate by finding better ways to communicate what scientists think. He also implies that public concern over such technologies and their governance reflects only a failure to understand the science of risk assessment — but this ‘deficit model’ of public concerns has long been discredited (see A. Irwin and B. Wynne, Misunderstanding Science?, Cambridge Univ. Press; 1996).

It is not unknown for scientists themselves to foster exaggeration and uncritical acceptance of claims, or to focus on anticipated benefits rather than on risks. This practice may be at the heart of wider public concerns about responsible innovation (see the report of the Synthetic Biology dialogue (pdf), for instance).


Sam Weiss Evans University of California, Berkeley, USA.
Sheila Jasanoff Harvard Kennedy School, Cambridge, Massachusetts, USA.
Jane Calvert University of Edinburgh, UK.
Jason Delborne North Carolina State University, Raleigh, USA.
Robert Doubleday University of Cambridge, UK.
Emma Frow University of Edinburgh, UK.
Silvio Funtowicz University of Bergen, Norway.
Brian Green Santa Clara University, California, USA.
Dave H. Guston Arizona State University, Phoenix, USA.
Ben Hurlbut Arizona State University, Phoenix, USA.
Alan Irwin Copenhagen Business School, Denmark.
Pierre-Benoit Joly INRA, IFRIS, Paris, France.
Jennifer Kuzma North Carolina State University, Raleigh, USA.
Megan Palmer Stanford University, California, USA.
Margaret Race SETI Institute, Mountain View, California, USA.
Jack Stilgoe University College London, UK.
Andy Stirling University of Sussex, UK.
James Wilsdon University of Sussex, UK.
David Winickoff University of California, Berkeley, USA.
Brian Wynne Lancaster University, UK.
Laurie Zoloth Northwestern University, Evanston, Illinois, USA.
