What can a self-driving car crash teach us about the politics of machine learning?

This post is an excerpt from the paper 'Machine learning, social learning and the governance of self-driving cars', published in Social Studies of Science. It was originally published on the Transmissions blog.


(Image: NTSB simulation of crash scenarios, 2017)


In May 2016, a Tesla Model S was involved in what could be considered the world’s first self-driving car fatality. In the middle of a sunny afternoon, on a divided highway near Williston, Florida, Joshua Brown, an early adopter and Tesla enthusiast, died at the wheel of his car. The car failed to see a white truck that was crossing his path. While in ‘Autopilot’ mode, Brown’s car hit the trailer at 74 mph. The crash only came to public light in late June 2016, when Tesla published a blog post, headlined ‘A tragic loss’, that described Autopilot as being ‘in a public beta phase’.

Self-driving cars, quintessentially ‘smart’ technologies, are not born smart. Their brains are still not fully formed. The algorithms that their creators hope will allow them to soon handle any eventuality are being continually updated with new data. The cars are learning to drive.

Self-driving cars represent a high-stakes test of the powers of machine learning, as well as a test case for social learning in technology governance. Society is learning about the technology while the technology learns about society. Understanding and governing the politics of this technology means asking ‘Who is learning, what are they learning and how are they learning?’

Proponents of self-driving cars see machine learning as a way of compensating for human imperfection. Not only are humans unreliable drivers, they seem to be getting worse rather than better. After decades of falling road deaths in the US, mostly due to improved car design and safety laws, rates have been increasing since 2010, probably due to phone-induced distraction. The rationalists’ lament is a familiar one. T. S. Eliot has the doomed archbishop Thomas Becket say in ‘Murder in the Cathedral’: ‘The same things happen again and again; Men learn little from others’ experience.’

Self-driving cars, however, learn from one another. Tesla’s cars are bought as individual objects, but commit their owners to sharing data with the company in a process called ‘fleet learning’. According to Elon Musk, the company’s CEO, ‘the whole Tesla fleet operates as a network. When one car learns something, they all learn it’, with each Autopilot user as an ‘expert trainer for how the autopilot should work’.

The promise is that, with enough data, this process will soon match and then surpass humans’ abilities. The approach makes the dream of automotive autonomy seem seductively ‘solvable’. It also represents a privatization of learning.

As work by scholars such as Charles Perrow and Brian Wynne has revealed, technological malfunctions are an opportunity for the reframing of governance and the democratization of learning. The official investigations of and responses to the May 2016 Tesla crash represent a slow process of social learning.

The first official report of the May 2016 crash, from the Florida police, put the blame squarely on the truck driver for failing to yield the right of way. However, the circumstances of the crash were seen as sufficiently novel to warrant investigations by the National Transportation Safety Board (NTSB) and the National Highway Traffic Safety Administration (NHTSA). The NTSB is tasked with identifying the probable cause of every air accident in the US, as well as some highway crashes.

The NTSB’s preliminary report was matter-of-fact. It relates that, at 4:40pm on a clear, dry day, a large truck carrying blueberries crossed US Highway 27A in front of the Tesla, which failed to stop. The Tesla passed under the truck, shearing off the car’s roof. The collision cut power to the wheels and the car coasted off the road for 297 feet before hitting and breaking a pole, turning sideways and coming to a stop. Brown was pronounced dead at the scene. The truck was barely damaged.

The NHTSA saw the incident as an opportunity for a crash course in self-driving car innovation. Its Office of Defects Investigation wrote to Tesla demanding data on all of the company’s cars, instances of Autopilot use and abuse, customer complaints, legal claims, a log of all technology testing and modification in the development of Autopilot and a full engineering specification of how and why Autopilot does what it does.

In January 2017, the NHTSA issued its report on the crash. The agency’s initial aim was to ‘examine the design and performance of any automated driving systems in use at the time of the crash’. The technical part of its report emphasized that the Tesla Autopilot was a long way from full autonomy. A second strand of analysis focused on what the NHTSA called ‘human factors’. The agency chose to direct its major recommendation at users: ‘Drivers should read all instructions and warnings provided in owner’s manuals for ADAS [advanced driver-assistance systems] technologies and be aware of system limitations’. The report followed a pattern, familiar in STS, of blaming sociotechnical imperfection on user error: humans, as anthropologist Madeleine Clare Elish has described, become the ‘moral crumple zone’.

The NTSB went further, recognizing the opportunity to learn. The Board sought to clarify that the Tesla Model S didn’t meet the technical definition of a ‘self-driving car’, but blamed the confusion on the company as well as the victim. Its final word on the probable cause of the Tesla crash added a concern with Autopilot’s ‘operational design, which permitted [the driver’s] prolonged disengagement from the driving task and his use of the automation in ways inconsistent with guidance and warnings from the manufacturer’. Tesla, in the words of the NTSB chair, ‘did little to constrain the use of autopilot to roadways for which it was designed’.

Dominant approaches to machine learning still represent a substantial barrier to governance. When the NTSB conducted its investigation, it found a technology that was dripping with data and replete with sensors, but that offered no insight into what the car thought it saw or how it reached its decisions. The car’s brain remained largely off-limits to investigators. At a board meeting in September 2017, one NTSB staff member explained: ‘The data we obtained was sufficient to let us know the [detection of the truck] did not occur, but it was not sufficient to let us know why.’ The need to improve social learning goes beyond accident investigation. If policymakers want to maximize the public value of self-driving car technology, they should be intensely concerned about the inscrutability and proprietary nature of machine learning systems.


Why we must scrutinise the magical thinking behind geoengineering

(This piece was originally written in the wake of the Paris Agreement. I’ve tidied it up and put it here for safe-keeping.)

‘In time of trouble, I had been trained since childhood, read, learn, work it up, go to the literature. Information is control.’

Joan Didion, The Year of Magical Thinking


In The Year of Magical Thinking, novelist Joan Didion reflects on her attempts to think through the grief brought about by her husband’s death and her daughter’s illness. In trying to understand the incomprehensible and control the uncontrollable, she finds herself victim to a childlike belief that she can wish her way to a new reality. This magical thinking seems egregiously unscientific, and yet, when it comes to new technologies, our finest minds can succumb to something similar.

It is hard to know whether to be optimistic or pessimistic about the Paris agreement, whether to celebrate the consensus on new possibilities, mourn the opportunities already missed, or do both at once. This ambivalence is complicated by the strong thread of optimism that is woven into the future to which we have now become signatories. The agreement inscribes a set of technological promises that have received little democratic scrutiny. If we are to deliver on the vision of Paris, we must urgently confront the politics of radical innovation.

Imaginary technologies

A few commentators have pointed out that the projections of future climate change that provide the scientific ingredients of the Paris agreement are themselves based on political choices. The target of two degrees’ warming has been an anchor for climate negotiations for a long time, even as emissions have continued to rise. The arithmetic of the ‘Intended Nationally Determined Contributions’ to climate change mitigation agreed in Paris does not currently add up. The gap between expectation and action has been filled by a new sort of hope: that technological means will emerge to extract greenhouse gases from our atmosphere. Imaginary ‘negative emissions technologies’ are built into all of the IPCC scenarios that point to less than two degrees’ warming.

Keeping global warming below this target, let alone the 1.5-degree aspiration, will demand extraordinary innovation in order to develop systems that not only produce no greenhouse gases, but actively remove them from the atmosphere. In order to secure a climate consensus, we have become signatories to a future that is radically different from the past; we have invested our hopes in emerging technologies about which very little is known. So-called ‘geoengineering’ is profoundly uncertain. It brings with it political and ethical baggage, largely ignored in the Paris negotiations. The more controversial geoengineering proposals known as ‘Solar Radiation Management’ (SRM) have been put to one side for the time being, but planetary-scale Carbon Dioxide Removal also represents a form of magical thinking.

The sociology of expectations

Recognising the power of technological promises to shape our world, sociologists are turning their attention to how futures get imagined. The ‘sociology of expectations’ would say that technological visions, often delivered by those who claim privileged access to emerging innovations, are not mere predictions. Instead, they are a form of performance. Futures may be advertised as inevitable, just around the corner or already here in order to make them more likely. A recent commercial for a Mercedes self-driving car runs like this: ‘Is the world truly ready for a vehicle that can drive itself? Ready or not, the future is here. Introducing the 2017 E-Class from Mercedes-Benz, a concept car that’s already a reality.’ The easiest way to sell a future technology is to pretend that it’s already possible.

Science and technology need a degree of salesmanship. Hype builds around genomics, nanotechnology and other fields in order to attract attention and investment. Hidden from view is the otherwise obvious fact that the future is a product of choices made in the present. Moore’s Law, to take an example, is presented as though it were a law of nature: computers will keep doubling their power over a fixed period. This ‘law’ is in reality a roadmap, the product of conscious decision-making by the semiconductor industry. A recent article in Nature described the effort, organisation and investment required to make exponential technical change seem inevitable. It is only now that semiconductor innovation appears to be running out of physical space on silicon chips that the industry is publicly discussing alternative directions for innovation.

The naturalising of technological progress is disempowering, and it leads to some very bad policy decisions. Expectations around new technologies tend to exclude complexity. The detail about what it would take to realise a particular technological future, which might draw in an unpredictable cast of innovators, consumers, users, regulators, protesters, artists, designers and others, is underdrawn, in part because it is profoundly uncertain and in part because a more accurate picture would obscure the interests of technological optimists. This leads to a paradox: the earlier and more uncertain the technology, the less evidence there is to constrain the hype around it. If downsides are presented, they are often of an apocalyptic flavour, as with the recent concerns expressed by Elon Musk, Stephen Hawking and co that Artificial Intelligence poses an existential threat to humanity. The mundane but more powerful ways in which technologies differentially benefit people do not feature.

It is only once technological promises meet scientific, political and public attention that their complexities start to be made apparent. A wider lens sees that Moore’s law is the exception rather than the rule. Nuclear energy, once deemed ‘too cheap to meter’, has only become more expensive over time as its full social complexity and long-term costs are realised. New waves of technological optimism are accompanied by an amnesia, a sense that, this time, it will be different and our hype will be justified.

Technological fixes

In a recent book, former president of the European Research Council Helga Nowotny defines a technological promise as ‘a risk-free mortgage on the future’. She quotes Hannah Arendt, who argued that ‘By bringing the promised future into the present we are able to create reliability and predictability which would otherwise be out of reach’. The promises we make to ourselves about the future could be said to be a defining feature of human modernity. The trouble is with our hypocrisy: we treat the promises of politicians with extreme scepticism, but those surrounding technology are harder to argue with. Unlike other parts of climate agreements, our technological promises can never be legally binding. Morality and responsibility are vital.

Until we ask difficult questions, new technologies are politically seductive because they seem to offer a way out of politically intractable swamps. If politics is a Gordian knot then technology promises a sword with which to cut through. We do not need to deal with the difficulties of incumbent interests if technologies can avoid such dilemmas. In some cases, our fixes work fine. Vaccines, on the whole, offer a clean solution to the problem of some infectious diseases. But in many cases, technological fixes are ugly, blunt instruments, not sharpened swords.

Geoengineering at first glance offers a handy get-out-of-jail-free card for climate change, the most wicked of wicked problems. But, for anyone who pays closer attention, the fix looks deeply flawed. The most melodramatic geoengineering proposals include schemes to inject sulphate particles into the stratosphere, attempting to mimic the effects of massive volcanic eruptions like that of Tambora in 1815, which cooled the Earth for two years. Scientists have been quick to point out the flaws in this idea. While lowering average global temperature, we would also be likely to disrupt regional weather, as Tambora’s eruption did during 1816 – the ‘year without a summer’; oceans would continue to acidify as carbon dioxide built up beneath our sunshade; and the offer of a technological fix would surely destabilise fragile international attempts at climate change mitigation.

For all of the discussion of geoengineering’s impacts, however, there has been relatively little interest in the question of whether it is at all possible. Scientists have been happier to run with the speculation. David Keith, the world’s most prominent SRM geoengineering researcher, opens his book by arguing that

‘It is possible to cool the planet by injecting reflective particles of sulfuric acid into the upper atmosphere… it is cheap and technically easy’.

The IPCC’s Fifth Assessment Report in 2013 was concerned about possible ‘side effects’, but still decided to include geoengineering in the policymakers’ summary of Working Group 1, which assesses the ‘physical science basis’ for climate change, rather than policy responses. Only six years earlier, the IPCC had dismissed such ideas as ‘speculative and unproven’. Where interdisciplinary assessments have begun, such as those of the Royal Society or the UK Integrated Assessment of Geoengineering Proposals project, the scale of our uncertainty becomes clear. SRM only looks ‘cheap and technically easy’ if we choose not to ask about its full costs.

Few people think that carbon dioxide removal would be cheap and easy. Reversing the unintended exhausts of the industrial revolution, at speed, presents an unprecedented global engineering challenge. Nevertheless, some are optimistic that tweaks to natural systems – regrowing forests, fertilising the oceans or enhancing the weathering of rocks – or new machines to suck CO2 from the air, will be able to significantly counteract our emissions. For the IPCC, the promise of negative emissions has been domesticated in the form of BECCS – bioenergy with carbon capture and storage – which is seen as the most realistic idea currently on the table. But even with these more modest proposals, the gap between future and present, between idea and workable system, is vast. To see the difficulties, one need only look to those seeking to make small carbon capture and storage plants workable and publicly acceptable.

This doesn’t mean that technology won’t be vital. The future does not look like the past. Indeed, climate action is predicated on the possibility of a future that corrects past mistakes. Innovation will be central to this, and innovation is profoundly unpredictable. The collective energy that gathered behind the ‘Mission Innovation’ clean energy initiative in Paris suggests a promising complement to geopolitical negotiation.

However, if we presume that technologies of the future offer a clean break from our past troubles, we will only be disappointed. The Paris agreement has, by quietly signing up to the promise of negative emissions, sidestepped, and therefore postponed, tough political choices. By including these imaginary technologies in its projections, it forces them closer to reality. We therefore need to urgently ask what it would take to make such ideas work, how to understand their uncertainties and how we might govern them.

The illusion of control

Joan Didion’s eventual conclusion was that the answers were not in the literature, that information was not the same as control. She saw her magical thinking as the product of a culture that forbids grief and demands optimism. While a sense of optimism is important for climate action, we must also maintain our ability to analyse. The IPCC has become a model for the rigour of its scientific treatment of past and possible future climates. Being more scientific about the nature of innovation means resisting what the Royal Society identified as ‘appraisal optimism’. The tools of technology assessment, incorporating the values and knowledge of stakeholders and members of the public as well as experts, need to be applied to the promise of greenhouse gas removal, and this evidence needs to be weighed alongside that from economists and climate scientists. We must be as transparent and sceptical about our technological assumptions as we are about those behind climate science. The illusion of control is a dangerous thing. Wishing for a perfect solution can mean ignoring imperfect but necessary political compromises.


Thoughts on the March for Science

I went along to the March for Science in Denver. I was there in part as an observer. As someone who studies the relationship between science and politics, this was a rare opportunity to see public displays of affection and annoyance that are usually private. For social scientists who study science, there has already been plenty to observe and to criticise in the positioning and framing of this march. Much of the criticism misses the point. Notwithstanding anti-nuclear demonstrations in the 80s, this was perhaps the biggest science-centred protest in history, bringing together thousands of people in more than 500 cities around the world. When I first heard about it, I was worried that its motivations were hopelessly unclear. Having been, I relaxed.

We should not be surprised that the planning of a mass mobilisation of scientists and people who care about science is riddled with hypocrisy, confusion and error. This is politics. Some scientists might claim that they are merely marching for truth. But most would admit that they were a constituency that shared a set of broad political values too. And, as John Holdren argued on the day, they need not apologise for these.

One of many wonderful things about the institutions of contemporary science in democracies is that they are able to support contradiction, uncertainty, doubt and disagreement. Many of us in the social sciences would like to see these qualities democratised. We would like greater consideration by scientists of the profound unresolved issues within what we call ‘science’. We would draw attention to the tension between what Sheila Jasanoff has differentiated as ‘truth’ and ‘gain’ as the two grand justifications for science. (Scientists, in political debates, often thrust with claims about technology and progress but, when challenged, parry with claims about truth and objectivity.)

However, the march for science that I saw, rather than being a representation of science’s issues, was a rare opportunity to talk about them. Before it began, the organisers faced difficult questions about diversity, often overlooked when science stays in the lab. Alongside the call to get science more involved in politics, there were calls to talk about the politics of science. The march has been seen by very few as the end of the conversation. As both Roger Pielke Jr and Andrew Maynard suggest, the real question is what comes next. Yes, there were plenty of placards claiming that, when it comes to climate change, “the debate is over”, but the March for Science seemed more interested in opening up than closing down. And scientists, while they could do with a few tips on political messaging, do come up with some funny (albeit niche) slogans.

Here are some of my photos


Well done this man


“Trump pipettes with two hands”. Scathing.


A tie-dye labcoat in front of the State Capitol. How very Colorado.


At this rate, there’s a chance this man may end up with the job.


Here I am with two borrowed Chemtrail signs


The placards got even more complicated. Very few people would have got the joke. I had to get him to explain it, which he did with extreme patience. Ask a physicist what the rate of change of acceleration is called.


“Science Trumps Politics”. Discuss. (I didn’t have the heart. They were such a nice family.)


This placard would have got a higher grade in my University of Colorado graduate science policy class.


While this boy was adding nuance to Denver’s science policy debate, my own kids were playing with dry ice


The Lorax is a big part of US Earth-Day culture. Ahem and Ahem. One of its most interesting messages is that technology can be part of the problem as well as the solution…


… an issue that was taken up by this guy. Great question. I love that he brought it to a science march.


And finally, Earth Day Yoga



Talking ‘tech’ on The World This Weekend

At the start of 2017, I ventured through the snow to KGNU, Boulder’s community radio station, to record an interview with Mark Mardell for The World This Weekend on the BBC.

They edited out my citation for the quote at the start, which is a little embarrassing. I took it from this tweet:


Podcast on self-driving cars

During a visit to the wonderful people at Arizona State University’s School for the Future of Innovation and Society, Andrew Maynard and Heather Ross invited me into the podcast booth to talk about self-driving cars.

This connects to a piece on the Guardian Political Science blog


Frankenstein podcast, featuring Langdon Winner

I made this podcast during a recent meeting called ‘Frankenstein’s Shadow’, which took place at the same time (200 years on) and place (pretty much) as Mary Shelley began writing her great novel. The bulk of the podcast is a talk given by Langdon Winner, the philosopher of technology, in which he revisits his great book, Autonomous Technology, 40 years on.

My aim is to do more of these – a series of Responsible Innovation podcasts. Watch (or listen to) this space.


Acknowledging AI’s dark side

This is the text of a recent letter published in Science, for those who can’t get past the paywall.

Science, 4 September 2015: Vol. 349, no. 6252, p. 1064. DOI: 10.1126/science.349.6252.1064-c

The 17 July special section on Artificial Intelligence (AI) (p. 248), although replete with solid information and ethical concern, was biased toward optimism about the technology.

The articles concentrated on the roles that the military and government play in “advancing” AI, but did not include the opinions of any political scientists or technology policy scholars trained to think about the unintended (and negative) consequences of governmental steering of technology. The interview with Stuart Russell touches on these concerns (“Fears of an AI pioneer,” J. Bohannon, News, p. 252), but, as a computer scientist, Russell focuses his solutions on improved training. Yet even the best training will not protect against market or military incentives to stay ahead of competitors.

Likewise double-edged was M. I. Jordan and T. M. Mitchell’s desire “that society begin now to consider how to maximize” the benefits of AI as a transformative technology (“Machine learning: Trends, perspectives, and prospects,” Reviews, p. 255). Given the grievous shortcomings of national governance and the even weaker capacities of the international system, it is dangerous to invest heavily in AI without political processes in place that allow those who support and oppose the technology to engage in a fair debate.

The section implied that we are all engaged in a common endeavor, when in fact AI is dominated by a relative handful of mostly male, mostly white and East Asian, mostly young, mostly affluent, highly educated technoscientists and entrepreneurs and their affluent customers. A majority of humanity is on the outside looking in, and it is past time for those working on AI to be frank about it.

The rhetoric was also loaded with positive terms. AI presents a risk of real harm, and any serious analysis of its potential future would do well to unflinchingly acknowledge that fact.

The question posed in the collection’s introduction—“How will we ensure that the rise of the machines is entirely under human control?” (“Rise of the machines,” J. Stajic et al., p. 248)—is the wrong question to ask. There are no institutions adequate to “ensure” it. There are no procedures by which all humans can take part in the decision process. The more important question is this: Should we slow the pace of AI research and applications until a majority of people, representing the world’s diversity, can play a meaningful role in the deliberations? Until that question is part of the debate, there is no debate worth having.

Christelle Didier (1), Weiwen Duan (2), Jean-Pierre Dupuy (3), David H. Guston (4), Yongmou Liu (5), José Antonio López Cerezo (6), Diane Michelfelder (7), Carl Mitcham (8), Daniel Sarewitz (9), Jack Stilgoe (10), Andrew Stirling (11), Shannon Vallor (12), Guoyu Wang (13), James Wilsdon (11), Edward J. Woodhouse (14, corresponding author: woodhouse@rpi.edu)

1. Lille University, Education, Lille, 59653, France.
2. Institute of Philosophy, Chinese Academy of Social Sciences, Beijing, 100732, China.
3. Department of Philosophy, Ecole Polytechnique, Paris, 75005, France.
4. School for the Future of Innovation in Society, Arizona State University, Tempe, AZ 85287-5603, USA.
5. Department of Philosophy, Renmin University of China, Beijing, 100872, China.
6. Department of Philosophy, University of Oviedo, Oviedo, Asturias, 33003, Spain.
7. Department of Philosophy, Macalester College, Saint Paul, MN 55105, USA.
8. Liberal Arts and International Studies, Colorado School of Mines, Golden, CO 80401, USA.
9. Consortium for Science, Policy, and Outcomes, Arizona State University, Washington, DC 20009, USA.
10. Department of Science and Technology Studies, University College London, London, WC1E 6BT, UK.
11. Science Policy Research Unit, University of Sussex, Falmer, Brighton, BN1 9SL, UK.
12. Department of Philosophy, Santa Clara University, Santa Clara, CA 95053, USA.
13. Department of Philosophy, Dalian University of Technology, Dalian, 116024, China.
14. Department of Science and Technology Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA.