We need new rules for self-driving cars

I have a feature piece in Issues in Science and Technology. It makes the case for self-driving car regulation, beginning with questions of safety and expanding into questions of mobility and urban planning.

My argument focuses on the National Transportation Safety Board and the role it played in social learning around the May 2016 Tesla Autopilot crash. After this crash, I wrote a piece for the Guardian asking ‘What will happen when a self-driving car kills a bystander?’ and a longer paper for Social Studies of Science. As the Issues in Science and Technology piece was going to press, the NTSB was called into action again because a self-driving Uber had run over and killed a pedestrian in Tempe, Arizona. Dan Sarewitz (the editor of Issues) and I added a short line to the end of the piece, but the details of the case are still not clear. Five days later, it was reported that another Tesla driver had been killed in a crash while his car was on Autopilot.

The landscape for US self-driving car development is currently a Wild West, a product of thoughtless policymakers competing for the attention of tech companies. The history of technology suggests that, too often, it takes a crisis to prompt regulatory movement. ‘I told you so’ is not a good look, but those of us working on governance had hoped that this time it might be different.

If I were Volvo, GM or Ford, I would be intensely concerned about reputational risk and looking to distance myself from Tesla and Uber. But companies are not going to win public trust by themselves. The question now is how regulators can learn from technological mishaps and build credible governance. Calls for governance are growing in number and volume. Among the most interesting is a proposal from David Danks and Alex John London of Carnegie Mellon to follow the model of the Food and Drug Administration. Most carmakers would baulk at the thought of putting their cars through clinical trials before releasing them onto the open road. But some sort of pre-market approval now seems necessary.

 

(Read the full piece here: ‘We need new rules for self-driving cars’ (pdf), Issues in Science and Technology, Spring 2018).

 


What can a self-driving car crash teach us about the politics of machine learning?

This post is an excerpt from my paper ‘Machine learning, social learning and the governance of self-driving cars’, published in Social Studies of Science. It originally appeared on the Transmissions blog.


(Image: NTSB simulation of crash scenarios, 2017)

 

In May 2016, a Tesla Model S was involved in what could be considered the world’s first self-driving car fatality. In the middle of a sunny afternoon, on a divided highway near Williston, Florida, Joshua Brown, an early adopter and Tesla enthusiast, died at the wheel of his car. The car failed to see a white truck that was crossing its path. While in ‘Autopilot’ mode, Brown’s car hit the trailer at 74 mph. The crash only came to public light in late June 2016, when Tesla published a blog post, headlined ‘A tragic loss’, that described Autopilot as being ‘in a public beta phase’.

Self-driving cars, quintessentially ‘smart’ technologies, are not born smart. Their brains are still not fully formed. The algorithms that their creators hope will allow them to soon handle any eventuality are being continually updated with new data. The cars are learning to drive.

Self-driving cars represent a high-stakes test of the powers of machine learning, as well as a test case for social learning in technology governance. Society is learning about the technology while the technology learns about society. Understanding and governing the politics of this technology means asking ‘Who is learning, what are they learning and how are they learning?’

Proponents of self-driving cars see machine learning as a way of compensating for human imperfection. Not only are humans unreliable drivers, they seem to be getting worse rather than better. After decades of falling road death numbers in the US, mostly due to improved car design and safety laws, rates have been increasing since 2010, probably due to phone-induced distraction. The rationalists’ lament is a familiar one. TS Eliot has the doomed archbishop Thomas Becket say in ‘Murder in the Cathedral’: ‘The same things happen again and again; Men learn little from others’ experience.’

Self-driving cars, however, learn from one another. Tesla’s cars are bought as individual objects, but commit their owners to sharing data with the company in a process called ‘fleet learning’. According to Elon Musk, the company’s CEO, ‘the whole Tesla fleet operates as a network. When one car learns something, they all learn it’, with each Autopilot user as an ‘expert trainer for how the autopilot should work’.
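To make the idea concrete, here is a deliberately toy sketch of centralised fleet learning, in which every car writes its experience to one shared model and reads back from the same model. Tesla’s actual pipeline is proprietary and far more sophisticated; the class names, the observation/outcome framing and the lookup logic here are all invented for illustration.

```python
# A toy sketch of centralised 'fleet learning' (hypothetical, not Tesla's
# proprietary pipeline): every car writes its experience to one shared
# model, and every car reads predictions from that same model.

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class SharedModel:
    """A stand-in for the fleet-wide driving model."""
    examples: List[Tuple[str, str]] = field(default_factory=list)

    def learn(self, observation: str, outcome: str) -> None:
        self.examples.append((observation, outcome))

    def predict(self, observation: str) -> str:
        # Recall the most recent outcome any car has associated with this
        # observation; a real system would generalise, not just look up.
        for seen, outcome in reversed(self.examples):
            if seen == observation:
                return outcome
        return "unknown"


@dataclass
class Car:
    model: SharedModel  # every car holds a reference to the same model

    def report(self, observation: str, outcome: str) -> None:
        # Uploading one car's experience updates the whole fleet.
        self.model.learn(observation, outcome)


fleet_model = SharedModel()
fleet = [Car(fleet_model) for _ in range(3)]

fleet[0].report("white trailer crossing a divided highway", "brake")
# A different car immediately benefits from the first car's experience.
print(fleet[1].model.predict("white trailer crossing a divided highway"))
```

The structural point survives the simplification: learning accrues to the company’s shared model rather than to any individual vehicle, which is what makes it both powerful and, as argued below, a privatization of learning.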

The promise is that, with enough data, this process will soon match and then surpass humans’ abilities. The approach makes the dream of automotive autonomy seem seductively ‘solvable’. It also represents a privatization of learning.

As work by scholars such as Charles Perrow and Brian Wynne has revealed, technological malfunctions are an opportunity for the reframing of governance and the democratization of learning. The official investigations of and responses to the May 2016 Tesla crash represent a slow process of social learning.

The first official report of the May 2016 crash, from the Florida police, put the blame squarely on the truck driver for failing to yield the right of way. However, the circumstances of the crash were seen as sufficiently novel to warrant investigations by the National Transportation Safety Board (NTSB) and the National Highway Traffic Safety Administration (NHTSA). The NTSB is tasked with identifying the probable cause of every air accident in the US, as well as some highway crashes.

The NTSB’s preliminary report was matter-of-fact. It relates that, at 4:40pm on a clear, dry day, a large truck carrying blueberries crossed US Highway 27A in front of the Tesla, which failed to stop. The Tesla passed under the truck, shearing off the car’s roof. The collision cut power to the wheels and the car coasted off the road for 297 feet before hitting and breaking a pole, turning sideways and coming to a stop. Brown was pronounced dead at the scene. The truck was barely damaged.

The NHTSA saw the incident as an opportunity for a crash course in self-driving car innovation. Its Office of Defects Investigation wrote to Tesla demanding data on all of the company’s cars, instances of Autopilot use and abuse, customer complaints, legal claims, a log of all technology testing and modification in the development of Autopilot and a full engineering specification of how and why Autopilot does what it does.

In January 2017, the NHTSA issued its report on the crash. The agency’s initial aim was to ‘examine the design and performance of any automated driving systems in use at the time of the crash’. The technical part of their report emphasized that the Tesla Autopilot was a long way from full autonomy. A second strand of analysis focused on what the NHTSA called ‘human factors’. The agency chose to direct its major recommendation at users: ‘Drivers should read all instructions and warnings provided in owner’s manuals for ADAS [advanced driver-assistance systems] technologies and be aware of system limitations’. The report followed a pattern, familiar in STS, of blaming sociotechnical imperfection on user error: humans, as anthropologist Madeleine Clare Elish has described, become the ‘moral crumple zone’.

The NTSB went further, recognizing the opportunity to learn. The Board sought to clarify that the Tesla Model S didn’t meet the technical definition of a ‘self-driving car’, but blamed the confusion on the company as well as the victim. Its final word on the probable cause of the Tesla crash added a concern with Autopilot’s ‘operational design, which permitted [the driver’s] prolonged disengagement from the driving task and his use of the automation in ways inconsistent with guidance and warnings from the manufacturer’. Tesla, in the words of the NTSB chair, ‘did little to constrain the use of autopilot to roadways for which it was designed’.

Dominant approaches to machine learning still represent a substantial barrier to governance. When the NTSB conducted its investigation, it found a technology that was dripping with data and replete with sensors, but offering no insight into what the car thought it saw or how it reached its decisions. The car’s brain remained largely off-limits to investigators. At a board meeting in September 2017, one NTSB staff member explained: ‘The data we obtained was sufficient to let us know the [detection of the truck] did not occur, but it was not sufficient to let us know why.’ The need to improve social learning goes beyond accident investigation. If policymakers want to maximize the public value of self-driving car technology, they should be intensely concerned about the inscrutability and proprietary nature of machine learning systems.
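In miniature, the inscrutability problem looks something like the hypothetical sketch below (no resemblance to any real perception stack is intended): an investigator can log every input and every output, but the model’s ‘reasoning’ is an unlabelled matrix of numbers.

```python
# A hypothetical illustration of machine-learning inscrutability: the
# decision is fully loggable, but the 'why' lives in opaque weights.

import random

random.seed(0)

# Pretend these weights were learned from millions of miles of driving data.
weights = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]


def classify(features):
    """Score two hypotheses ('obstacle' vs 'clear') from sensor features."""
    scores = [sum(w * x for w, x in zip(row, features)) for row in weights]
    return "obstacle" if scores[0] > scores[1] else "clear"


# An invented feature vector for one camera frame.
frame = [0.9, 0.8, 0.1, 0.2]
print(classify(frame))  # investigators can record what the model decided...
print(weights)          # ...but this is all there is to say about why
```

The logs can establish that a detection did or did not occur; the weights offer no account of why.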


Why we must scrutinise the magical thinking behind geoengineering


(This piece was originally written in the wake of the Paris Agreement. I’ve tidied it up and put it here for safe-keeping).

‘In time of trouble, I had been trained since childhood, read, learn, work it up, go to the literature. Information is control.’

Joan Didion, The Year of Magical Thinking

 

In The Year of Magical Thinking, novelist Joan Didion reflects on her attempts to think through the grief brought about by her husband’s death and her daughter’s illness. In trying to understand the incomprehensible and control the uncontrollable, she finds herself victim to a childlike belief that she can wish her way to a new reality. This magical thinking seems egregiously unscientific, and yet, when it comes to new technologies, our finest minds can succumb to something similar.

It is hard to know whether to be optimistic or pessimistic about the Paris Agreement, whether to celebrate the consensus on new possibilities, mourn the opportunities already missed, or do both at once. This ambivalence is complicated by the strong thread of optimism that is woven into the future to which we have now become signatories. The agreement inscribes a set of technological promises that have received little democratic scrutiny. If we are to deliver on the vision of Paris, we must urgently confront the politics of radical innovation.

Imaginary technologies

A few commentators have pointed out that the projections of future climate change that provide the scientific ingredients of the Paris Agreement are themselves based on political choices. The target of two degrees’ warming has been an anchor for climate negotiations for a long time, even as emissions have continued to rise. The arithmetic of the ‘Intended Nationally Determined Contributions’ to climate change mitigation agreed in Paris does not currently add up. The gap between expectation and action has been filled by a new sort of hope: that technological means will emerge to extract greenhouse gases from our atmosphere. Imaginary ‘negative emissions technologies’ are built into all of the IPCC scenarios that point to less than two degrees’ warming.

Keeping global warming below this target, let alone the 1.5-degree aspiration, will demand extraordinary innovation in order to develop systems that not only produce no greenhouse gases, but actively remove them from the atmosphere. In order to secure a climate consensus, we have become signatories to a future that is radically different from the past; we have invested our hopes in emerging technologies about which very little is known. So-called ‘geoengineering’ is profoundly uncertain. It brings with it political and ethical baggage, largely ignored in the Paris negotiations. The more controversial geoengineering proposals known as ‘Solar Radiation Management’ (SRM) have been put to one side for the time being, but planetary-scale Carbon Dioxide Removal also represents a form of magical thinking.

The sociology of expectations

Recognising the power of technological promises to shape our world, sociologists are turning their attention to how futures get imagined. The ‘sociology of expectations’ would say that technological visions, often delivered by those who claim privileged access to emerging innovations, are not mere predictions. Instead, they are a form of performance. Futures may be advertised as inevitable, just around the corner or already here in order to make them more likely. A recent commercial for a Mercedes self-driving car runs like this: ‘Is the world truly ready for a vehicle that can drive itself? Ready or not, the future is here. Introducing the 2017 E-Class from Mercedes-Benz, a concept car that’s already a reality.’ The easiest way to sell a future technology is to pretend that it’s already possible.

Science and technology need a degree of salesmanship. Hype is built around genomics, nanotechnology and other fields in order to attract attention and investment. Hidden from view is the otherwise obvious fact that the future is a product of choices in the present. Moore’s Law, to take an example, is presented as though it is a law of nature: computers will keep doubling their power over a fixed period. This ‘law’ is in reality a roadmap, the product of conscious decision-making by the semiconductor industry. A recent article in Nature described the effort, organisation and investment required to make exponential technical change seem inevitable. It is only now that semiconductor innovation appears to be running out of physical space on silicon chips that the industry is publicly discussing alternative directions for innovation.
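As a back-of-the-envelope illustration of what ‘doubling over a fixed period’ amounts to, here is a small sketch; the starting figure (roughly 2,300 transistors on Intel’s 4004 in 1971) and the two-year doubling period are commonly cited values, not taken from the piece.

```python
# Moore's 'law' as plain arithmetic: count(t) = count(0) * 2 ** (t / period).

def projected_transistors(initial_count: float, years: float,
                          doubling_period: float = 2.0) -> float:
    """Project a transistor count forward under a fixed doubling period."""
    return initial_count * 2 ** (years / doubling_period)


# Intel's 4004 (1971) had roughly 2,300 transistors. Projecting 45 years
# forward lands, remarkably, at the right order of magnitude for the
# biggest chips of the mid-2010s.
print(f"{projected_transistors(2_300, 45):,.0f}")  # ≈ 13.6 billion
```

A curve this steep held for four decades not because nature demanded it but, as the Nature article describes, because an industry organised itself to keep the promise true.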

The naturalising of technological progress is disempowering, and it leads to some very bad policy decisions. Expectations around new technologies tend to exclude complexity. The detail about what it would take to realise a particular technological future, which might draw in an unpredictable cast of innovators, consumers, users, regulators, protesters, artists, designers and others, is underdrawn, in part because it is profoundly uncertain and in part because a more accurate picture would not serve the interests of technological optimists. This leads to a paradox: the earlier and more uncertain the technology, the less evidence there is to constrain the hype around it. If downsides are presented, they are often of an apocalyptic flavour, as with the recent concerns expressed by Elon Musk, Stephen Hawking and co that Artificial Intelligence poses an existential threat to humanity. The mundane but more powerful ways in which technologies differentially benefit people do not feature.

It is only once technological promises meet scientific, political and public attention that their complexities start to be made apparent. A wider lens sees that Moore’s law is the exception rather than the rule. Nuclear energy, once deemed ‘too cheap to meter’, has only become more expensive over time as its full social complexity and long-term costs are realised. New waves of technological optimism are accompanied by an amnesia, a sense that, this time, it will be different and our hype will be justified.

Technological fixes

In a recent book, Helga Nowotny, former president of the European Research Council, defines a technological promise as ‘a risk-free mortgage on the future’. She quotes Hannah Arendt, who argued that ‘By bringing the promised future into the present we are able to create reliability and predictability which would otherwise be out of reach’. The promises we make to ourselves about the future could be said to be a defining feature of human modernity. The trouble is our hypocrisy: we treat the promises of politicians with extreme scepticism, but those surrounding technology are harder to argue with. Unlike other parts of climate agreements, our technological promises can never be legally binding. Morality and responsibility are vital.

Until we ask difficult questions, new technologies are politically seductive because they seem to offer a way out of politically intractable swamps. If politics is a Gordian knot then technology promises a sword with which to cut through. We do not need to deal with the difficulties of incumbent interests if technologies can avoid such dilemmas. In some cases, our fixes work fine. Vaccines, on the whole, offer a clean solution to the problem of some infectious diseases. But in many cases, technological fixes are ugly, blunt instruments, not sharpened swords.

Geoengineering at first glance offers a handy get-out-of-jail-free card for climate change, the most wicked of wicked problems. But, for anyone who pays closer attention, the fix looks deeply flawed. The most melodramatic geoengineering proposals include schemes to inject sulphate particles into the stratosphere, attempting to mimic the effects of massive volcanic eruptions like that of Tambora in 1815, which cooled the Earth for two years. Scientists have been quick to point out the flaws with this idea. While lowering average global temperature, we would also likely disrupt regional weather, as Tambora’s eruption did during 1816 – the ‘year without a summer’; oceans would continue to acidify as carbon dioxide built up beneath our sunshade; and the offer of a technological fix would surely destabilise fragile international attempts at climate change mitigation.

For all of the discussion of geoengineering’s impacts, however, there has been relatively little interest in the question of whether it is at all possible. Scientists have been happier to run with the speculation. David Keith, the world’s most prominent SRM geoengineering researcher, opens his book by arguing that

‘It is possible to cool the planet by injecting reflective particles of sulfuric acid into the upper atmosphere… it is cheap and technically easy’.

The IPCC’s Fifth Assessment Report in 2013 was concerned about possible ‘side effects’, but still decided to include geoengineering in the policymakers’ summary of Working Group 1, which assesses the ‘physical science basis’ for climate change, rather than policy responses. Only six years earlier, the IPCC had dismissed such ideas as ‘speculative and unproven’. Where interdisciplinary assessments have begun, such as those of the Royal Society or the UK Integrated Assessment of Geoengineering Proposals project, the scale of our uncertainty becomes clear. SRM only looks ‘cheap and technically easy’ if we choose not to ask about its full costs.

Few people think that carbon dioxide removal would be cheap and easy. Reversing the unintended exhausts of the industrial revolution, at speed, presents an unprecedented global engineering challenge. Nevertheless, some are optimistic that tweaks to natural systems – regrowing forests, fertilising the oceans or enhancing the weathering of rocks – or new machines to suck CO2 from the air, will be able to significantly counteract our emissions. For the IPCC, the promise of negative emissions has been domesticated in the form of BECCS – biomass energy with carbon capture and storage – which is seen as the most realistic idea currently on the table. But even with these more modest proposals, the gap between future and present, between idea and workable system, is vast. To see the difficulties, one need only look to those seeking to make small carbon capture and storage plants workable and publicly acceptable.

This doesn’t mean that technology won’t be vital. The future does not look like the past. Indeed, climate action is predicated on the possibility of a future that corrects past mistakes. Innovation will be central to this, and innovation is profoundly unpredictable. The collective energy that gathered behind the ‘Mission Innovation’ clean energy initiative in Paris suggests a promising complement to geopolitical negotiation.

However, if we presume that technologies of the future offer a clean break from our past troubles, we will only be disappointed. The Paris Agreement has, by quietly signing up to the promise of negative emissions, sidestepped, and therefore postponed, tough political choices. By including these imaginary technologies in its projections, it forces them closer to reality. We therefore need to ask urgently what it would take to make such ideas work, how to understand their uncertainties and how we might govern them.

The illusion of control

Joan Didion’s eventual conclusion was that the answers were not in the literature, that information was not the same as control. She saw her magical thinking as the product of a culture that forbids grief and demands optimism. While a sense of optimism is important for climate action, we must also maintain our ability to analyse. The IPCC has become a model for the rigour of its scientific treatment of past and possible future climates. Being more scientific about the nature of innovation means resisting what the Royal Society identified as ‘appraisal optimism’. The tools of technology assessment, incorporating the values and knowledge of stakeholders and members of the public as well as experts, need to be applied to the promise of greenhouse gas removal, and this evidence needs to be weighed alongside that from economists and climate scientists. We must be as transparent and sceptical about our technological assumptions as we are about those behind climate science. The illusion of control is a dangerous thing. Wishing for a perfect solution can mean ignoring imperfect but necessary political compromises.


Thoughts on the March for Science

I went along to the March for Science in Denver. I was there in part as an observer. As someone who studies the relationship between science and politics, I had a rare opportunity to see public displays of affection and annoyance that are usually private. For social scientists who study science, there has already been plenty to observe and to criticise in the positioning and framing of this march. Much of the criticism misses the point. Notwithstanding the anti-nuclear demonstrations of the 80s, this was perhaps the biggest science-centred protest in history, bringing together thousands of people in more than 500 cities around the world. When I first heard about it, I was worried that its motivations were hopelessly unclear. Having been, I relaxed.

We should not be surprised that the planning of a mass mobilisation of scientists and people who care about science is riddled with hypocrisy, confusion and error. This is politics. Some scientists might claim that they are merely marching for truth. But most would admit that they were a constituency that shared a set of broad political values too. And, as John Holdren argued on the day, they need not apologise for these.

One of many wonderful things about the institutions of contemporary science in democracies is that they are able to support contradiction, uncertainty, doubt and disagreement. Many of us in the social sciences would like to see these qualities democratised. We would like greater consideration by scientists of the profound unresolved issues within what we call ‘science’. We would draw attention to the tension between what Sheila Jasanoff has differentiated as ‘truth’ and ‘gain’ as the two grand justifications for science. (Scientists, in political debates, often thrust with claims about technology and progress but, when challenged, parry with claims about truth and objectivity).

However, the March for Science that I saw, rather than being a representation of science’s issues, was an opportunity to talk about them. Before it began, the organisers faced difficult questions about diversity, often overlooked when science stays in the lab. Alongside the call to get science more involved in politics, there were calls to talk about the politics of science. The march has been seen by very few as the end of the conversation. As both Roger Pielke Jr and Andrew Maynard suggest, the real question is what comes next. Yes, there were plenty of placards claiming that, when it comes to climate change, ‘the debate is over’, but the March for Science seemed more interested in opening up than closing down. And scientists, while they could do with a few tips on political messaging, do come up with some funny (albeit niche) slogans.

Here are some of my photos

 

Well done this man


“Trump pipettes with two hands”. Scathing.


A tie-dye labcoat in front of the State Capitol. How very Colorado.


At this rate, there’s a chance this man may end up with the job.


Here I am with two borrowed Chemtrail signs


The placards got even more complicated. Very few people would have got the joke. I had to get him to explain it, which he did with extreme patience. Ask a physicist what the rate of change of acceleration is called.


“Science Trumps Politics”. Discuss. (I didn’t have the heart. They were such a nice family.)


This placard would have got a higher grade in my University of Colorado graduate science policy class.


While this boy was adding nuance to Denver’s science policy debate, my own kids were playing with dry ice


The Lorax is a big part of US Earth-Day culture. Ahem and Ahem. One of its most interesting messages is that technology can be part of the problem as well as the solution…


… an issue that was taken up by this guy. Great question. I love that he brought it to a science march.


And finally, Earth Day Yoga



Talking ‘tech’ on The World This Weekend

At the start of 2017, I ventured through the snow to KGNU, Boulder’s community radio station, to record an interview with Mark Mardell for The World This Weekend on the BBC.

They edited out my citation for the quote at the start, which is a little embarrassing. I took it from this tweet:


Podcast on self-driving cars

During a visit to the wonderful people at Arizona State University’s School for the Future of Innovation and Society, Andrew Maynard and Heather Ross invited me into the podcast booth to talk about self-driving cars.

This connects to a piece on the Guardian Political Science blog.


Frankenstein podcast, featuring Langdon Winner

I made this podcast during a recent meeting called ‘Frankenstein’s Shadow’, which took place at the same time (200 years on) and place (pretty much) as Mary Shelley began writing her great novel. The bulk of the podcast is a talk given by Langdon Winner, the philosopher of technology, in which he revisits his great book, Autonomous Technology, 40 years on.

My aim is to do more of these – a series of Responsible Innovation podcasts. Watch (or listen to) this space.
