Review of The Space Barons, in the Literary Review

The Space Barons: Elon Musk, Jeff Bezos, and the Quest to Colonize the Cosmos

By Christian Davenport

Originally published in the Literary Review, May 2018

In February this year, to much fanfare, Elon Musk propelled a car made by his company Tesla towards Mars. The Falcon Heavy, made by his other company, SpaceX, is the world’s most powerful rocket, with three boosters and a capsule that, for this trip, contained the Tesla car but may in time transport astronauts to the International Space Station and beyond.

A few weeks later, Microsoft co-founder Paul Allen’s Stratolaunch prototype emerged from its hangar to trundle along a Californian runway. It is the largest plane ever built, with six jumbo-jet engines, 28 wheels and a 117-metre wingspan. Its job will be to carry rockets to an altitude of around 10,000 metres, at which point they will be released to continue their journeys into space. Should it fly successfully, Allen’s plane will take the record for the largest wingspan of any aircraft that has ever flown, beating Howard Hughes’s Spruce Goose, which flew once, in 1947, for less than a minute.

In The Space Barons, Christian Davenport, a writer for the Washington Post, takes us to a new entrepreneurial frontier: the private space industry. It was easy to dismiss the Spruce Goose as an eccentric billionaire’s hobby. Musk’s Tesla stunt points to something more ambitious. In addition to Allen and Musk, Davenport’s cast includes Richard Branson and Jeff Bezos, with a walk-on part for poker-playing plutocrat Andy Beal. With the withering of NASA’s ambitions in the 21st century, these are the heroes who will define the future of space exploration. The book is written in the airport-friendly popular-business style of Tom Friedman, replete with mixed metaphors (‘The hare was letting it hang out for everyone to see, writing the script live and in public … But it had guts’) and Shatner-esque sentence stubs.

The characters are very rich and very male, with astronomical ambitions. The potted biographies in this book suggest that, as well as looking up to space, these men are also looking back to a more heroic age, in which men went to the Moon. At one point, as befits a man unconstrained by resources, Bezos decides that he must scour the ocean for one of the dozens of booster engines thrown from Saturn V rockets during the Apollo years – rockets far more powerful than any used today. More than four kilometres down, he finds the very engine from the centre of the Apollo 11 rocket that took Neil Armstrong et al to the Moon.

The trouble is that this new space race is just not as interesting as the first one. We were once sold a story of exploration. Exploitation is a poor sequel. Musk and co are all talking about building bases on Mars (SpaceX sells an ‘Occupy Mars’ T-shirt, a pretty brazen appropriation by the 1 per cent of a once-radical slogan), but the private space industry is basically Space FedEx, making its money taking satellites into orbit and bits and pieces to the International Space Station.

This is big business. It is not The Right Stuff. Tom Wolfe said that his 1979 book was about what ‘makes a man willing to sit up on top of an enormous Roman candle’. It concentrated on the people inside the rockets, even though the astronauts were derided by their test pilot predecessors as ‘Spam in a can’ because they weren’t really in control of their vessels. The private space industry runs robotic rockets. There is no real jeopardy. Davenport attempts to heroise his characters. But Bezos’s helicopter crash while out launch pad-hunting cannot compare to the fire that engulfed the Apollo 1 astronauts in 1967. Branson, an old-fashioned huckster who once claimed he would have tourists in space by 2007, brings the human interest element to the story, along with some tragedy. In 2014, the Virgin Galactic VSS Enterprise broke up in flight, killing one pilot and injuring another.

Davenport attempts to turn his industry analysis into an adventure by writing in a baddie. Musk, Bezos, Branson and friends – a ‘merry band of rocketeers’ – are fighting against, you guessed it, Big Government. This is ingenuity against bureaucracy, entrepreneurial zeal against government bloat.

Except that this battle is actually a love-in. The billionaires are hopelessly nostalgic for a time when the military-industrial complex was all-powerful and NASA’s share of tax dollars was disgracefully large. Nobody has been to the Moon since 1972 and NASA even lost the ability to put astronauts into space when its expensive, unsafe space shuttle was retired in 2011. Since then, the USA has had to pay the Russians for a lift. These emasculations have clearly hurt the dreams of our billionaires. Their companies, notwithstanding speculative plans for space tourism, are entirely dependent on government contracts. (Incidentally, Hughes built the Spruce Goose under contract with the US government.)

Davenport’s determination to depict his characters as plucky upstarts (he calls a successful SpaceX launch the ‘triumph of the little guy’) rather than masters of the universe suggests that he has forgotten his own title. If these space barons are the 21st century’s robber barons, we should ask whether their power is being used for the public good. Davenport doesn’t want to mention, for example, that Beal, whose anti-government rants would be ridiculous in any industry, let alone one subsidised by government, has served as Donald Trump’s economic adviser. Rather than making heroes of these individuals, we should be asking who’s in control of space.


Review of Autonomy in Literary Review, October 2018

(The paywalled version of this review is here).

Will the Wheels Come Off?

Autonomy: The Quest to Build the Driverless Car – and How It Will Reshape Our World

By Lawrence D Burns with Christopher Shulgan

 

There is a video on the Tesla website that shows something magical: a car driving itself through a foggy Silicon Valley suburb. The man in the driving seat, we are told, is there only ‘for legal reasons’. He is a lumpen spectator as his car manoeuvres through junctions, around traffic cones and back to its home. Monitors show what the robot car is seeing as it makes its way along the road. The car detects and classifies objects, putting coloured boxes around things like other cars, cyclists and road signs. If these things are moving, the software predicts their next steps. Ten years ago, a self-driving car seemed impossible, but here it is, driving past pedestrians who are happily unaware of the magic on their streets.
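
What the monitors are showing can be caricatured in a few lines of code. The sketch below is purely illustrative – invented object names, invented coordinates and a bare constant-velocity guess about what moves next – and bears no relation to Tesla’s actual software, but it captures the detect–classify–predict loop the video depicts.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    label: str        # e.g. "car", "cyclist", "road sign" (invented labels)
    x: float          # position along the road in metres (invented)
    y: float          # lateral position in metres (invented)
    vx: float = 0.0   # estimated speed along the road, metres per second
    vy: float = 0.0   # estimated lateral speed

def predict_next(obj: TrackedObject, dt: float = 0.5):
    """A naive constant-velocity guess at where a moving object will be dt seconds from now."""
    return obj.x + obj.vx * dt, obj.y + obj.vy * dt

# One tick of the loop: classify, draw a box, and, if the thing moves, predict its next step.
scene = [
    TrackedObject("parked car", x=12.0, y=2.5),
    TrackedObject("cyclist", x=8.0, y=-1.0, vx=4.0),
    TrackedObject("traffic cone", x=5.0, y=0.5),
]

for obj in scene:
    report = f"{obj.label}: boxed at ({obj.x:.1f}, {obj.y:.1f})"
    if (obj.vx, obj.vy) != (0.0, 0.0):
        nx, ny = predict_next(obj)
        report += f", expected next at ({nx:.1f}, {ny:.1f})"
    print(report)
```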

Should we believe our eyes? Simulations such as the one in the Tesla video are, in the words of one roboticist, doomed to succeed. Alongside the hype, we have seen high-profile failures. In May 2016, Joshua Brown died instantly when neither he nor his Tesla, which was in Autopilot mode, saw a truck that was crossing the road ahead of him. Two years later, Elaine Herzberg was killed by a self-driving Uber while walking her bicycle across a road in Tempe, Arizona. The achievements of Uber, Tesla and Waymo (Google’s self-driving company) are extraordinary. But are they good enough? Are we ready to hand over control of our cars, and possibly our futures, to Silicon Valley?

A video on the website of a car company is an unreliable guide to the future. So too, unfortunately, is this book. Lawrence D Burns, who tells his story here with the help of journalist Christopher Shulgan, is a car man, a Detroit insider who lived through the devastation wrought by the financial crisis on his company, General Motors. He knows that making cars is not easy: he talks convincingly about Detroit’s expertise in ‘hardening’ the bits of a car to cope with the myriad different conditions it must face. But this is not a book about hardware. Burns has been converted by Silicon Valley. He is now a true believer in software.

He has the certainty of a good evangelist: ‘we’re going to take 1.3 million fatalities a year and cut them by 90 percent. We’re going to eliminate oil dependence in transportation. We’re going to erase the challenges of parking in cities … People who haven’t been able to afford a car will be able to afford the sort of mobility only afforded to those with cars. And we’re going to slow climate change.’ The wellspring for these projected transformations is artificial intelligence.

The future he imagines has a long history. The book mentions attempts at transport automation stretching back to the middle of the 20th century, when car companies grudgingly began to introduce safety technologies such as airbags, and imaginative governments experimented with driverless trains and roads that would talk to cars.

But Burns and Shulgan aren’t interested in that history. Their story begins in 2004, when the US Defense Advanced Research Projects Agency (DARPA) staged a competition to find a team of engineers who could build a robot car able to navigate the Mojave Desert. All of the entrants failed, but many of the competitors went on to lead the teams developing self-driving cars for Google and Uber, supported by massive resources (one think-tank reckons that $80 billion was invested in the industry in the years 2014–18). Subsequent competitions showed how quickly things were changing. Burns calls the third DARPA competition, which took place in 2007 in a model town, ‘the moment … when everything changed’. In 2004, during what one news report called ‘DARPA’s debacle in the desert’, vehicles were crashing, capsizing and catching fire. By 2007, upgraded software autonomously steered six vehicles past parked cars and moving traffic. By 2015, a team at Google had, in secret, racked up 101,000 miles of testing on Californian roads in order to meet a target set by their boss, Larry Page, and earn a hefty bonus. In doing so, and with the support of enthusiastic tech reporters, they set a precedent. The notion that testing technology in public is not just responsible but also vital is now being written into US law. It is depressing to note that, after Elaine Herzberg was killed and the governor of Arizona announced that Uber’s right to test self-driving vehicles would be suspended, the governor of Ohio immediately invited the company onto his state’s streets. As with traditional carmakers in the 20th century, companies are being encouraged into recklessness in the name of automotive freedom.

Burns’s critical insights into the car industry are valuable. He sees the absurdity of the state of the motoring industry in the USA, where there is more than one car per driving American, with each vehicle gigantically over-engineered, according to the ‘occasional use imperative’, and chronically underused. He quotes the writer Edward Humes: ‘in almost every way imaginable, the car, as it is deployed and used today, is insane.’ And the revelations about the secret experiment that Google undertook in plain sight on California’s roads are important. Teaching a car to drive is far more complicated than teaching a computer to play chess, and the authors are good on the details. But the book is a cut and shut – a biography welded to a Google corporate prospectus, a business book that has had a scientific paint job.

This is a book about unbounded innovation that allows itself and its characters almost no imagination. The future that Burns sees is inevitable, and the individuals he describes are mere instruments in the process. We are expected to believe that their feet are on the accelerator but that they have no steering wheel. Perhaps this comes from technological determinism – the assumption that technology is an autonomous driver of social change. Or perhaps it is a reflection of quite how entrenched car culture is in the American West. Had the book been written by a native of London or New York, it would surely have come with some consideration of new possibilities for public transport. The video on the Tesla website is not the whole truth. Silicon Valley wants us to believe that autonomous cars will move effortlessly through traffic. But as in the 20th century, when the car was first introduced, making this technology ‘work’ will require vast changes to our roads, towns and lifestyles. Figuring all this out calls for more than just clever engineering.


Geoengineering. A chapter in the Companion to Environmental Studies

I have a chapter in the new Routledge Companion to Environmental Studies summarising the debate on geoengineering. The publisher, despite employing our free labour, has refused to send out copies to authors.

I’ve pasted a preprint version of the chapter below. A PDF is here.

 

Abstract

Geoengineering means deliberately manipulating the Earth’s climate in order to counteract climate change. It is, as yet, little more than a set of ideas, with little in the way of technological development behind it. However, proposals such as the spreading of sulphate particles in the stratosphere are now starting to be taken seriously by climate scientists. Geoengineering would mean humanity taking responsibility for planetary climate control. It raises huge questions about technological feasibility and unintended climatic consequences, as well as the ethics and politics of pursuing a technological fix for climate change.

 

Introduction

Responses to the global problem of climate change have conventionally been separated into mitigation and adaptation. As concerted global action on climate has stalled, national science advisers and others have augmented this framing to press their case. In 2010, John Holdren told a climate change conference that “We only have three options… It’s really that simple: mitigation, adaptation, and suffering.”[1]

However, in the darker corners of the climate debate, another, more radical option has sometimes been discussed: geoengineering (or ‘climate engineering’). The idea of intentional technological interference in the climate system in order to cool the planet has a long history, but has only recently emerged to become a topic of mainstream scientific discussion.

 

History: Rethinking the unthinkable

Geoengineering is often discussed as an ‘emerging technology’. But it is not a technology at all. It is not even a basket of technologies, or potential technologies, although some of the technical possibilities suggested for geoengineering are better developed than others. Geoengineering is an idea, and it is a deeply problematic one. In 2010, science writer Eli Kintisch announced that it was ‘a bad idea whose time has come’.[2]

As an idea, geoengineering has a history as long as that of modern science. Francis Bacon, in his imaginary constitution of Salomon’s House, saw the control of the weather as an important part of the project of organised natural philosophy.[3] Three centuries later, JD Bernal claimed that:

“By an intelligent diversion of warm ocean-currents together with some means of colouring snow so that the sun could melt it, it might be possible to keep the Arctic ice-free for one summer, and that one year might tip the balance and permanently change the climate of the northern hemisphere.”[4]

Such speculations would continue throughout the twentieth century. At the same time, as described exhaustively by James Fleming, enthusiasts and engineers of varying credibility promised control of the weather to desperate farmers and others who had fallen victim to climatic whims.[5] The story of these rainmakers moved slowly from mythology to respectable science. As the growth of computing power and global climate models promised greater predictive power over the weather, and as potent, world-changing technologies emerged during the two World Wars and the Cold War, some techno-optimists began to construct more detailed schemes. John von Neumann wanted to take ‘the first steps toward influencing the weather by rational, human intervention’.[6] As described by Kristine Harper, visions of control lay behind the rapid growth of meteorology as a science, although in public most scientists would emphasise that their aims were merely predictive.

Some of the earliest thinking on what came to be known as stratospheric particle injection came from Mikhail Budyko, a Russian who was one of the leading figures in the quantification of meteorology, a field that had previously been ridiculed as a ‘guessing science’.[7] In the 1970s, Budyko sketched a plan for increasing the reflectivity of the planet’s upper atmosphere using sulphate particles dispensed from aeroplanes.[8] It was this idea that, having lain all but dormant for 40 years, inspired the intervention of Paul Crutzen in 2006. Crutzen, a highly respected Nobel Laureate for his work on atmospheric ozone, argued that the problem of climate change was an intractable Gordian knot. Geoengineering with a stratospheric sunshade provided a sword, or, as he put it, ‘a contribution to resolve a policy dilemma’ (Crutzen, 2006).

Crutzen’s paper brought a veneer of respectability to what had previously been considered a Cold War joke. He argued that the technologies with which to geoengineer were cheap and readily available. Even scientists who hated the idea could not ignore the possibility that it might be put into action at some point.

Assessing geoengineering options

Oliver Morton begins his analysis of geoengineering by asking two questions, originally from Robert Socolow:

  1. Do you believe the risks of climate change merit serious action aimed at lessening them?

  2. Do you think that reducing an industrial economy’s carbon dioxide emissions to near zero is very hard?[9]

If we answer yes to both, Morton argues, we should take geoengineering seriously. As 21st-century climate scientists around the world watched policy responses fall short of the problem they had elucidated, they reluctantly began to agree.

For many of these scientists, geoengineering aroused particular concerns. International negotiations on climate change mitigation were fragile and geoengineering seemed to present a ‘moral hazard’: if insurance against the risks of climate change were on offer, people in power would surely become less interested in reducing greenhouse gas emissions. Scientists were not only concerned that geoengineering would be seen as a ‘get out of jail free’ card, they also worried that its deployment would have unintended consequences for global weather. In 2008, Alan Robock offered what he called a ‘fairly comprehensive list of reasons why geoengineering might be a bad idea’. The list was wide-ranging, encompassing politics, ethics and risks to local weather. Robock concluded that, in addition, ‘there is reason to worry about what we don’t know’.[10] He was one of the first wave of natural scientists to explore geoengineering seriously.

The move towards the scientific mainstream, coupled with growing attention from right-wing pundits in the USA who were eager for hassle-free solutions to the question of climate change, prompted the Royal Society (see diagram), the UK’s national academy of sciences, to take on geoengineering in 2008. Their assessment, which aimed to bring a cool scientific rationality to what had become a heated debate, set the tone for much subsequent discussion.[11]

[Figure: diagram from the Royal Society’s 2009 geoengineering report. Copyright Royal Society. Used with permission.]

The Royal Society divided geoengineering options into two categories, according to their proposed mechanism of intervention. The first, carbon dioxide removal, involves the reduction of greenhouse gas concentrations in the atmosphere with machines or by enhancing natural systems. The second, solar radiation management, bounces a proportion of sunlight back into space by making the Earth’s surface, clouds or upper atmosphere more reflective.

The Society assessed the various options against multiple criteria, including effectiveness, speed, cost and safety, and concluded, as Paul Crutzen had done, that stratospheric particle injection was the most potent and the cheapest, but also the riskiest, option available. Its overall judgement was that “all of the geoengineering methods assessed have major uncertainties in their likely costs, effectiveness or associated risks and are unlikely to be ready for deployment in the short to medium term”.[12] Nevertheless, scientific interest in stratospheric particle injection continued to grow,[13] in part because of an assumption that it would be, in David Keith’s words, ‘cheap and technically easy’.[14] Economist William Nordhaus was among the first to argue that, compared with decarbonisation of industrial society, geoengineering offered the potential for ‘costless mitigation of climate change’.[15] This economic enthusiasm was then, without much critical analysis, popularised in the book Superfreakonomics.[16] The history of similar sociotechnical systems suggests that the complexities and uncertainties associated with such cost estimates are vast.

In 2015, the US National Academies revisited geoengineering, renaming it ‘climate intervention’ because, in the committee’s opinion, the previous label “implies a greater level of precision and control than might be possible”. The US assessment echoed much of the Royal Society’s and further elucidated the problem identified by the UK body: the cheapest, most potent proposals for geoengineering were also those that were most ethically problematic and hardest to govern. It is notable that both the UK and US assessments included ethical and political concerns alongside more conventional technical ones.

For some (including an early reviewer of the Royal Society’s report), the profundity of the ethical and safety concerns raised by stratospheric particle injection warranted ruling it out altogether. Mike Hulme has argued that it represents ‘an illusory solution to the wrong problem’ and should therefore be taken off the table.[17] Whether in spite of or because of its Promethean connotations, stratospheric geoengineering continues to dominate geoengineering discussions.

 

Conclusion

The debate about geoengineering has tended to make technologies and ideas appear closer and more real than they in fact are. For any geoengineering technology to make a substantial difference to climate change, it would demand a dramatic reconfiguration of research, technology, society and politics. The debate is currently out of all proportion to the scale of actual research. Where research has been funded, it has tended to involve frictionless simulations in computer climate models and speculative social science and ethics. As of 2016, there is very little engineering in geoengineering.[18] There is still, therefore, an important discussion to be had about how we – as society and as scientific researchers – should proceed. Should outdoor experiments begin? Should patents on geoengineering technologies be allowed? Can geoengineering only be legitimately governed at the level of the United Nations? As with any set of complex technologies, hard predictions are doomed to fail. Geoengineering, perhaps in another guise or with more modest ambitions, may come to be an important part of the response to climate change, or it may eventually be regarded as nothing more than wild speculation. Watch this space.

 

 

Learning resources

Books

  • Hulme, M. (2014). Can science fix climate change? A case against climate engineering. John Wiley & Sons.
  • Keith, D. (2013). A case for climate engineering. MIT Press.
  • Morton, O. (2015). The Planet Remade: How geoengineering could change the world. Princeton University Press.
  • Stilgoe, J. (2015). Experiment earth: Responsible innovation in geoengineering. Routledge.

 


 

 

[1] Text of remarks by Obama science adviser John Holdren to the National Climate Adaptation Summit, May 27, 2010

[2] Kintisch, E. (2010). Hack the planet: Science’s best hope – or worst nightmare – for averting climate catastrophe. John Wiley & Sons, p. 13.

[3] Horton, Z. (2014) Collapsing Scale: Nanotechnology and Geoengineering as Speculative Media, in press

[4] Bernal, J. D. (1939). The Social Function of Science, Faber 2010 edition, pp. 379-380

[5] Fleming, J. (2010). Fixing the Sky: The Checkered History of Weather and Climate Control, Columbia University Press. New York

[6] Von Neumann, quoted in Harper, K. (2008). Weather by the numbers: The genesis of modern meteorology, MIT Press.

[7] Harper, K. (2006). Meteorology’s Struggle for Professional Recognition in the USA (1900–1950). Annals of Science, 63(2), 179–199.

[8] Budyko, M. I. (1974). Izmeniya Klimata. Leningrad: Gidrometeoizdat. Later published as: Budyko, M. I. (1977). Climatic Changes (translation of Izmeniya Klimata, 1974). Washington, DC: American Geophysical Union.

[9] Morton, O. (2015). The Planet Remade. Princeton University Press, p. 1.

[10] Robock, A. (2008). 20 reasons why geoengineering may be a bad idea. Bulletin of the Atomic Scientists, 64(2), 14–18, 59, p. 17.

[11] Royal Society, 2009, Geoengineering the Climate: Science, governance and uncertainty, London, Royal Society. (It is worth noting that this study was not the first assessment of geoengineering by a national academy. The US national academies addressed the issue, albeit in politically unsophisticated terms, in 1992 as part of an assessment of options for tackling climate change (NAS (1992) Policy Implications of Greenhouse Warming. National Academy Press, Washington).)

[12] Geoengineering the Climate: Science, Governance and Uncertainty (London: The Royal Society, September 2009), p. 57.

[13] Oldham, P., Szerszynski, B., Stilgoe, J., Brown, C., Eacott, B., & Yuille, A. (2015). Mapping the landscape of climate engineering. Philosophical Transactions of the Royal Society of London A, 372(2031), 1-20.

[14] Keith, D. (2013). A case for climate engineering. MIT Press, p. ix

[15] Nordhaus, W. D. (1992). An Optimal Transition Path for Controlling Greenhouse Gases. Science, 258, 1315–1319, p. 1317.

[16] Levitt, S. D., & Dubner, S. J. (2010). Superfreakonomics: Global cooling, patriotic prostitutes and why suicide bombers should buy life insurance. Penguin UK.

[17] Hulme, M. (2014). Can Science Fix Climate Change? A Case Against Climate Engineering. John Wiley & Sons, p. 130.

[18] Oldham, P., Szerszynski, B., Stilgoe, J., Brown, C., Eacott, B., & Yuille, A. (2015). Mapping the landscape of climate engineering. Philosophical Transactions of the Royal Society of London A, 372(2031), 1-20.


‘What a pleasure it is to be misled’: Thoughts on a year in the US

(This piece was originally published in Alchemy magazine, the annual newsletter of my department, Science and Technology Studies at UCL)


A Trump rally in Denver, Colorado (credit: Jack Stilgoe)

July, 2016: In an aircraft museum in Denver, against a backdrop of Cold War cast-offs, Donald Trump arrived, late, to a lukewarm reception. I had no trouble getting an easy view of the man who was, at that time, a mere curiosity. His audience seemed unthreatening, unlike at some of the rallies I had seen on TV. A few people were clearly angry and hungry for change, but their exchanges with the Clinton supporters protesting outside were civilised.

During Trump’s speech, I heard only incoherent nonsense: crime; war; his TV ratings; China; a poem about a snake; The Wall. At one point, he asked the crowd, “Do you guys want to do the ‘Lock Her Up’ thing?” They did, but they lacked gusto. I went away thinking that we had little to fear from this circus. As would become clear when he won the election four months later, I wasn’t attuned to his dog whistles.

The Trump rally marked the start of my year in the US, based at the University of Colorado, Boulder. In the Presidential election, Colorado was supposed to be a swing state, blending Republican and Democrat sensibilities. Boulder, however, is one of the most reliably Democrat-leaning places in the country (one family from outside what Coloradans call the ‘Boulder Bubble’ told me that the town was ’20 square miles surrounded by reality’). It is a prosperous, liberal college town. And it was a fantastic standpoint from which to watch the great American experiment produce a true anomaly.


I don’t think we’re in Boulder any more, Toto (credit: Jack Stilgoe)

When Trump was elected, Boulder was, along with much of liberal America, in shock. More than 80% of the city voted for Clinton. As for many Americans, there was a sense that people no longer knew their own country. For social scientists like me, it was a wake-up call – another surprise to add to the Brexit vote earlier in 2016. For some academics, Trump’s success was a sure sign of ‘post-truth’ politics. I was more interested in whether it was an expression of the frustrations of those left behind by recent American progress.

My aim was not to study elections but to study technology. I wanted to observe American cultures of innovation. I was particularly interested in self-driving cars – a technology replete with world-changing promise, surrounded by questions about risk, ethics and regulation. Along with other digital technologies, the explosion of interest in self-driving cars seemed to mark a shift in American corporate power, away from the manufacturing heartlands of the Midwest towards Silicon Valley.

California, like Colorado, is a place built on promise. During the Gold Rush, people were propelled West by stories of untold riches and unfettered freedoms. Some struck lucky, but the people making the real money were those, like Levi Strauss and Leland Stanford (who went on to found Stanford University), who were selling clothes, equipment and transport to the wide-eyed miners. Since then, for rich Americans, the Wild West has become the Mild West – its hardships replaced by comforts, many of which are technological. If tech is the new gold rush, its promise is not just that some will get rich quick; supposedly we will all benefit.

America is a highly unequal place, at times cruelly so. Technology, left to its own devices, risks exacerbating such inequalities, because those with power and money are the people most able to take advantage of technology’s benefits. An August 2016 editorial in Nature by science policy scholar Dan Sarewitz drew a direct connection between the rise of Trump and the unevenness of technological progress – a nationwide expression of trouble that had already surfaced around San Francisco as tech companies priced out their poorer neighbours. Sarewitz took issue with the laissez-faire attitude of US policymakers towards science and innovation; the assumption is that trickle-down innovation will float everyone’s boats.

 

Self-driving innovation

Twenty years ago, British sociologists Richard Barbrook and Andy Cameron identified what they called the ‘Californian ideology’. This cocktail of hippy counterculture and libertarianism seems only to have got stronger. When hippies were making computers in garages, this was inconsequential. When they became the new masters of the universe, their ideology started to matter. As with conventional conservatism, the risk with those who think government is bad is that they produce bad governments.

Much of Silicon Valley’s innovation has been in software. As Amazon, Facebook, Apple and Google have become the world’s largest companies, we are starting to see that their influence has effects in the material world too. While in the US, I developed a particular interest in Tesla, another Silicon Valley company, now leading the push towards self-driving cars.


Tesla Showroom in Denver (credit: Jack Stilgoe)

In a shopping mall in Denver, I turned up at a Tesla showroom hoping for a test-drive in a Tesla Model S. I particularly wanted to try out a feature – Autopilot – that promised to relieve drivers of some of their tedious responsibilities. A few months earlier, a man had died in a crash while his Tesla was in Autopilot. Evidence from videos posted to YouTube suggested that plenty of other drivers were similarly involved in a dangerous, chaotic experiment with autonomous driving. After the crash, Tesla were careful to tell people that they should keep their hands on the wheel, but my Tesla co-pilot told me that I would be fine going hands-free. An initially terrifying experience – allowing a machine to control a car travelling at 70mph – quickly became normal.

The American experiment with self-driving cars seemed to be prioritising freedom over public safety. (One could argue that the American experiment with the automobile in the 20th Century did the same. The US death rate per mile is more than three times worse than in the UK or Sweden). My question was whether there could be a more responsible alternative. Outside the big cities, US transport tends to privilege the privately-owned car. In Europe, where we think about transport differently, there are surely alternative versions of a self-driving future. Reimagining a self-driving future means being sceptical of Silicon Valley promises of technological enchantment. We often seem unwilling to do this. As leading sociologist of science Bruno Latour concluded in a Los Angeles Review of Books piece published in the wake of Trump’s victory, “what a pleasure it is to be misled”.

 


We need new rules for self-driving cars

I have a feature piece in Issues in Science and Technology. It makes the case for self-driving car regulation, beginning with questions of safety and expanding into questions of mobility and urban planning.

My argument focuses on the National Transportation Safety Board and the role that they played in social learning around the May 2016 Tesla Autopilot crash. After this crash, I wrote a piece for the Guardian asking ‘What will happen when a self-driving car kills a bystander?’ and a longer paper for Social Studies of Science. As the Issues in Science and Technology piece was going to press, the NTSB were called into action again because a self-driving Uber had run over and killed a pedestrian in Tempe, Arizona. Dan Sarewitz (the editor of Issues) and I added a small line to the end of the piece, but the details of the case are still not clear. Five days later, it was reported that another Tesla driver had been killed in a crash while his car was on Autopilot.

The landscape for US self-driving car development is currently a Wild West, a product of thoughtless policymakers competing for the attention of tech companies. The history of technology suggests that, too often, it takes a crisis to prompt regulatory movement.  ‘I told you so’ is not a good look, but those of us working on governance had hoped that this time it might be different.

If I were Volvo, GM or Ford, I would be intensely concerned about reputational risk and looking to distance myself from Tesla and Uber. But companies are not going to win public trust by themselves. The question now is how regulators can learn from technological mishaps and build credible governance. Calls for governance are growing in number and volume. Among the most interesting is a proposal from David Danks and Alex John London of Carnegie Mellon to follow the model of the Food and Drug Administration. Most carmakers would baulk at the thought of putting their cars through clinical trials before releasing them onto the open road. But some sort of pre-market approval now seems necessary.

 

(Read the full piece here: ‘We need new rules for self-driving cars’ (pdf), Issues in Science and Technology, Spring 2018).

 


What can a self-driving car crash teach us about the politics of machine learning?

This post is an excerpt from the paper Machine learning, social learning and the governance of self-driving cars, in Social Studies of Science. It was originally published on the Transmissions blog


(Image: NTSB simulation of crash scenarios, 2017)

 

In May 2016, a Tesla Model S was involved in what could be considered the world’s first self-driving car fatality. In the middle of a sunny afternoon, on a divided highway near Williston, Florida, Joshua Brown, an early adopter and Tesla enthusiast, died at the wheel of his car. The car failed to see a white truck that was crossing his path. While in ‘Autopilot’ mode, Brown’s car hit the trailer at 74 mph. The crash only came to public light in late June 2016, when Tesla published a blog post, headlined ‘A tragic loss’, that described Autopilot as being ‘in a public beta phase’.

Self-driving cars, quintessentially ‘smart’ technologies, are not born smart. Their brains are still not fully formed. The algorithms that their creators hope will allow them to soon handle any eventuality are being continually updated with new data. The cars are learning to drive.

Self-driving cars represent a high-stakes test of the powers of machine learning, as well as a test case for social learning in technology governance. Society is learning about the technology while the technology learns about society. Understanding and governing the politics of this technology means asking ‘Who is learning, what are they learning and how are they learning?’

Proponents of self-driving cars see machine learning as a way of compensating for human imperfection. Not only are humans unreliable drivers, they seem to be getting worse rather than better. After decades of falling road death numbers in the US, mostly due to improved car design and safety laws, rates have been increasing since 2010, probably due to phone-induced distraction. The rationalists’ lament is a familiar one. TS Eliot has the doomed archbishop Thomas Becket say in ‘Murder in the Cathedral’: ‘The same things happen again and again; Men learn little from others’ experience.’

Self-driving cars, however, learn from one another. Tesla’s cars are bought as individual objects, but commit their owners to sharing data with the company in a process called ‘fleet learning’. According to Elon Musk, the company’s CEO, ‘the whole Tesla fleet operates as a network. When one car learns something, they all learn it’, with each Autopilot user as an ‘expert trainer for how the autopilot should work’.
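
Tesla has not published the mechanics behind this claim, but the general pattern – each car logs what it encounters, the observations are pooled, a shared parameter is updated and pushed back out to every vehicle – can be sketched in miniature. The example below is a toy under invented assumptions (the logs, the parameter and the averaging rule are all made up for illustration), not a description of Tesla’s system.

```python
from statistics import mean

# Hypothetical fleet logs: the distance (in metres) at which each car's driver
# took over when approaching a stopped vehicle. All numbers are invented.
fleet_logs = {
    "car_a": [42.0, 38.5, 45.0],
    "car_b": [51.0, 47.5],
    "car_c": [39.0, 40.5, 44.0, 43.0],
}

def learn_shared_threshold(logs):
    """Pool every car's observations into a single fleet-wide parameter."""
    observations = [d for distances in logs.values() for d in distances]
    return mean(observations)

# The updated parameter would then be pushed back out, so that when one car
# 'learns something', every car in the fleet inherits it.
threshold = learn_shared_threshold(fleet_logs)
print(f"Fleet-wide intervention distance: {threshold:.1f} m")
```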

The promise is that, with enough data, this process will soon match and then surpass humans’ abilities. The approach makes the dream of automotive autonomy seem seductively ‘solvable’. It also represents a privatization of learning.

As work by scholars such as Charles Perrow and Brian Wynne has revealed, technological malfunctions are an opportunity for the reframing of governance and the democratization of learning. The official investigations of and responses to the May 2016 Tesla crash represent a slow process of social learning.

The first official report of the May 2016 crash, from the Florida police, put the blame squarely on the truck driver for failing to yield the right of way. However, the circumstances of the crash were seen as sufficiently novel to warrant investigations by the National Transportation Safety Board (NTSB) and the National Highway Traffic Safety Administration (NHTSA). The NTSB is tasked with identifying the probable cause of every air accident in the US, as well as some highway crashes.

The NTSB’s preliminary report was matter-of-fact. It relates that, at 4:40pm on a clear, dry day, a large truck carrying blueberries crossed US Highway 27A in front of the Tesla, which failed to stop. The Tesla passed under the truck, shearing off the car’s roof. The collision cut power to the wheels and the car coasted off the road for 297 feet before hitting and breaking a pole, turning sideways and coming to a stop. Brown was pronounced dead at the scene. The truck was barely damaged.

The NHTSA saw the incident as an opportunity for a crash course in self-driving car innovation. Its Office of Defects Investigation wrote to Tesla demanding data on all of the company’s cars, instances of Autopilot use and abuse, customer complaints, legal claims, a log of all technology testing and modification in the development of Autopilot and a full engineering specification of how and why Autopilot does what it does.

In January 2017, the NHTSA issued its report on the crash. The agency’s initial aim was to ‘examine the design and performance of any automated driving systems in use at the time of the crash’. The technical part of their report emphasized that the Tesla Autopilot was a long way from full autonomy. A second strand of analysis focused on what the NHTSA called ‘human factors’. The agency chose to direct its major recommendation at users: ‘Drivers should read all instructions and warnings provided in owner’s manuals for ADAS [advanced driver-assistance systems] technologies and be aware of system limitations’. The report followed a pattern, familiar in STS, of blaming sociotechnical imperfection on user error: humans, as anthropologist Madeleine Clare Elish has described, become the ‘moral crumple zone’.

The NTSB went further, recognizing the opportunity to learn. The Board sought to clarify that the Tesla Model S didn’t meet the technical definition of a ‘self-driving car’, but blamed the confusion on the company as well as the victim. Its final word on the probable cause of the Tesla crash added a concern with Autopilot’s ‘operational design, which permitted [the driver’s] prolonged disengagement from the driving task and his use of the automation in ways inconsistent with guidance and warnings from the manufacturer’. Tesla, in the words of the NTSB chair, ‘did little to constrain the use of autopilot to roadways for which it was designed’.

Dominant approaches to machine learning still represent a substantial barrier to governance. When the NTSB conducted its investigation, it found a technology that was dripping with data and replete with sensors, but offering no insight into what the car thought it saw or how it reached its decisions. The car’s brain remained largely off-limits to investigators. At a board meeting in September 2017, one NTSB staff member explained: ‘The data we obtained was sufficient to let us know the [detection of the truck] did not occur, but it was not sufficient to let us know why.’ The need to improve social learning goes beyond accident investigation. If policymakers want to maximize the public value of self-driving car technology, they should be intensely concerned about the inscrutability and proprietary nature of machine learning systems.


Why we must scrutinise the magical thinking behind geoengineering


(This piece was originally written in the wake of the Paris Agreement. I’ve tidied it up and put it here for safe-keeping).

‘In time of trouble, I had been trained since childhood, read, learn, work it up, go to the literature. Information is control.’

Joan Didion, The Year of Magical Thinking

 

In The Year of Magical Thinking, novelist Joan Didion reflects on her attempts to think through the grief brought about by her husband’s death and her daughter’s illness. In trying to understand the incomprehensible and control the uncontrollable, she finds herself victim to a childlike belief that she can wish her way to a new reality. This magical thinking seems egregiously unscientific, and yet, when it comes to new technologies, our finest minds can succumb to something similar.

It is hard to know whether to be optimistic or pessimistic about the Paris agreement, whether to celebrate the consensus on new possibilities, mourn the opportunities already missed, or do both at once. This ambivalence is complicated by the strong thread of optimism that is woven into the future to which we have now become signatories. The agreement inscribes a set of technological promises that have received little democratic scrutiny. If we are to deliver on the vision of Paris, we must urgently confront the politics of radical innovation.

Imaginary technologies

A few commentators have pointed out that the projections of future climate change that provide the scientific ingredients of the Paris agreement are themselves based on political choices. The target of two degrees’ warming has been an anchor for climate negotiations for a long time, even as emissions have continued to rise. The arithmetic of the ‘Intended Nationally Determined Contributions’ to climate change mitigation agreed in Paris does not currently add up. The gap between expectation and action has been filled by a new sort of hope: that technological means will emerge to extract greenhouse gases from our atmosphere. Imaginary ‘negative emissions technologies’ are built into all of the IPCC scenarios that point to less than two degrees’ warming.
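
Put schematically (the notation is mine, not the IPCC’s), the accounting only balances if a large removals term is assumed: the cumulative emissions implied by the pledges exceed the budget compatible with two degrees, and imagined future ‘negative emissions’ are what close the gap.

$$\underbrace{\sum_{t} E_t^{\text{pledged}}}_{\text{emissions implied by the INDCs}} \;-\; \underbrace{\sum_{t} R_t^{\text{assumed}}}_{\text{future negative emissions}} \;\leq\; B_{2^{\circ}\mathrm{C}}$$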

Keeping global warming below this target, let alone the 1.5-degree aspiration, will demand extraordinary innovation in order to develop systems that not only produce no greenhouse gases, but actively remove them from the atmosphere. In order to secure a climate consensus, we have become signatories to a future that is radically different from the past; we have invested our hopes in emerging technologies about which very little is known. So-called ‘geoengineering’ is profoundly uncertain. It brings with it political and ethical baggage, largely ignored in the Paris negotiations. The more controversial geoengineering proposals known as ‘Solar Radiation Management’ (SRM) have been put to one side for the time being, but planetary-scale Carbon Dioxide Removal also represents a form of magical thinking.

The sociology of expectations

Recognising the power of technological promises to shape our world, sociologists are turning their attention to how futures get imagined. The ‘sociology of expectations’ would say that technological visions, often delivered by those who claim privileged access to emerging innovations, are not mere predictions. Instead, they are a form of performance. Futures may be advertised as inevitable, just around the corner or already here in order to make them more likely. A recent commercial for a Mercedes self-driving car runs like this: ‘Is the world truly ready for a vehicle that can drive itself? Ready or not, the future is here. Introducing the 2017 E-Class from Mercedes-Benz, a concept car that’s already a reality.’ The easiest way to sell a future technology is to pretend that it’s already possible.

Science and technology need a degree of salesmanship. Hype is built up around genomics, nanotechnology and other fields in order to attract attention and investment. Hidden from view is the otherwise obvious fact that the future is a product of choices made in the present. Moore’s Law, to take an example, is presented as though it were a law of nature: computers will keep doubling their power over a fixed period. This ‘law’ is in reality a roadmap, the product of conscious decision-making by the semiconductor industry. A recent article in Nature described the effort, organisation and investment required to make exponential technical change seem inevitable. It is only now that semiconductor innovation appears to be running out of physical space on silicon chips that the industry is publicly discussing alternative directions for innovation.
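
In its popular form, the ‘law’ is nothing more than an assumed exponential – a statement that looks like physics but is really a target the industry set for itself:

$$P(t) \approx P_0 \cdot 2^{\,t/T}$$

where $P_0$ is today’s computing power, $t$ is elapsed time and $T$ is the doubling period, conventionally around two years.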

The naturalising of technological progress is disempowering, and it leads to some very bad policy decisions. Expectations around new technologies tend to exclude complexity. The detail about what it would take to realise a particular technological future, which might draw in an unpredictable cast of innovators, consumers, users, regulators, protesters, artists, designers and others, is underdrawn, in part because it is profoundly uncertain and in part because a more accurate picture would obscure the interests of technological optimists. This leads to a paradox: the earlier and more uncertain the technology, the less evidence there is to constrain the hype around it. If downsides are presented, they are often of an apocalyptic flavour, as with the recent concerns expressed by Elon Musk, Stephen Hawking and co that Artificial Intelligence poses an existential threat to humanity. The mundane but more powerful ways in which technologies differentially benefit people do not feature.

It is only once technological promises meet scientific, political and public attention that their complexities start to be made apparent. A wider lens sees that Moore’s law is the exception rather than the rule. Nuclear energy, once deemed ‘too cheap to meter’, has only become more expensive over time as its full social complexity and long-term costs are realised. New waves of technological optimism are accompanied by an amnesia, a sense that, this time, it will be different and our hype will be justified.

Technological fixes

In a recent book, former president of the European Research Council Helga Nowotny defines a technological promise as ‘a risk-free mortgage on the future’. She quotes Hannah Arendt, who argued that ‘By bringing the promised future into the present we are able to create reliability and predictability which would otherwise be out of reach’. The promises we make to ourselves about the future could be said to be a defining feature of human modernity. The trouble is with our hypocrisy: we treat the promises of politicians with extreme scepticism, but those surrounding technology are harder to argue with. Unlike other parts of climate agreements, our technological promises can never be legally binding. Morality and responsibility are vital.

Until we ask difficult questions, new technologies are politically seductive because they seem to offer a way out of politically intractable swamps. If politics is a Gordian knot then technology promises a sword with which to cut through. We do not need to deal with the difficulties of incumbent interests if technologies can avoid such dilemmas. In some cases, our fixes work fine. Vaccines, on the whole, offer a clean solution to the problem of some infectious diseases. But in many cases, technological fixes are ugly, blunt instruments, not sharpened swords.

Geoengineering at first glance offers a handy get-out-of-jail-free card for climate change, the most wicked of wicked problems. But, for anyone who pays closer attention, the fix looks deeply flawed. The most melodramatic geoengineering proposals include schemes to inject sulphate particles into the stratosphere, attempting to mimic the effects of massive volcanic eruptions like that of Tambora in 1815, which cooled the Earth for two years. Scientists have been quick to point out the flaws in this idea. While lowering average global temperatures, we would also be likely to disrupt regional weather, as Tambora’s eruption did during 1816 – the ‘year without a summer’; oceans would continue to acidify as carbon dioxide built up beneath our sunshade; and the offer of a technological fix would surely destabilise fragile international attempts at climate change mitigation.

For all of the discussion of geoengineering’s impacts, however, there has been relatively little interest in the question of whether it is at all possible. Scientists have been happier to run with the speculation. David Keith, the world’s most prominent SRM geoengineering researcher, opens his book by arguing that

‘It is possible to cool the planet by injecting reflective particles of sulfuric acid into the upper atmosphere… it is cheap and technically easy’.

The IPCC’s Fifth Assessment Report in 2013 was concerned about possible ‘side effects’, but still decided to include geoengineering in the policymakers’ summary of Working Group 1, which assesses the ‘physical science basis’ for climate change, rather than policy responses. Only six years earlier, the IPCC had dismissed such ideas as ‘speculative and unproven’. Where interdisciplinary assessments have begun, such as those of the Royal Society or the UK Integrated Assessment of Geoengineering Proposals project, the scale of our uncertainty becomes clear. SRM only looks ‘cheap and technically easy’ if we choose not to ask about its full costs.

Few people think that carbon dioxide removal would be cheap and easy. Reversing the unintended exhausts of the industrial revolution, at speed, presents an unprecedented global engineering challenge. Nevertheless, some are optimistic that tweaks to natural systems – regrowing forests, fertilising the oceans or enhancing the weathering of rocks – or new machines to suck CO2 from the air, will be able to significantly counteract our emissions. For the IPCC, the promise of negative emissions has been domesticated in the form of BECCS – biomass energy with carbon capture and storage – which is seen as the most realistic idea currently on the table. But even with these more modest proposals, the gap between future and present, between idea and workable system, is vast. To see the difficulties, one need only look to those seeking to make small carbon capture and storage plants workable and publicly acceptable.

This doesn’t mean that technology won’t be vital. The future does not look like the past. Indeed, climate action is predicated on the possibility of a future that corrects past mistakes. Innovation will be central to this, and innovation is profoundly unpredictable. The collective energy that gathered behind the ‘Mission Innovation’ clean energy initiative in Paris suggests a promising complement to geopolitical negotiation.

However, if we presume that technologies of the future offer a clean break from our past troubles, we will only be disappointed. The Paris agreement has, by quietly signing up to the promise of negative emissions, sidestepped, and therefore postponed, tough political choices. By including these imaginary technologies in its projections, it forces them closer to reality. We therefore need to urgently ask what it would take to make such ideas work, how to understand their uncertainties and how we might govern them.

The illusion of control

Joan Didion’s eventual conclusion was that the answers were not in the literature, that information was not the same as control. She saw her magical thinking as the product of a culture that forbids grief and demands optimism. While a sense of optimism is important for climate action, we must also maintain our ability to analyse. The IPCC has become a model for the rigour of its scientific treatment of past and possible future climates. Being more scientific about the nature of innovation means resisting what the Royal Society identified as ‘appraisal optimism’. The tools of technology assessment, incorporating the values and knowledge of stakeholders and members of the public as well as experts, need to be applied to the promise of greenhouse gas removal, and this evidence needs to be weighed alongside that from economists and climate scientists. We must be as transparent and sceptical about our technological assumptions as we are about those behind climate science. The illusion of control is a dangerous thing. Wishing for a perfect solution can mean ignoring imperfect but necessary political compromises.


Thoughts on the March for Science

I went along to the March for Science in Denver. I was there in part as an observer. For someone who studies the relationship between science and politics, this was a rare opportunity to see public displays of affection and annoyance that are usually private. For social scientists who study science, there has already been plenty to observe and to criticise in the positioning and framing of this march. Much of the criticism misses the point. Notwithstanding anti-nuclear demonstrations in the 80s, this was perhaps the biggest science-centred protest in history, bringing together thousands of people in more than 500 cities around the world. When I first heard about it, I was worried that its motivations were hopelessly unclear. Having attended, I relaxed.

We should not be surprised that the planning of a mass mobilisation of scientists and people who care about science is riddled with hypocrisy, confusion and error. This is politics. Some scientists might claim that they are merely marching for truth. But most would admit that they were a constituency that shared a set of broad political values too. And, as John Holdren argued on the day, they need not apologise for these.

One of many wonderful things about the institutions of contemporary science in democracies is that they are able to support contradiction, uncertainty, doubt and disagreement. Many of us in the social sciences would like to see these qualities democratised. We would like greater consideration by scientists of the profound unresolved issues within what we call ‘science’. We would draw attention to the tension between what Sheila Jasanoff has differentiated as ‘truth’ and ‘gain’ as the two grand justifications for science. (Scientists, in political debates, often thrust with claims about technology and progress but, when challenged, parry with claims about truth and objectivity).

However, the March for Science that I saw, rather than being a representation of science’s issues, was a rare opportunity to talk about them. Before it began, the organisers faced difficult questions about diversity, often overlooked when science stays in the lab. Alongside the call to get science more involved in politics, there were calls to talk about the politics of science. The march has been seen by very few as the end of the conversation. As both Roger Pielke Jr and Andrew Maynard suggest, the real question is what comes next. Yes, there were plenty of placards claiming that, when it comes to climate change, “the debate is over”, but the March for Science seemed more interested in opening up than closing down. And scientists, while they could do with a few tips on political messaging, do come up with some funny (albeit niche) slogans.

Here are some of my photos

 

Well done this man

“Trump pipettes with two hands”. Scathing.

A tie-dye labcoat in front of the State Capitol. How very Colorado.

At this rate, there’s a chance this man may end up with the job.

Here I am with two borrowed Chemtrail signs

The placards got even more complicated. Very few people would have got the joke. I had to get him to explain it, which he did with extreme patience. Ask a physicist what the rate of change of acceleration is called.

“Science Trumps Politics”. Discuss. (I didn’t have the heart. They were such a nice family.)

This placard would have got a higher grade in my University of Colorado graduate science policy class.

While this boy was adding nuance to Denver’s science policy debate, my own kids were playing with dry ice

The Lorax is a big part of US Earth-Day culture. Ahem and Ahem. One of its most interesting messages is that technology can be part of the problem as well as the solution…

… an issue that was taken up by this guy. Great question. I love that he brought it to a science march.

And finally, Earth Day Yoga


Talking ‘tech’ on The World This Weekend

At the start of 2017, I ventured through the snow to KGNU, Boulder’s community radio station, to record an interview with Mark Mardell for The World This Weekend on the BBC.

They edited out my citation for the quote at the start, which is a little embarrassing. I took it from this tweet:

https://twitter.com/azizshamim/status/595285234880491521


Podcast on self-driving cars

During a visit to the wonderful people at Arizona State University’s School for the Future of Innovation and Society, Andrew Maynard and Heather Ross invited me into the podcast booth to talk about self-driving cars.

This connects to a piece on the Guardian Political Science blog
