Acknowledging AI’s dark side

This is the text of a recent letter published in Science, for those who can’t get behind the paywall.
Science 4 September 2015:
Vol. 349 no. 6252 p. 1064
DOI: 10.1126/science.349.6252.1064-c

The 17 July special section on Artificial Intelligence (AI) (p. 248), although replete with solid information and ethical concern, was biased toward optimism about the technology.

The articles concentrated on the roles that the military and government play in “advancing” AI, but did not include the opinions of any political scientists or technology policy scholars trained to think about the unintended (and negative) consequences of governmental steering of technology. The interview with Stuart Russell touches on these concerns (“Fears of an AI pioneer,” J. Bohannon, News, p. 252), but, as a computer scientist, Russell focuses his solutions on improved training. Yet even the best training will not protect against market or military incentives to stay ahead of competitors.

Likewise double-edged was M. I. Jordan and T. M. Mitchell’s desire “that society begin now to consider how to maximize” the benefits of AI as a transformative technology (“Machine learning: Trends, perspectives, and prospects,” Reviews, p. 255). Given the grievous shortcomings of national governance and the even weaker capacities of the international system, it is dangerous to invest heavily in AI without political processes in place that allow those who support and oppose the technology to engage in a fair debate.

The section implied that we are all engaged in a common endeavor, when in fact AI is dominated by a relative handful of mostly male, mostly white and East Asian, mostly young, mostly affluent, highly educated technoscientists and entrepreneurs and their affluent customers. A majority of humanity is on the outside looking in, and it is past time for those working on AI to be frank about it.

The rhetoric was also loaded with positive terms. AI presents a risk of real harm, and any serious analysis of its potential future would do well to unflinchingly acknowledge that fact.

The question posed in the collection’s introduction—“How will we ensure that the rise of the machines is entirely under human control?” (“Rise of the machines,” J. Stajic et al., p. 248)—is the wrong question to ask. There are no institutions adequate to “ensure” it. There are no procedures by which all humans can take part in the decision process. The more important question is this: Should we slow the pace of AI research and applications until a majority of people, representing the world’s diversity, can play a meaningful role in the deliberations? Until that question is part of the debate, there is no debate worth having.

Christelle Didier¹, Weiwen Duan², Jean-Pierre Dupuy³, David H. Guston⁴, Yongmou Liu⁵, José Antonio López Cerezo⁶, Diane Michelfelder⁷, Carl Mitcham⁸, Daniel Sarewitz⁹, Jack Stilgoe¹⁰, Andrew Stirling¹¹, Shannon Vallor¹², Guoyu Wang¹³, James Wilsdon¹¹, Edward J. Woodhouse¹⁴,*

¹Lille University, Education, Lille, 59653, France.
²Institute of Philosophy, Chinese Academy of Social Sciences, Beijing, 100732, China.
³Department of Philosophy, Ecole Polytechnique, Paris, 75005, France.
⁴School for the Future of Innovation in Society, Arizona State University, Tempe, AZ 85287-5603, USA.
⁵Department of Philosophy, Renmin University of China, Beijing, 100872, China.
⁶Department of Philosophy, University of Oviedo, Oviedo, Asturias, 33003, Spain.
⁷Department of Philosophy, Macalester College, Saint Paul, MN 55105, USA.
⁸Liberal Arts and International Studies, Colorado School of Mines, Golden, CO 80401, USA.
⁹Consortium for Science, Policy, and Outcomes, Arizona State University, Washington, DC 20009, USA.
¹⁰Department of Science and Technology Studies, University College London, London, WC1E 6BT, UK.
¹¹Science Policy Research Unit, University of Sussex, Falmer, Brighton, BN1 9SL, UK.
¹²Department of Philosophy, Santa Clara University, Santa Clara, CA 95053, USA.
¹³Department of Philosophy, Dalian University of Technology, Dalian, 116024, China.
¹⁴Department of Science and Technology Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA.
*Corresponding author. E-mail: woodhouse@rpi.edu

About Jack Stilgoe

Jack Stilgoe is a senior lecturer in science policy in the Department of Science and Technology Studies, University College London.

One Response to Acknowledging AI’s dark side

Nullius says:

    “The more important question [about the possible dangers of AI] is this: Should we slow the pace of AI research and applications until a majority of people, representing the world’s diversity, can play a meaningful role in the deliberations?”

    Think of the bearpit of US capitalism. Is this even possible? Couldn’t we say the same for climate change or nuclear proliferation? Idealistic wishes are fine, but if we don’t accept and work with the gritty realities of the world we risk the fate of Corbyn – irrelevance. Moreover, given the depressing state of consciousness among the public, we should not assume that the great unwashed will make the best choices. Too many make terrible choices for their kids, for themselves, and for their countries. Especially when the issues at hand are complicated and technical.

    The fact is, AI research is romping on, and will continue to do so because the rewards for the first team to build even a semi-strong AI will be Vast. As Sam Harris says, beating the competition by even a week or two may well make all the difference between being a trillionaire or someone else’s employee. Now, it may be that after the third or fourth week of its existence, a self-improving AI system will control the world, even if it is giving us all sorts of goodies. As Martin Ford and Nick Bostrom point out in their books, there is no way to ensure this does not happen beforehand, short of enforcing a moratorium on this research today.
