Vol. 349 no. 6252 p. 1064
The 17 July special section on Artificial Intelligence (AI) (p. 248), although replete with solid information and ethical concern, was biased toward optimism about the technology.
The articles concentrated on the roles that the military and government play in “advancing” AI, but did not include the opinions of any political scientists or technology policy scholars trained to think about the unintended (and negative) consequences of governmental steering of technology. The interview with Stuart Russell touches on these concerns (“Fears of an AI pioneer,” J. Bohannon, News, p. 252), but, as a computer scientist, Russell focuses his solutions on improved training. Yet even the best training will not protect against market or military incentives to stay ahead of competitors.
Likewise double-edged was M. I. Jordan and T. M. Mitchell’s desire “that society begin now to consider how to maximize” the benefits of AI as a transformative technology (“Machine learning: Trends, perspectives, and prospects,” Reviews, p. 255). Given the grievous shortcomings of national governance and the even weaker capacities of the international system, it is dangerous to invest heavily in AI without political processes in place that allow those who support and oppose the technology to engage in a fair debate.
The section implied that we are all engaged in a common endeavor, when in fact AI is dominated by a relative handful of mostly male, mostly white and East Asian, mostly young, mostly affluent, highly educated technoscientists and entrepreneurs and their affluent customers. A majority of humanity is on the outside looking in, and it is past time for those working on AI to be frank about it.
The rhetoric was also loaded with positive terms. AI presents a risk of real harm, and any serious analysis of its potential future would do well to unflinchingly acknowledge that fact.
The question posed in the collection’s introduction—“How will we ensure that the rise of the machines is entirely under human control?” (“Rise of the machines,” J. Stajic et al., p. 248)—is the wrong question to ask. There are no institutions adequate to “ensure” it. There are no procedures by which all humans can take part in the decision process. The more important question is this: Should we slow the pace of AI research and applications until a majority of people, representing the world’s diversity, can play a meaningful role in the deliberations? Until that question is part of the debate, there is no debate worth having.
- Christelle Didier^1,
- Weiwen Duan^2,
- Jean-Pierre Dupuy^3,
- David H. Guston^4,
- Yongmou Liu^5,
- José Antonio López Cerezo^6,
- Diane Michelfelder^7,
- Carl Mitcham^8,
- Daniel Sarewitz^9,
- Jack Stilgoe^10,
- Andrew Stirling^11,
- Shannon Vallor^12,
- Guoyu Wang^13,
- James Wilsdon^11,
- Edward J. Woodhouse^14,*
1. Lille University, Education, Lille, 59653, France.
2. Institute of Philosophy, Chinese Academy of Social Sciences, Beijing, 100732, China.
3. Department of Philosophy, Ecole Polytechnique, Paris, 75005, France.
4. School for the Future of Innovation in Society, Arizona State University, Tempe, AZ 85287-5603, USA.
5. Department of Philosophy, Renmin University of China, Beijing, 100872, China.
6. Department of Philosophy, University of Oviedo, Oviedo, Asturias, 33003, Spain.
7. Department of Philosophy, Macalester College, Saint Paul, MN 55105, USA.
8. Liberal Arts and International Studies, Colorado School of Mines, Golden, CO 80401, USA.
9. Consortium for Science, Policy, and Outcomes, Arizona State University, Washington, DC 20009, USA.
10. Department of Science and Technology Studies, University College London, London, WC1E 6BT, UK.
11. Science Policy Research Unit, University of Sussex, Falmer, Brighton, BN1 9SL, UK.
12. Department of Philosophy, Santa Clara University, Santa Clara, CA 95053, USA.
13. Department of Philosophy, Dalian University of Technology, Dalian, 116024, China.
14. Department of Science and Technology Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA.