Abstract
A critically important ethical issue facing the AI research community is how AI research and AI products can be responsibly conceptualised and presented to the public. A good deal of fear and concern about uncontrollable AI is now on display in public discourse. Public understanding of AI is being shaped in a way that may ultimately impede AI research. Public discourse, as well as discourse among AI researchers, gives rise to at least two problems: a confusion about the notion of ‘autonomy’ that induces people to attribute to machines something comparable to human autonomy, and a ‘sociotechnical blindness’ that hides the essential role played by humans at every stage of the design and deployment of an AI system. Our purpose here is to develop and use language aimed at reframing the discourse on AI and shedding light on the real issues in the discipline.
Johnson, D.G., Verdicchio, M. Reframing AI Discourse. Minds & Machines 27, 575–590 (2017). https://doi.org/10.1007/s11023-017-9417-6