Artificial intelligence (AI) raises some of the most pressing ethical questions of any technology today. This is due to many factors, including the nearly ubiquitous influence it will have over many areas of our lives, the immense power it will give to those who govern it, and its somewhat unpredictable nature. AI will have an immense ethical impact both because of the potential harms it can unleash and because of the immense good it can bring about. I will here present a brief overview of some of the most significant ethical issues facing the continued development and implementation of artificial intelligence.
Interest in the ethical dimensions of AI has increased dramatically in recent years. There is almost daily reporting on ethical issues relevant to the development and application of AI. Recognizing the importance of addressing these issues, major companies such as Amazon, Google, Facebook, DeepMind, Microsoft, and IBM came together in 2016 to create the “Partnership on AI to Benefit People and Society.” DeepMind, an AI company (now owned by Alphabet/Google) famous for its AlphaGo program that beat the world’s leading Go player, has its own “DeepMind Ethics & Society” unit. Many prominent individuals, including Elon Musk (Tesla, SpaceX), the late Stephen Hawking, and some 8,000 others, signed an open letter urging that AI research priorities include keeping AI robust and beneficial. There is thus a clear recognition of the importance of focusing on the ethical implications of artificial intelligence.
Positive Ethical Possibilities for AI
Developments in artificial intelligence and machine learning provide some of the most exciting breakthroughs in technology, medicine, and many other fields. There are many incredibly promising possibilities, including better medical diagnoses, self-driving cars projected to significantly lower the rate of serious traffic accidents, and improvements in education, among many other areas. To get a sense of the incredible possibilities associated with AI, one need only listen to Sebastian Thrun, the founder of Google X and of Google’s self-driving car project and a co-founder of Udacity, who provides a highly optimistic appraisal of the endless possibilities for good that can come from advances in AI. Thrun highlights what he says is only a small sample of the many incredible uses of AI. He first points to a recent application of AI to the diagnosis of skin cancer. At Stanford, he led a team that created an artificially intelligent diagnostic algorithm for skin cancer detection. By combining visual processing and deep learning, a machine-learning technique built on layered neural networks, they were able to match or outperform the diagnoses of some of the world’s leading dermatologists. There are, of course, other promising medical applications of deep learning, from radiology to tumor detection and beyond. Thrun argues that the most positive benefits will come from the way in which AI will allow us to be more creative, because it will master the repetitive tasks that take up so much time in the day-to-day life of so many people. Of course, if such AI technology becomes broadly implemented, this will also raise another ethical issue concerning work displacement: what about the dermatologists (or others) who might be out of a job? We will return to this question in the second part.
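To give a sense of how such a diagnostic system is typically built, here is a minimal sketch of the standard transfer-learning recipe for medical image classification. It is a simplified illustration, not the Stanford team’s actual model or data: a network pre-trained on everyday photographs is reused, and only a new final layer is trained to separate lesion classes.

```python
# A minimal transfer-learning sketch for image diagnosis (illustrative
# only; not the Stanford team's actual model or data).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False  # freeze the pre-trained visual features

# Replace the 1000-class head with a 2-class head (benign vs. malignant).
model.fc = nn.Linear(model.fc.in_features, 2)

# The training loop is elided; given labeled lesion images, only the new
# head's weights would be updated. A forward pass with dummy images:
dummy_batch = torch.randn(4, 3, 224, 224)  # four fake RGB images
print(model(dummy_batch).shape)            # torch.Size([4, 2])
```

The point of the recipe is that most of the “visual processing” is inherited from pre-training; the medical data only has to teach the final classification step.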
One of the most interesting and perhaps controversial applications of AI is its use in self-driving or autonomous vehicles. The main ethical argument for creating self-driving cars is that an estimated 94% of serious car accidents involve human error, which self-driving cars would hopefully remove. Such autonomous vehicles will suffer from neither fatigue, nor drunk driving, nor the kinds of distraction human drivers encounter. There are also some serious ethical questions with regard to self-driving cars. One of the issues is how they will be programmed to respond in unavoidable-harm scenarios. Should they be programmed always to minimize harm, even if that means crashing into a wall and killing the driver and passengers in order to avoid killing a larger number of pedestrians? Studies have shown that only a minority of people would be willing to buy such utilitarian-programmed self-driving cars, which would sacrifice the driver or passengers in order to minimize harm. Thus, ironically, for consequentialist-utilitarian purposes it might be best to allow such cars to operate in a more ethically egoistic manner: if utilitarian cars do not sell, the far greater harm reduction that comes from replacing human drivers never materializes. There are many positive possibilities even beyond fewer car accidents, including a reduction in the stress of driving (imagine reading a book on the way to work), less traffic congestion, and greater transportation options for those who are unable to drive for various reasons (e.g., those who are non-sighted). These same benefits apply to the potential for self-flying cars as well.
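To make the programming dilemma concrete, here is a toy sketch, with invented numbers and no pretense of being a real autonomous-vehicle control policy, comparing casualties under a harm-minimizing (“utilitarian”) rule and an occupant-protecting (“egoistic”) rule in a single unavoidable-harm scenario.

```python
# Toy illustration of the unavoidable-harm dilemma (invented numbers;
# not a real autonomous-vehicle control policy).

def casualties(policy: str, pedestrians_at_risk: int, occupants: int) -> int:
    """Casualties in an unavoidable-harm scenario under a given policy."""
    if policy == "utilitarian":
        # Minimize total deaths: sacrifice the occupants whenever they
        # number fewer than the pedestrians at risk.
        return min(occupants, pedestrians_at_risk)
    if policy == "protect_occupants":
        # Always protect the car's occupants, whatever the pedestrian count.
        return pedestrians_at_risk
    raise ValueError(f"unknown policy: {policy}")

scenario = {"pedestrians_at_risk": 3, "occupants": 1}
for policy in ("utilitarian", "protect_occupants"):
    print(policy, "->", casualties(policy, **scenario), "deaths")
# utilitarian -> 1 death; protect_occupants -> 3 deaths. The per-crash
# arithmetic favors the utilitarian rule, but if buyers reject such cars,
# the larger harm reduction from replacing human drivers is lost.
```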
There are many other positive consequences of AI. The McKinsey Global Institute published a study last year outlining some of the many important contributions that will come from AI, from health care to manufacturing, retail, and education. This past March, I accompanied a team of four students (as an advisor) to Seattle in order to compete in the Milgard Invitational Corporate Social Responsibility case competition at the University of Washington (Tacoma). The topic this year was “Microsoft and the Future of AI.” These students were tasked with coming up with a CSR plan (in 72 hours, and without any outside help) which complemented and expanded Microsoft’s existing CSR initiatives. They chose to focus on how Microsoft could use artificial intelligence to enhance two of its “TechSpark” initiatives: enhancing digital literacy and job creation. As part of their presentation, the team focused on the way in which AI will be key in “adaptive learning.” Such adaptive learning involves computer-assisted, AI-driven learning, which, as one of our students, Harrison Miner, pointed out, “will be able to learn and adapt to [an] individual’s learning styles and the pace at which they learn,” with supposedly 90% higher engagement than traditional learning (you can see their first-place-winning presentation here). There will thus be many opportunities to use such adaptive learning in both education and job development.
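As a rough illustration of the pacing logic behind such systems, here is a minimal sketch. It is my own toy example with invented step sizes, not the students’ design or any real product: the lesson raises or lowers item difficulty after each answer, so the material tracks the individual learner’s pace.

```python
# Toy sketch of adaptive-learning pacing (invented step sizes; not any
# real product's algorithm).

class AdaptiveLesson:
    def __init__(self, difficulty: float = 0.5):
        self.difficulty = difficulty  # 0.0 (easiest) .. 1.0 (hardest)

    def record_answer(self, correct: bool) -> None:
        # Step difficulty up on success and down (more sharply) on
        # failure, so struggling learners get easier material sooner.
        step = 0.05 if correct else -0.10
        self.difficulty = min(1.0, max(0.0, self.difficulty + step))

lesson = AdaptiveLesson()
for answer in [True, True, False, True]:
    lesson.record_answer(answer)
print(f"next-item difficulty: {lesson.difficulty:.2f}")  # 0.55
```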
There are yet many other positive uses for AI. As Bernard Marr points out, artificial intelligence in conjunction with big data will have a significant impact for good in the energy industry (in optimizing “the use of resources and safety and reliability of oil and gas production and refining”), in the financial industry (by helping to detect fraud), in manufacturing, the service industries, and even in the creative arts.
Negative Ethical Possibilities for AI
There are, however, potentially harmful possibilities that advances in artificial intelligence may also bring about. One potential consequence of AI is massive job loss. In the McKinsey study cited earlier, the authors argue that by 2030 nearly 70 million U.S. workers will need to find new occupations due to the automation of their jobs by AI. Many people have been concerned about a supposedly impending “robopocalypse.” In a recent article in Wired, James Surowiecki downplays the possibility, or at least the imminent inevitability, of such a takeover. He cites a Goldman Sachs report claiming that self-driving cars could eventually eliminate 300,000 driving jobs per year, but not for at least another 25 years. This, he claims, will give the economy time to adjust. There may also be much AI can do to help solve the resulting problem of job loss. For example, as Thomas Colton (another student from our Seattle team) argued, AI adaptive learning can “help career training development match the pace of the demands of a fast evolving labor-market” for those displaced by technology, allowing them to “repurpose their skills” in optimal ways.
Furthermore, automation often has surprising effects that offset job loss. Surowiecki cites economist James Bessen, who showed that when ATMs were introduced, “the number of bank tellers actually rose between 2000 and 2010. That’s because even though the average number of tellers fell per branch, ATMs made it cheaper to open branches, so banks opened more of them.” Surowiecki continues: “The irony of our anxiety about automation is that if the predictions about a robot-dominated future were to come true, a lot of our other economic concerns would vanish.” He cites a recent study by Accenture, which “suggests that implementation of AI, broadly conceived, could lift GDP growth in the US by two points (to 4.6 percent).” He then remarks, “A growth rate like that would make it easy to deal with the cost of things like Social Security and Medicare and the rising prices of health care. . . . In that sense, the problem we’re facing isn’t that robots are coming. It’s that they aren’t,” or aren’t coming fast enough. Others, such as Martin Ford, who wrote Rise of the Robots, are not so optimistic.
There are also numerous ethical issues concerning the opacity and potential bias of AI systems. As Zeynep Tufekci, an associate at the Berkman Klein Center for Internet & Society at Harvard, points out, machine learning algorithms are essentially “black box” algorithms: we do not understand exactly how they get the results they do, or why they select what they select. This becomes a serious problem when such algorithms select the supposedly best candidates for a job (by means of hiring software). Many people have pointed out that such systems can absorb, and even amplify, biases present in the data on which they are trained.
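One concrete way such hiring systems are audited from the outside is the “four-fifths rule” familiar from U.S. employment-discrimination analysis: compare selection rates across demographic groups and flag the system if the ratio of the lowest to the highest rate falls below 0.8. The sketch below uses fabricated numbers purely for illustration.

```python
# Sketch of a "four-fifths rule" audit of a hiring model's outputs
# (fabricated numbers, purely for illustration).

hired = {"group_a": 50, "group_b": 20}     # candidates the model selected
applied = {"group_a": 100, "group_b": 80}  # candidates the model screened

rates = {g: hired[g] / applied[g] for g in hired}
impact_ratio = min(rates.values()) / max(rates.values())

print(f"selection rates: {rates}")                    # a: 0.50, b: 0.25
print(f"disparate impact ratio: {impact_ratio:.2f}")  # 0.50
if impact_ratio < 0.8:  # the conventional four-fifths threshold
    print("warning: possible adverse impact; audit the model")
# Note Tufekci's deeper point: with a black-box model we can measure
# *that* outcomes differ across groups, but not easily *why*.
```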
While there are many extremely beneficial uses of AI in the medical field, there are also some concerning scenarios. Tufekci mentions a friend of hers who developed a computational system to predict the likelihood of clinical depression based on social media data. As she points out, while such predictive power might be very good for the purposes of early intervention, it might also be used by employers to screen out certain types of people who are more likely to become depressed at some point.
Then there are the many possible ethical problems concerning privacy. In the recent congressional hearings, we saw how this has been a problem for Facebook. One of the perhaps hidden costs of the free services provided by companies such as Facebook and Google is often our privacy. Amazon, Google, Facebook, Snapchat, Netflix, and many other platforms use AI algorithms to determine our preferences (with amazing precision), and such information is worth a great deal of money to these companies, which use it to sell highly targeted advertising and sometimes share it with other companies. This may not be as much of a problem when we give our consent (which we often do somewhat unknowingly). It is more of a problem when our privacy is violated in more serious ways, as it was in the recent Cambridge Analytica scandal.
Another problem concerns the role of AI and machine learning in manipulating, and in some cases addicting, people. Social media platforms such as Facebook are often wonderful tools for helping us stay in touch (at least minimally) with friends, family, and other acquaintances with whom we would otherwise have little, if any, contact. However, as Tristan Harris, a former “design ethicist” at Google, argues, such platforms also have a built-in incentive to manipulate us as part of a “race for our attention.” As he points out, social media and other platforms define success by the time users spend on them. Harris makes the case for changing the design goal of social media from a “time spent” to a “time well spent” model.
Harmful Unintended Consequences
Many of the worries about artificial intelligence concern the possibility of even more serious unintended consequences. Several prominent people, including Elon Musk and the late Stephen Hawking, have warned about some of these more extreme harmful possibilities of AI. One of the biggest fears worrying Musk and others is that of AI running out of control in unanticipated and catastrophic ways. Such concerns focus on how AI might surpass human beings. They point to the possibility of a not-too-distant future in which AI achieves “superintelligence.” Such a possibility has been called the “technological singularity” or “intelligence explosion.” The idea is that through machine learning, AI systems will improve themselves and increase in intelligence at such a rate that we will not be able to control what then happens. In the more extreme version, the worry is that future superintelligent, artificially general intelligent machines will take over the world in some manner (perhaps like VIKI in I, Robot, or perhaps something even more sinister, such as Skynet from the Terminator movies). In some versions of this concern, such superintelligent AI agents might actually attain human-like desires, but perhaps the more likely scenario is simply that things get out of control amid the rapid, runaway, recursive self-improvement of such machines.
In the recent documentary “Do You Trust This Computer?”, which contains interviews with many of those who are concerned about the unintended consequences of AI, Elon Musk is quoted as saying: “AI doesn’t have to be evil to destroy humanity. If AI has a goal, and humanity just happens to be in the way, it will destroy humanity as a matter of course—without even thinking about it.” This is one reason Elon Musk and Sam Altman founded OpenAI, a research company that aims to discover “the path to safe artificial general intelligence.” As Musk states: “The least scary future I can think of is one where we have at least democratized AI, because if one company or small group of people manages to develop god-like, digital superintelligence, they can take over the world” (“Do You Trust This Computer?”).
John Searle, one of the most famous critics of so-called Strong AI, argues that it is ridiculous to think that AI agents could even engage in any sort of uprising, simply because they could never attain a state in which they have conscious desires and intentions. This is based, in part, on his well-known “Chinese room” argument against the possibility that artificial intelligence could ever attain the type of consciousness that human beings possess. The basic idea is as follows:
Imagine someone who doesn’t know Chinese—me, for example—following a computer program for answering questions in Chinese. We can suppose that I pass the Turing Test because, following the program, I give the correct answers to the questions in Chinese, but all the same, I do not understand a word of Chinese. And if I do not understand Chinese on the basis of implementing the computer program, neither does any other digital computer solely on that basis.
He argues that artificial intelligence works by means of syntax alone and has no human (or semantic) understanding. AI can thus simulate human intelligence but never duplicate it. For Searle, AI agents have no true understanding. When Deep Blue beat Kasparov, or when AlphaGo beat Lee Sedol at Go, neither program really knew it had won, nor did the victory mean to it what it would mean to a human. Neither cared that it had won.
This question is relevant to another AI-related ethical issue: whether AI agents will ever get to the point of becoming moral agents, equivalent to persons in some way. Could they ever become “persons”? As Nick Bostrom and Eliezer Yudkowsky point out, AI would need to have both sentience (the capacity to feel pain and suffer) and sapience (“capacities such as self-awareness and being a reason-responsive agent”). In order to have such sentience, Searle argues, AI agents would likely need the biological “wetware” (as he calls it) that we have. Many AI experts would disagree with this, wherever they stand on the question of machine consciousness. Interestingly, Saudi Arabia has recently granted citizenship to the well-known robot named “Sophia.” Of course, most AI is still narrow, single-domain AI. As Thrun points out, there has been very little progress in artificial general intelligence (AGI). Searle, Dreyfus, and others would argue that even if AGI were achieved, it still would not have the same type of meaningfulness we have as humans.
It is thus highly controversial whether AI could ever attain human consciousness. However, even if AI agents never attain human consciousness, they can still do a great deal of harm. This is why it is important to find a way to incorporate ethics into the behavior of AI agents. That is, even if AI agents can never attain human understanding, truly have a moral sense, or duplicate human phronesis, we still need to find a way to build some sort of ethical limits into the “thinking” of AI agents. Professor Dan Ventura and I are currently working on a project to explore this possibility; he will present a paper we collaborated on at the June International Conference on Computational Creativity (ICCC) in Salamanca, Spain. Others have proposed ethical laws for AI agents: Isaac Asimov gave his famous three laws of robotics, and recently Professor Stuart Russell put forward three laws to replace Asimov’s.
Thus, even if the extreme concerns about an AI takeover or singularity are unwarranted, there are still many legitimate concerns about the unintended consequences of AI. Furthermore, there will always be those who would use AI for evil purposes, whether that involves hacking into self-driving car systems or other nefarious uses of AI. A recent viral video (released by the Future of Life Institute) concerning so-called “slaughterbots” presents a sensational and dramatized account of microdrones that use AI and facial recognition to assassinate people.
Paul Scharre, a Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security (CNAS), argues that the claims made in the slaughterbots video are exaggerated. However, it is not hard to believe that there will be many who are willing and able to use the power of AI to advance harmful and seriously unethical ends.
In conclusion, we have seen that artificial intelligence will have a major impact on our world: it has the potential to do an incredible amount of good, but it also brings many potential harms. People will be able to do much good or much bad through this technology. The way in which AI will transform our world certainly deserves a good deal of careful thought and attention.
 This is true whether one follows the MIT Technology Review, Wired, Business Insider, CNET, The Verge, etc., or whether one simply follows major news outlets.
 You can see Thrun discuss the many positive benefits of AI in this TED talk: https://www.ted.com/talks/sebastian_thrun_and_chris_anderson_the_new_generation_of_computers_is_programming_itself
 For further research on this issue see https://www.wired.com/story/self-driving-cars-rand-report/
 https://spectrum.ieee.org/transportation/self-driving/can-you-program-ethics-into-a-selfdriving-car We will look at the question of job loss due to the coming implementation of self-driving cars in the next section.
 https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/ See also the Moral Machine project at MIT: http://moralmachine.mit.edu/
 For more on such possibilities, see this McKinsey Global Institute study published last June: https://www.mckinsey.com/~/media/McKinsey/Industries/Advanced%20Electronics/Our%20Insights/How%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/MGI-Artificial-Intelligence-Discussion-paper.ashx
 See Bernard Marr’s list of “27 Incredible Examples Of AI And Machine Learning” here: https://www.forbes.com/sites/bernardmarr/2018/04/30/27-incredible-examples-of-ai-and-machine-learning-in-practice/3/#4543a4963b2b
 https://www.mckinsey.com/featured-insights/future-of-organizations-and-work/what-the-future-of-work-will-mean-for-jobs-skills-and-wages http://bigthink.com/paul-ratner/a-new-study-says-a-third-of-all-us-workers-to-be-replaced-with-robots-by-2030
 https://www.wired.com/2017/08/robots-will-not-take-your-job/ See also Martin Ford’s Rise of the Robots (Basic Books, 2016).
 See Bessen, Learning by Doing: The Real Connection between Innovation, Wages, and Wealth (2015).
 See Martin Ford’s Rise of the Robots (Basic Books, 2016). You can also see his TED Talk on the subject here: https://www.ted.com/talks/martin_ford_how_we_ll_earn_money_in_a_future_without_jobs
 Bernard Marr discusses the ways in which Facebook, for example, uses deep learning in its DeepText program, which uses neural networks to analyze the meaning of words from their context, and in its DeepFace program, which uses deep learning for facial recognition.
 The phrase “technological singularity” was coined by Vernor Vinge in his 1993 piece, “The Coming Technological Singularity.”
 Some, such as Ray Kurzweil, see this as a good thing and long for the day when we can merge with such AI in a process of “uploading”—i.e., “the hypothetical future technology that would enable human . . . intellect to be transferred from its original implementation in an organic brain onto a digital computer” (Bostrom & Yudkowsky, The Cambridge Handbook of Artificial Intelligence).
 See also Minds, Brains, and Science (Harvard, 1984), The Rediscovery of the Mind (MIT Press, 1992), and Seeing Things as They Are (Oxford, 2015).
 Others, such as Hubert Dreyfus, have also argued against equating human intelligence and AI. See Dreyfus’s What Computers Still Can’t Do (MIT Press, 1992).
 “The Ethics of Artificial Intelligence,” in The Cambridge Handbook of Artificial Intelligence. If by intelligence we simply mean computational abilities, then AI passed us long ago. But that is not what most philosophers mean by it.
 These are as follows: (1) “The robot’s only objective is to maximize the realization of human values.” As Russell points out, this first law violates Asimov’s robot-self-preservation rule. His second law (2), which he calls a “law of humility,” is that “the robot is initially uncertain about what those values are.” As he puts it, the robot is programmed “to maximize those values, but it does not know what they are.” For Russell, this uncertainty is essential because it prevents the type of single-minded pursuit of objectives that can cause serious problems (see the sketch below). He illustrates the problem with the example of a robot that is given the command to “fetch the coffee” and might then do anything to accomplish this, including making it so that no one can turn it off, since being turned off would prevent it from reaching its goal. This of course takes us back to the same concerns that go along with the singularity worry above. Russell’s final law is that (3) “Human behavior provides information about human values.” It is not difficult to see that there are likely problems with this third principle.
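 To see why Russell thinks value uncertainty matters, here is a toy numerical sketch of the “off-switch” reasoning. It is my own illustration with entirely made-up numbers, not Russell’s formal model: a robot that is unsure whether its planned action is good or bad does better, by its own lights, to let a human who knows the answer switch it off.

```python
import random

# Toy illustration of Russell's "law of humility" (my own sketch with
# made-up numbers, not Russell's formal model). The robot is uncertain
# about the human value u of its planned action; the human knows u.
random.seed(0)
samples = [random.uniform(-1, 1) for _ in range(100_000)]  # prior over u

# Policy A: act unilaterally (e.g., disable the off-switch and fetch
# the coffee no matter what). Expected value is E[u], roughly 0 here.
act_unilaterally = sum(samples) / len(samples)

# Policy B: defer to the human, who switches the robot off whenever
# u < 0. Expected value is E[max(u, 0)], roughly 0.25 here.
defer_to_human = sum(max(u, 0.0) for u in samples) / len(samples)

print(f"act unilaterally: {act_unilaterally:+.3f}")
print(f"defer to human:   {defer_to_human:+.3f}")
# Deferring is (weakly) better precisely because the robot does not
# know its own objective. That is Russell's point: uncertainty about
# values gives the robot a reason to leave the off-switch alone.
```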
 His response was published in IEEE Spectrum: https://spectrum.ieee.org/automaton/robotics/military-robots/why-you-shouldnt-fear-slaughterbots