Joe Hanson
Senior Project 2012
Dr. Call

Taking Man Out of the Loop: The Dangers of Exponential Reliance on Artificial Intelligence

A 2012 Time Magazine article dubbed “The Drone” the 2011 Weapon of the Year.1 With over 7,000 drones in the air, military use of unmanned vehicles is rising exponentially. Why is drone technology progressing at such a fast rate? Artificial intelligence (AI) is at the forefront of drone technology development. Exponential technological developments in the last century have changed society in numerous ways. Mankind is beginning to rely increasingly on technology in everyday life, with many of these technologies bringing beneficial progress to all aspects of society. Exponential growth in computer, robotic, and electronic technology has led to the integration of this technology into social, economic, and military systems.

Artificial intelligence is the part of computer science concerned with the intelligence and resulting actions of a machine, in both hardware and software form. Using AI, a machine can act autonomously and function in an environment, using rapid data processing, pattern recognition, and environmental perception sensors to make decisions and carry out goals and tasks. AI seeks to emulate human intelligence, using these sensors to understand and process its surroundings so that it can solve and adapt to problems in real time.

There is debate over whether AI is even plausible: whether it is even possible to create a machine that can emulate human thought. Both humans and computers are able to process information, but humans have the ability to understand that information. Humans are able to make sense out of what they see and hear, which involves the use of intelligence.2

1. Feifei Sun, TIME Magazine 178, no. 25 (2011): 26.
2. Henry Mishkoff, Understanding Artificial Intelligence (Texas: Texas Instruments, 1985), 5.
Some characteristics of intelligence include the ability to “respond to situations flexibly, make sense out of ambiguous or contradictory messages, recognize importance of different elements of a situation, and draw distinctions.”3 When discussing the possibilities of AI and the creation of a thinking machine, the main issue is whether or not a computer is able to possess intelligence. Supporters of AI development argue that because of exponential progress in computer and robotic technology, AI is developing beyond simple data processing, toward the creation of autonomous AI that can emulate and surpass the intelligence of a human. According to University of Michigan Professor Paul Edwards, scientists are finding that by beginning to “simulate some of the functional aspects of biological neurons and their synaptic connections, neural networks could recognize patterns and solve certain kinds of problems without explicitly encoded knowledge or procedures,” meaning that AI is beginning to incorporate human biology to make it think.4 On the other side of the debate, AI skeptics and deniers argue that AI will never have the ability to surpass human intelligence. They argue that the human brain is far too advanced: though a machine can calculate data faster, it will never match the complexity of a human brain.

In order to emulate human thought, computer systems rely on programmed “expert systems,” a kind of AI that “acts as an intelligent assistant” to the AI’s human user.5 An expert system is not just a computer program that can search and retrieve knowledge. Instead, an expert system possesses expertise, pooling information and creating its own conclusions, “emulating human reason.”6

3. Mishkoff, 5.
4. Paul Edwards, The Closed World (Cambridge: The MIT Press, 1997), 356.
5. Edwards, 356.
6. Mishkoff, 5.
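Edwards’s point about neural networks learning “without explicitly encoded knowledge or procedures” can be made concrete with a toy sketch. The single-neuron model below, along with its training data and learning rate, is an invented illustration rather than a reconstruction of any system Edwards describes: it learns a simple pattern from examples alone.

```python
# A toy, single-neuron "pattern recognizer": it learns the logical-AND
# pattern from examples, with no explicitly encoded rules. The data,
# epoch count, and learning rate are invented for illustration.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs, with target in {0, 1}."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # The "neuron" fires when the weighted sum crosses its threshold.
            fired = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            error = target - fired
            # Nudge the "synaptic" weights toward the correct answer.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn the AND pattern purely from examples.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(examples)
for inputs, _ in examples:
    fired = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
    print(inputs, "->", fired)
```

Nothing in the training loop encodes the AND rule itself; the behavior emerges from the examples, which is exactly the contrast Edwards draws with explicitly programmed knowledge.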
An expert system has three components that make it more technologically advanced than a simple information retrieval system. One of these components is the “knowledge base,” a collection of declarative knowledge (facts) and procedural knowledge (courses of action) that acts as the expert system’s memory bank. An expert system can integrate the two types of knowledge when making a conclusion.7 Another component is the “user interface,” hardware through which a human user can communicate with the system, forming a two-way communication channel. The last component is the inference engine, the most advanced part of the expert system. This program knows when and how to apply knowledge, and also directs the implementation of that knowledge. These three components allow the expert system to exceed the capabilities of a simple information retrieval system.

The capabilities of expert systems have opened up doors for military application. These functions can be applied to a number of military situations, from battlefield management to surveillance to data processing. Integrating expert systems into military AI technology gives those systems the ability to interpret, monitor, plan, predict, and control system behavior.8 A system is able to monitor its own behavior, comparing and interpreting observations collected through sensory data. The ability to monitor and interpret is important for AI specializing in surveillance and image analysis, a vital capability for unmanned aerial vehicles. Expert systems also function as battlefield aids, helping to plan by designing actions, while also predicting, inferring consequences based on large amounts of data.9 Military applications of expert systems give the military an advantage on and off the battlefield, aiding in decision making and streamlining battlefield management and surveillance.

7. Mishkoff, 55.
8. Mishkoff, 59.
9. Mishkoff, 59.
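As a rough illustration of this three-part division of labor, the following minimal sketch pairs a small knowledge base with a forward-chaining inference engine and a bare-bones user interface. The surveillance-flavored facts and rules are invented for this example; a production expert system would hold thousands of rules.

```python
# A minimal sketch of the three expert-system components described above:
# a knowledge base holding declarative facts and procedural if-then rules,
# an inference engine that decides when and how to apply that knowledge,
# and a user interface forming a two-way channel with a human operator.
# The facts and rules are invented for illustration.

# Knowledge base: declarative knowledge (facts the system currently knows).
facts = {"radar_contact", "no_transponder", "closing_fast"}

# Knowledge base: procedural knowledge (condition -> conclusion rules).
rules = [
    ({"radar_contact", "no_transponder"}, "unidentified_aircraft"),
    ({"unidentified_aircraft", "closing_fast"}, "recommend_human_review"),
]

def inference_engine(known, rules):
    """Forward chaining: keep applying rules until no new conclusions appear."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

# User interface: report the system's new conclusions back to the operator.
for conclusion in sorted(inference_engine(set(facts), rules) - facts):
    print("Expert system concludes:", conclusion)
```

The inference engine, not the human, decides when each rule applies; the human only supplies observations and receives conclusions, the two-way channel the text calls the user interface.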
AI benefits society in a number of ways: socially, economically, and technologically. AI’s rapid data processing and accuracy can help in many different sectors of society. Although these benefits are progressive and necessary in connection with other emerging technologies, specifically computer technology, society must be wary of over-reliance on AI technology and integration. Over-integration of AI into society has begun the trend of taking the human out of the loop, relying more on AI to carry out tasks ranging from small to large. As AI technology continues to develop, autonomous AI systems will be relied on further to carry out tasks in all aspects of society, especially in military systems and weapons, and as there is less human control, humans must be cautious of putting all their eggs in one basket. The dangers of using AI in the military often outweigh the benefits; those dangers include malfunction, unethical use, lack of testing, and the unpredictable nature and actions of AI systems.

The possibility of a loss of control over an AI system, of humans giving a thinking machine too much responsibility, increases the chances of that reliance backfiring on its human creators. The backfire isn’t just inconvenient; it can also be dangerous, especially if it takes place in a military system. Missiles, unmanned drones, and other advanced forms of weaponry rely on AI to aid them in functioning, and as AI technology becomes faster and smarter, humans are relying on the AI technology more and more. These systems have the ability to cause catastrophic damage, and taking humans out of the loop is especially dangerous.

There has been extensive research and debate over AI in numerous regards. From the birth of AI as a field at the 1956 Dartmouth Conference, there has been support and opposition: optimists, skeptics, and deniers from all fields, including physics, philosophy, computer science, and engineering.
I will recognize all of these different viewpoints, but my argument is that of the skeptics, recognizing the benefits and progress that AI can bring while still being wary of over-reliance on AI, specifically its integration into military systems. The idea of putting technology such as advanced weaponry and missiles under the responsibility of an AI system, whether it be AI software or hardware, is especially dangerous. AI machines may lack the ability to think morally or ethically, or to understand morality at all, so giving them the ability to kill while overly relying on them is a danger. Optimists such as AI founders Marvin Minsky and John McCarthy fully support, embrace, and trust the integration of AI into society. On the other side of the spectrum are the deniers, the most famous being Hubert Dreyfus, who believe that a machine will never have the capabilities to emulate human intelligence, denying the existence of AI altogether. This section of my paper reviews the existing literature on AI and the diverse views of its critics and supporters.

The supporters of AI come from diverse fields of study, but all embrace the technology with optimism and trust. Alan Turing, an English computer scientist, was one of the first scholars to write about AI, even before it was declared a field. Turing’s paper “Computing Machinery and Intelligence” is mainly concerned with the question “Can Machines Think?”10 Turing’s work was some of the first looking into computer and AI theory. Turing introduces the “Turing Test,” which tests a machine, both software and hardware, to see if it can exhibit intelligent behavior. Turing doesn’t just introduce the Turing Test; he also shows his optimism for AI by refuting the “Nine Objections,” nine possible objections to a machine’s ability to think.

10. Alan Turing, “Computing Machinery and Intelligence,” Mind 59 (1950): 433-460.
Some of these objections include a theological objection, the inability of computers to think independently, mathematical limitations, and complete denial of the existence of thinking machines. Turing refutes these objections through both philosophical and scientific arguments supporting the possibility of a thinking machine. Turing argues that one reason people deny the possibility of thinking machines is not that they think it is impossible, but rather that they fear it and that “we like to believe that Man is in some subtle way superior to the rest of creation.”11 Turing argues that computers will have the ability to think independently and have conscious experiences.

Another notable early AI developer was Norbert Wiener, an American mathematician and the originator of cybernetics theory. In The Human Use of Human Beings: Cybernetics and Society, Wiener argues that the automation of society is beneficial. Wiener shows that there shouldn’t be a fear of integrating technology into society; instead, people should embrace the integration. Wiener says that cybernetics and the continuation of technological progress rely on a human trust in autonomous machines. Though Wiener recognizes the benefits and progress that automation brings, he still warns of relying too heavily on it.

After the establishment of AI as a field at the Dartmouth Conference, the organizer of the conference, John McCarthy, wrote Defending AI Research. In this book, McCarthy collected numerous essays that support the development of AI and its benefits to society. McCarthy reviews the existing literature of notable early AI developers and either refutes or supports their claims. In the book, McCarthy reviews the article “Artificial Intelligence: A General Survey,”12 written by James Lighthill, a British mathematician. In the article, Lighthill is critical of the existence of AI as a field. McCarthy refutes Lighthill’s claims and defends AI’s existence and development.

11. Turing, 444.
12. John McCarthy, Defending AI Research (California: CSLI Publications, 1996), 27-34.
McCarthy also defends AI research from those who claim AI is “an incoherent concept philosophically,” specifically refuting the arguments of Dreyfus. McCarthy argues that philosophers often “say that no matter what it [AI] does, it wouldn’t count as intelligent.”13 Lastly, McCarthy refutes the arguments of those who claim that AI research is immoral and antihuman, saying that these skeptics and opponents are against pure science and research motivated solely by curiosity.14 McCarthy argues that research in computer science is necessary for opening up options for mankind.15

Hubert Dreyfus has been a prominent denier of the existence of AI for decades. A professor of philosophy at UC Berkeley, Dreyfus has written numerous books opposing and critiquing the foundations of AI as a field. Dreyfus’s main critique of AI is the idea that a machine can never have the capability to fully emulate human intelligence. Dreyfus argues that the power of a biological brain cannot be matched, even if a machine has superior data processing capabilities. A biological brain not only reacts to what it perceives in the environment, but relies on background knowledge and experience to think.16 Humans also incorporate ethics and morals into their decisions, while a machine can only use what it is programmed to think. What Dreyfus is arguing is that the human brain is superior to AI, and that a machine can’t emulate human intelligence. Dreyfus’s rhetorical question, “scientists are only beginning to understand the workings of the human brain, with its billions of interconnected neurons working together to produce thought. How can a machine be built based on something of which scientists have so little understanding?”, captures his view of AI.17

13. McCarthy, vii.
14. McCarthy, 2.
15. McCarthy, 20.
16. Hubert Dreyfus, Mind Over Machine, 31.
17. David Masci, “Artificial Intelligence,” CQ Researcher (1997), 7.
When looking at the relationship between the military, computer technology, and AI, there has been much debate over how much integration is safe. As the military integrates autonomous systems into its communication, information, and weapon systems, the danger of over-reliance rises. One of the first people to recognize this danger was the previously mentioned Norbert Wiener. Even though Wiener was supportive of AI and its integration into society, he had a very different viewpoint concerning its use in military and weapon technology. Wiener wrote a letter in 1947 called “A Scientist Rebels,” which argues against and resists government and military influence on AI and computer research. Wiener warns of the “gravest consequences” of the government’s influence on the development of AI.18 Wiener looks at the development of the atomic bomb as an example of how a scientist’s work can fall into the hands of those “he is least inclined to trust,” in this case the government and military. The idea that civilian scientific research can be appropriated by the military and used in weaponry is a critique of the military’s influence on AI development. Scientific research may seem innocent, but as it is manipulated through military influence, purely scientific research is integrated into war technology.

Paul Edwards’s The Closed World gives a history of the relationship between the military and AI research and development, and of the impact each had on the other. Edwards looks at why the military put so much time and effort into computers, and at the effects that computer technology and the integration of AI data processing systems had on the history of the Cold War. Edwards’s broad historical look at computer and AI development gives insight into a military connection to the progressing technology that still exists today.

18. Norbert Wiener, “From the Archives,” 38.
Computer development began in the early 1940s, and from that time to the early 1960s, the U.S. military played an important role in the progressing computer technologies. After WWII, the military’s role in computer research grew exponentially. The U.S. Army and Air Force began to fund research projects, contracting large commercial technology corporations such as Northrop and Bell Laboratories.19 This growth in military funding and purchases enabled American computer research to progress at an extremely fast pace; however, due to secrecy, the military was able to keep control over the spread of research.20 Because of this secrecy, military-sponsored computer projects were tightly controlled and censored. Due to heavy investment, the military did have a role in the “nurturance” of AI through its relationship with the government-controlled Advanced Research Projects Agency (ARPA). AI research received over 80% of its funding from ARPA, keeping the military in tune with AI research and development.21 The idea that “the computerization of society has essentially been a side effect of the computerization of war” sums up the effect of the military on computer and AI development.

Paul Lehner’s Artificial Intelligence and National Defense looks at how AI can benefit the military, specifically through software applications. Written in 1989, Lehner’s view represents that of the later years of the Cold War, when the technology had not fully developed but was progressing exponentially. Lehner discusses the integration of “expert systems,” software that can be used to aid and replace human decision makers. Lehner recognizes AI’s data processing speed and accuracy and the benefits that the “expert system” could bring when applied to the military.

19. Edwards, 60.
20. Edwards, 63.
21. Edwards, 64.
Armin Krishnan’s Killer Robots looks at the other way that AI is being integrated into the military, through hardware and weapons, while also evaluating the moral and ethical issues surrounding the use of AI weaponry. Krishnan’s book was written in 2009 and looks at AI in the military today, specifically the ethical and legal problems associated with drone warfare and other robotic soldier systems. Some of the ethical concerns Krishnan brings up are the diffusion of responsibility for mistakes or civilian deaths, the moral disengagement of soldiers, unnecessary war, and automated killing.

Recently there has been much debate over the legal concerns regarding the use of AI in military systems and weaponry. One of the leading experts on the legality of AI integration is Peter W. Singer, the Director of the 21st Century Defense Initiative at Brookings. In his article “Robots At War: The New Battlefield” (2009), Singer raises numerous legal concerns. The laws of war were outlined in the Geneva Conventions in the middle of the 20th century. However, due to progressing and changing war technologies, these 20th-century laws of war are having trouble keeping up with 21st-century war technology. Singer argues that the laws of war need to be updated to account for new AI systems and their integration. Due to high numbers of civilian deaths from AI systems, specifically drones, Singer also argues that these can be seen as war crimes. Lastly, Singer brings up the question of who is lawfully responsible for an autonomous machine: the commander, the programmer, the designer, the pilot, or the drone itself? Singer’s examination of the legal concerns over changing war technology is also reflected in his participation in U.S. Congressional hearings on unmanned military systems.

Many scholars have also looked at what the future holds for AI. In 1993, Vernor Vinge coined the term “singularity” to describe the idea that one day AI technology will surpass human intelligence. This is when computers will become more advanced than human intelligence, moving humankind into a post-human state. This is the point where AI “wakes up,” gaining the ability to think for itself.
This idea of “singularity” is expanded on in Katherine Hayles’s How We Became Posthuman. Hayles looks at this as a time period in the near future where information is separated from the body, where information becomes materialized and can be moved through different bodies. Hayles’s view shows that AI isn’t just advancing mechanically, but also mentally and psychologically. In the view of singularity, humans are heading in a direction where computers and humans will have to integrate with each other. As technology continues to progress and AI systems become more advanced, it is important to recognize that the future may be integrated with AI technology.

I. The History of AI: The Early 1900s to 1956

Beginning in the early 1900s, computer scientists, mathematicians, and engineers began to experiment with creating a thinking machine. During World War II, the military began using computers to break codes, ushering in the development of calculating computers. ENIAC, the Electronic Numerical Integrator And Computer, was the first electronic computer to successfully function.22 Early on, the majority of computer and AI projects were military funded, giving the military major influence over the allocation and integration of the technology. As computer technology began to progress, so did AI as a branch of computer science.

The first person to consider the possibilities of creating AI in the form of a thinking machine was Alan Turing. In his article “Computing Machinery and Intelligence,” Turing recognized the possibility that a machine could plausibly emulate human thought. Turing’s paper was very important to the development of AI as a field, being the first to argue the plausibility of AI’s existence while also establishing a base for the field.

22. Arthur Burks, “The ENIAC,” Annals of the History of Computing 3, no. 4 (1981): 389.
Turing’s refutation of the nine objections goes against the views of the skeptics and deniers, recognizing a diverse variety of arguments against AI.

Another major figure in the development of computers and artificial intelligence was Hungarian mathematician John von Neumann. Von Neumann made many important contributions in a variety of fields, but had a very large impact on computer science. Today’s computers are based on “von Neumann architecture,” building a computer to “use a sequential program held in the machine’s memory to dictate the nature and the order of the basic computational steps carried out by the machine’s central processor.”23 He also compared this architecture to a human brain, arguing that their functions are very similar. Von Neumann’s The Computer and the Brain, published posthumously in 1958, was an important work concerning artificial intelligence, strengthening Turing’s claim that computers could emulate human thought.24 In his book, von Neumann compares the human brain to a computer, pointing out similarities in their architecture and function. In some cases, the brain acts digitally, because its neurons themselves operate digitally. Similar to a computer, the neurons fire depending on an order to activate them.25 The result of von Neumann’s work strengthened the plausibility of creating a thinking machine.

Ultimately, the work of Turing, Wiener, and von Neumann shows the optimism that the early computer developers had. All three shared a faith in computer science and AI and supported its progress. Turing finished his paper with, “We can only see a short distance ahead, but we can see plenty there that needs to be done.”26

23. von Neumann, The Computer and the Brain, xii.
24. von Neumann.
25. von Neumann, 29.
26. Turing, 460.
Even though these early computer developers shared this optimism, they were also wary of the dangers of the progressing computer technology. Wiener in particular, who had earlier written his letter “A Scientist Rebels,” had a skeptical view of the future of computer technology. In Cybernetics, Wiener states:

    What many of us fail to realize is that the last four hundred years are a highly special period in the history of the world. The pace at which changes during these years have taken place is unexampled in earlier history, as is the very nature of these changes. This is partly the results of increased communication, but also of an increased mastery over nature, which on a limited planet like the earth, may prove in the long run to be an increased slavery to nature. For the more we get out of the world the less we leave, and in the long run we shall have to pay our debts at a time that may be very inconvenient for our own survival.27

This quote reflects Wiener’s skepticism. He understood the benefits that AI and computer science could bring to society, but was wary of over-reliance on the technology. Wiener’s quote is a warning of how fragile the world is, and that we need to be careful of the rapid development of AI technology. As humans “master nature” through technology, they become more and more vulnerable to their own creations.

II. The History of AI: 1956, The Cold War, and an Optimistic Outlook

Following the work of Turing, von Neumann, and Wiener, computer scientists John McCarthy and Marvin Minsky organized the Dartmouth conference in the summer of 1956. This conference would lead to the birth of AI as a field, a branch of computer science.

27. Wiener, Cybernetics, 46.
The conference was based on the idea that “machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”28 Using this idea, the goal of the conference was to establish AI as a field and show that it was plausible. As a result, AI began to gain momentum as a field.

The military had a major influence over the research and development of AI and computer science beginning in the 1940s. Shortly after World War II, as the Cold War era began, AI research and development began to grow exponentially. Military agencies had the financial backing to provide the majority of the funding, as the U.S. Army, Navy, and Air Force began to fund research projects and contract civilian science and research labs for computer science development. Between 1951 and 1961, military funding for research and development rose from $2 billion to over $8 billion. By 1961, the research and development companies Raytheon and Sperry Rand were receiving over 90% of their funding from military sources. The large budget for research and development enabled AI research to take off, as ARPA received 80% of its funding from the federal government.29 Because of the massive amount of funding from military sources, American computer research was able to surpass the competition and progress at an exponential rate. The U.S. military was able to beat out Britain, its only plausible rival, making the U.S. the leader in computer technology.

There were numerous consequences of the military having its hand in the research and development of computer science early in the Cold War.

28. John McCarthy and Marvin Minsky, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence” (proposal, Dartmouth College, August 31, 1955).
29. Edwards, 64.
As a result of its overwhelming funding, the military was able to keep tight control over research and development, directing it in the direction it desired. This direction was primarily concerned with developing technology that could benefit the military itself, whether for communication, weaponry, or national defense. Wanting to keep its influence as strong as possible, the military kept tight control through secrecy of the research.30 The military wanted to make sure that researchers under contract were always aware of the interests of national security, censoring the communication between researchers and scientists in different organizations. A problem that arose from this censorship was that researchers could no longer openly share ideas, impeding and slowing down development. This showed that the military was willing to wait longer to ensure that national security measures were followed.

As a result of the heavy funding from the military, AI turned from pure theory to a technology with commercial interests. Parallel to the rapidly progressing computer technology, military research agencies began to also progress in AI development, studying cognitive processes and computer simulation.31 The main military research agency to look into AI was the Advanced Research Projects Agency (ARPA, renamed DARPA in 1972). Joseph Licklider, head of ARPA’s Information Processing Techniques Office, was a crucial figure in increasing the development of AI technology, establishing his office as the primary supporter of “the closed world military goals of decision support and computerized command and control,” which found “a unique relationship to the cyborg discourses of cognitive psychology and AI.”32 This unique relationship is the basis of AI: mastering cognitive psychology and then integrating and emulating that psychology in a machine.

30. Edwards, 62.
31. Edwards, 259.
32. Edwards, 260.
This branch of ARPA not only shows the military’s interest in and impact on the research and development of AI, but also the optimism that the military had for its development. ARPA was able to mix basic computer research with military ventures, specifically for national defense, allowing the military to control the research and development of AI technology.

The military influence over DARPA continued into the 1970s, as DARPA became the most important research agency for military projects. The military began to rely on AI for military use at an exponential rate. DARPA began to integrate AI technology into a number of military systems, including soldier aids for both pilots and ground soldiers and battlefield management systems that relied on expert systems.33

All these aspects of AI’s integration into warfare are known as the “robotic battlefield” or the “electronic battlefield.” AI research opened the doors for this new warfare technology, integrating AI and computer technology to create electronic, robotic warfare and automated command and sensor networks for battlefield management. During the Vietnam War, military leaders shared an optimism for new AI technology. General William Westmoreland, head of military operations for the U.S. in Vietnam from 1964 to 1968, predicted that “on the battlefield of the future, enemy forces will be located, tracked, and targeted almost instantaneously through the use of data-links, computer assisted intelligence evaluation and automated fire control.”34 Westmoreland also saw that as the military began to increasingly rely on AI technology, the need for human soldiers would decrease. Westmoreland’s prediction shows not only the optimism that military leaders had about AI technology, but also the over-reliance that the military would develop on those weapons.

33. Edwards, 297.
34. Armin Krishnan, Killer Robots: Legality and Ethicality of Autonomous Weapons, 19.
From the 1950s to the 1980s, DARPA continued to be the military’s main research and development agency. DARPA received heavy funding from the federal government, as military leaders continued to support the need for the integration of new AI technology. Military leaders’ optimism about AI technology is reflected in the ambitious goals that DARPA set. In 1981, DARPA aimed to create a “fifth generation system,” one that would “have knowledge information processing systems of a very high level. In these systems, intelligence will be greatly improved to approach that of a human being.”35 Three years later, in 1984, DARPA’s “Strategic Computing” report stressed the need for the new technology, stating, “Using this new technology [of artificial intelligence], machines will perform complex tasks with little human intervention, or even with complete autonomy.”36 It was in 1984 that the U.S. military began not just researching and developing AI, but actually integrating it into military applications for use on the battlefield. DARPA announced the creation of three different projects: an all-purpose autonomous land vehicle, a “pilot’s associate” to assist pilots during missions, and a battlefield management system for aircraft carriers. The military was beginning to rely on this AI technology, using it to assist human military leaders and soldiers. Fearing it would lose ground to Britain, China, and Japan, DARPA spent over $1 billion to maintain its lead.37

President Ronald Reagan continued the trend of the federal government using DARPA for advanced weapon development and showed the military’s commitment to developing AI military weapons and systems.

35. Paul Lehner, Artificial Intelligence and National Defense: Opportunity and Challenge, 164.
36. David Bellin, Computers in Battle: Will They Work?, 171.
37. Lehner, 166.
Reagan’s Strategic Defense Initiative (SDI), later nicknamed “Star Wars,” was a proposed network of hundreds of orbiting satellites with advanced weaponry and battle management capabilities. These satellites would be equipped with layers of computers, “where each layer of defense handles its own battle management and weapon allocation decisions.”38 Reagan’s SDI is a perfect example of the government and military’s overly ambitious integration of AI technology. Reagan was willing to put both highly advanced and nuclear weapons partially under the control of AI technology. Overall, Reagan’s SDI was a reckless proposition by the military, taking man out of the loop while putting weapons of mass destruction under the control of computer systems.

As a result of the military’s commitment to the research and development of AI, AI technology has developed rapidly, along with its integration into both society and military applications. Before looking at the future of AI, it is important to first look at the different levels of autonomy and where the technology stands today. In a nutshell, autonomy is the ability of a machine to function on its own with little to no human control or supervision. There are three types of machine autonomy: pre-programmed autonomy, limited autonomy, and complete autonomy. Pre-programmed autonomy is when a machine follows instructions and has no capacity to think for itself.39 An example of pre-programmed autonomy is a factory machine programmed for one job, such as welding or painting. Limited autonomy is the technology level that exists today, in which the machine is capable of carrying out most functions on its own but still relies on a human operator for more complex behaviors and decisions. Current U.S. UAVs possess limited autonomy, using sensors and data processing to come up with solutions but still relying on human decision making. Complete autonomy is the most advanced level: machines operating themselves with no human input or control.40 Although complete autonomy is still being developed, AI technology continues to progress at a rapid pace, opening the doors for complete autonomy, with DARPA estimating that complete autonomy will be achieved before 2030.41

38. Lehner, 159.
39. Krishnan, 44.
40. Krishnan, 45.
41. Krishnan, 44.
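These three levels can be summarized in a short sketch. The example decisions below, and the rule for which of them fall back to a human operator, are invented for illustration; only the three-level taxonomy itself comes from the text above.

```python
# A sketch of the three autonomy levels described above. The decisions
# and the fallback rule are invented for illustration.

from enum import Enum

class Autonomy(Enum):
    PRE_PROGRAMMED = 1  # fixed instructions, no capacity to decide anything
    LIMITED = 2         # most functions autonomous, complex decisions deferred
    COMPLETE = 3        # no human input or control at all

COMPLEX_DECISIONS = {"weapon_release", "target_selection"}

def requires_human(level, decision):
    """Return True if a human operator must make this decision."""
    if level is Autonomy.PRE_PROGRAMMED:
        # Anything beyond the fixed program needs a human.
        return decision != "repeat_programmed_task"
    if level is Autonomy.LIMITED:
        # Today's UAVs: navigation and sensing are autonomous, but
        # complex behaviors still go back to the operator.
        return decision in COMPLEX_DECISIONS
    return False  # complete autonomy: the human is out of the loop

for level in Autonomy:
    for decision in ("navigate", "weapon_release"):
        print(f"{level.name}: {decision} -> human needed: {requires_human(level, decision)}")
```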
In a 2007 interview, Tony Tether, the Director of DARPA, showed his agency’s optimism about and commitment to the development of future AI technology. Tether refers to DARPA’s cognitive program, the program focusing on research and development of thinking machines, as “game changing,” where the computer is able to “learn” its user.42 DARPA is confident that it will be able to create fully cognitive machines, making AI smarter and more closely emulating human intelligence. Tether discusses the Command Post of the Future (CPOF), a distributed, computer-run command and control system that functions 24/7, taking human operators out of the loop. The CPOF, though beneficial for its accurate and rapid data processing, is a dangerous example of over-reliance on AI. Tether says that “those people who are now doing that 24-by-7 won’t be needed,” but it is important, not just for safety but to retain full control, to have a human operator over military weapons and systems.43 This still shows the military’s influence over research and development, directing DARPA’s research toward an over-reliance on AI machines.

But what happens when humans rely on AI so much that there is no turning back? Vinge’s Singularity Theory holds that AI will one day surpass human intelligence, and that humans will eventually integrate with AI technology. Vinge’s Singularity points out the ultimate outcome of over-reliance on and over-optimism about AI technology: the loss of control of AI and the end of the human era.

42. Shachtman, “Darpa Chief Speaks.”
43. Shachtman.
Vinge warns that between 2005 and 2030, computer networks might “wake up,” ushering in an era of the synthesis of AI and human intelligence. In her book How We Became Posthuman, Hayles continues Vinge’s Singularity Theory and looks at the separation of humans from human intelligence, an era where the human mind has advanced psychologically and mentally when integrated with AI technology.44 Hayles argues that “the age of the human is drawing to a close.”45 Hayles looks at all the ways that humans are already beginning this integration with intelligent machines, such as computer-assisted surgery in medicine and the replacement of human workers with robotic arms in labor, showing that AI machines have the ability to integrate with or replace humans in a diverse number of aspects of society.46

III. Skepticism: The Dangers of Over-Reliance on AI

Although over-reliance on AI for military purposes is dangerous, AI does bring many benefits to society. Because of these benefits, humans are drawn to AI technology, becoming overly optimistic about and committed to the technology. These numerous benefits are what give the military its optimism. In this section, I will discuss AI’s benefits to civilian society, followed by the limitations and dangers of AI for both civilian society and the military.

AI has the ability to amplify human capabilities, surpassing the accuracy, expertise, and speed of a human performing the same task. Hearing, seeing, and motion are amplified by AI systems through speech recognition, computer vision, and robotics. Extremely rapid, efficient, and accurate data processing gives AI technology an advantage over humans.

44. Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, 2.
45. Hayles, 283.
46. Hayles, 284.
In order to look at these benefits, I will use examples of how AI can be applied to diverse sections of society. Speech recognition understands and creates speech, increasing speed, ease of access, and manual freedom when interacting with the machine.47 In business, office automation is relying on new AI speech recognition capabilities to streamline business operations. Data entry, automatic dictation and transcription, and information retrieval all benefit from AI speech recognition. Human users benefit from this technology through easier, streamlined communication.48 AI robotics is another beneficial emerging technology for a number of reasons, including increased productivity, reduced costs, replacement of skilled labor, and increased product quality.49 AI robotics gives the AI system the ability to perform manual tasks, making it useful for integration into industrial and manufacturing sectors of society, such as automobile and computer chip factories. In medicine, surgeons and doctors are now integrating AI technology to assist in challenging surgical operations and to identify and treat diseases.50 AI has even found its way into everyday life, assisting the elderly in senior facilities, assisting pilots on commercial airlines, and being integrated into human homes, creating “smart houses.”51

I recognize that this integration of AI is both beneficial and not dangerous. AI is helping to progress health, economic, and industrial technology, making it safer, more advanced, and more efficient. Although there are numerous benefits, it is also important to understand both the limitations and dangers of AI technology, specifically with its integration into military systems.

47. Mishkoff, 108.
48. Mishkoff, 108.
49. Mishkoff, 120.
50. Von Drehle, “Meet Dr. Robot,” 44; Velichenko, “Using Artificial Intelligence and Computer Technologies for Developing Treatment Programs for Complex Immune Diseases,” 635.
51. Anderson, “Robot Be Good,” 72.
Hubert Dreyfus leads the charge against the integration of AI, arguing both the limitations and dangers of AI machines. Dreyfus claims in What Computers Can’t Do that early AI developers were “blinded by their early success and hypnotized by the assumption that thinking is a continuum,” meaning that Dreyfus believes this progress cannot continue.52 Dreyfus is specifically wary of the integration of AI into systems when they have not been tested. The over-optimism and over-reliance of AI supporters give AI machines the ability to function autonomously when they have not been fully tested. In Mind Over Machine, Dreyfus expands his skepticism, warning of the dangers of AI decision making because, to him, decisions must be pre-programmed into a computer, which leads to the AI’s “ability to use intuition [to be] forfeited and replaced by merely competent decision making. In a crisis competence is not good enough.”53 Dreyfus takes a skeptical approach, recognizing the benefits of AI for society, specifically information processing, but strongly opposing the forcing of undeveloped AI on society. He says that “AI workers feel that some concrete results are better than none,” that AI developers continue to integrate untested AI into systems without working out all the consequences of doing so.54 Dreyfus is correct in saying that humans must not integrate untested, underdeveloped AI into society, but rather always be cautious. This skeptical approach is important for the safe integration of AI, specifically when removing a human operator and replacing him with an autonomous machine.

Since the 1940s, there has been skepticism about AI in military applications from a diverse group of opponents.

52. Hubert Dreyfus, What Computers Can’t Do, 302.
53. Hubert Dreyfus, Mind Over Machine, 31.
54. Hubert Dreyfus, What Computers Can’t Do, 304.
The military’s commitment to the use of and reliance on autonomous machines for military functions comes with many dangers, removing human operators and putting more decisions into the hands of the AI machine. Dreyfus argues the danger of implementing “questionable A.I.-based technologies” that have not been tested. To Dreyfus, allowing these automated defense systems to be implemented “without the widespread and informed involvement of the people to be affected” is not only dangerous but also inappropriate.55 It is inappropriate to integrate untested AI into daily life, where that AI may malfunction or make a mistake that could negatively impact human life. Dreyfus is wary of military decision-makers being tempted to “install questionable AI-based technologies in a variety of critical contexts,” especially in applications that involve weapons and human life.56 Whether it is to justify the billions of dollars spent on research and development or the temptation of the advanced capabilities of the AI machines, military leaders must be cautious of over-reliance on AI technology for military applications.

Dreyfus was not the first skeptic of technology and its integration into military applications. Wiener’s letter “A Scientist Rebels” showed early scientists’ resistance to and skepticism of research and development’s relationship with the military. The point that Wiener wants to make is that even if scientific information seems innocent, it can still have catastrophic consequences. Wiener’s letter was written shortly after the bombings of Hiroshima and Nagasaki, where the atomic bomb developers’ work fell into the hands of the military. To Wiener, it was even worse that the bomb was used “to kill foreign civilians indiscriminately.”57 The broad message of Wiener’s letter is that scientists should be skeptical of the military application of their research.

55. Hubert Dreyfus, Mind Over Machine, 12.
56. Hubert Dreyfus, Mind Over Machine, 12.
57. Wiener, “From the Archives,” 37.
Though their work may seem innocent and purely empirical, it can still have grave consequences if it falls into the hands of the military. Though Wiener is not explicitly talking about AI research, his skepticism is important. Wiener emphasizes the need for researchers and developers to be wary of their work, and warns them of the dangers of cooperating with the military.

Wiener’s criticism of the military’s relationship with research and development has not changed that relationship, and the military continues to develop and use more AI technology in its weapons and systems. The military application of AI brings a number of dangers to friendlies, enemies, and civilians alike. Though AI has many benefits in the military, the dangers outweigh those benefits. The idea of taking a human out of the loop is not only dangerous; when human life is on the line, can a thinking machine be trusted to function like a human? Functioning completely autonomously, how do we know that the machine will emulate the thought, decision making, and ethics of a human? The following are some of the dangers of integrating AI technology into military applications.

As previously warned by Wiener, government misuse of AI in the military could be a dangerous outcome of AI’s integration. Governments like that of the United States have massive defense budgets, giving them the resources to build large armies of thinking machines. This increases the chances of unethical use of AI by countries, specifically the U.S., giving these countries the opportunity not just to use AI technology for traditional warfare, but to expand its use to any sort of security. The use of AI opens the doors for unethical infringement upon civil liberties and privacy within the country.58

58. Krishnan, 147-148.
Another major danger of the use of AI in the military is the possibility of malfunctioning weapons and networks, when the weapon or system acts in an unanticipated way. As previously stated, computer programming is built on a cycle of writing programs, finding errors through malfunction, and fixing those errors. However, when using AI technology that may not be perfected, the risk of malfunction is greater. Software errors and unpredictable failures leading to malfunction are both liabilities for an AI military system. These chances of malfunction make AI military systems untrustworthy, a huge danger when heavily relying on AI software integrated into military networks.59 It is very challenging to test for errors in military software. Software can often pass practical tests, but there are so many situations and scenarios that perfecting the software is nearly impossible.60 The larger the networks, the greater the dangers of malfunction. Thus, when AI conventional weapons are networked and integrated into larger AI defense networks, “an error in one network component could ‘infect’ many other components.”61 The malfunction of an AI weapon is not only dangerous to those who are physically affected, but also opens up ethical and legal concerns. The malfunction of an AI system could be catastrophic, especially if that system is in control of WMDs. AI-controlled military systems increase the chances of accidental war considerably.

However, the danger of malfunction is not just theory. July 1988 provided an example of an AI system malfunction. The U.S.S. Vincennes, a U.S. warship nicknamed “Robo-cruiser” because of its automated Aegis system, an automated radar and battle management system, was patrolling the Persian Gulf. An Iranian civilian airliner carrying 290 people registered on the system as an Iranian F-14 fighter, and the computer system considered it an enemy.

59. Bellin, 209.
60. Bellin, 209.
61. Krishnan, 152.
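Krishnan’s “infection” worry can be illustrated with a toy sketch of fault propagation. The component graph below is invented for illustration; the point is only that in a tightly coupled network, one faulty component compromises everything downstream of it.

```python
# A toy sketch of the "infection" danger quoted above: in a tightly
# networked defense system, a fault in one component reaches everything
# that depends on its output. The component graph is invented.

dependencies = {  # component -> components that consume its output
    "radar_feed": ["threat_classifier"],
    "threat_classifier": ["fire_control", "battle_display"],
    "fire_control": [],
    "battle_display": [],
}

def affected_by(fault, dependencies):
    """Transitively collect every component compromised by one fault."""
    compromised, stack = set(), [fault]
    while stack:
        component = stack.pop()
        if component not in compromised:
            compromised.add(component)
            stack.extend(dependencies.get(component, []))
    return compromised

# One bad sensor feed corrupts classification, fire control, and the
# operator's picture of the battle all at once.
print(sorted(affected_by("radar_feed", dependencies)))
```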
The system fired and took down the plane, killing all 290 people. This event showed that humans are always needed in the loop, especially as machine autonomy grows. Giving a machine full control over weapon systems is reckless and dangerous, and if the military continues to phase out human operators, these AI systems will become increasingly greater liabilities.62

The weaknesses in the software and functioning capabilities of AI military systems also make them vulnerable to probing and hacking, exposing flaws or losing control of the unmanned system.63 Last year, Iran was able to capture a U.S. drone by hacking its GPS system and making it land in Iran instead of what it thought was Afghanistan. The Iranian engineer who worked on the team that hijacked the drone said that they “electronically ambushed” the drone: “By putting noise [jamming] on the communications, you force the bird into autopilot. This is where the bird loses its brain.” The Iranians’ successful hijacking of the drone shows the vulnerabilities of software on even advanced AI systems integrated into drones.64

Generally, war is not predictable, and AI machines function off of programs written for what is predictable. This is a major flaw in AI military technology, as the programs that make AI function consist of rules and code. These rules and codes are precise, making it nearly impossible for AI technology to adapt to a situation and change its functions. Because war is unpredictable, computerized battle management technology lacks both experience and morality, both of which are needed to make informed and moral decisions on the battlefield. The ability to adapt is necessary for battlefield management, and in some cases, computer programming limits the technology from making those decisions.65

62. Peter Singer, “Robots At War: The New Battlefield,” 40.
63. Alan Brown, “The Drone Warriors,” 24.
64. Scott Peterson, “Iran Hijacked US Drone, says Iranian Engineer.”
65. Bellin, 233.
The last danger, the “Terminator Scenario,” is more of a stretch, but is still a possibility. In the “Terminator Scenario,” machines become self-aware, see humans as their enemy, and take over the world, destroying humanity. As AI machines become increasingly intelligent, their ability to become self-aware and intellectually evolve will also develop. The idea of AI machines beginning to “learn” their human operators and environments is the start of creating machines that will become fully self-aware. If these self-aware machines have enough power, for example through their integration into military systems, they have the power to dispose of humanity.66 Though the full destruction of humanity is a stretch, the danger of AI turning on its human creators is still a possibility and should be recognized as a possible consequence of integrating AI into military systems.

IV. A Continuing Trend: The Military’s Exponential Use of Autonomous AI

Though these dangers are apparent, and in some cases have led to the loss of human life, the U.S. military continues to rely exponentially more on AI technology in its military systems, integrated into both its weapon systems and its battle network systems. The military is using AI technology such as autonomous drones, AI battlefield management systems, and AI communication and decision-making networks for national security and on the battlefield, ushering in a new era of war technology. The idea of taking man out of the loop on the battlefield is dangerous and reckless. Removing human operators is not only a threat to human life, but also opens the debate over ethical, legal, and moral problems regarding the use of AI technology in battle.

66. Krishnan, 154.
AI has progressively been integrated into military applications, the most common being weapons (guided missiles and drones) and expert systems for national defense and battlefield management. This increased integration has led to both an over-reliance on and an over-optimism about the technology. The rise of drone warfare through the use of UAVs (Unmanned Aerial Vehicles) and UCAVs (Unmanned Combat Aerial Vehicles) has brought numerous benefits to military combat, but also many concerns. As UCAVs become exponentially more autonomous, their responsibilities have grown, with new technology and advanced capabilities replacing human operators and taking humans out of the loop.67

The U.S. military's current level of autonomy on UCAVs is supervised autonomy, where a machine can carry out most functions without having to rely on pre-programmed behaviors. With supervised autonomy, an AI machine can make many decisions on its own, requiring little human supervision. The machine still relies on a human operator for final complex decisions such as weapon release and targeting, but is able to function mostly on its own.68 Supervised autonomy is where the military should stop its exponential integration: it keeps complex legal and ethical decisions in the hands of a human operator while still drawing on the benefits of AI. When the final decision involves human life or destruction, it is important to have a human operator making that decision rather than allowing the computer to decide. Supervised autonomy still allows a human operator to monitor the functions of the UCAV, keeping it ethically and legally under control. It is especially dangerous that the U.S. military is working toward the creation of completely autonomous machines, ones that can operate on their own with no human supervision or control. Complete autonomy gives the machine the ability to learn, think, and adjust its behavior in specific situations.69

67 Hugh McDaid, Robot Warriors: The Top Secret History of the Pilotless Plane, 162.
68 Krishnan, 44.
69 Krishnan, 44.
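The difference between supervised and complete autonomy can be made concrete with a short schematic sketch. This is a hypothetical illustration, not any real weapon-control software: the function names, data structure, and confidence threshold are all invented.

    # Illustrative only: "supervised autonomy" as a human-in-the-loop gate.
    # Routine functions run autonomously, but the ethically loaded decision
    # (weapon release) must pass through a human operator.

    from dataclasses import dataclass

    @dataclass
    class Target:
        identifier: str
        confidence: float  # the machine's own confidence in its identification

    def operator_decides(target: Target) -> bool:
        # A person, not an algorithm, makes the final call.
        answer = input(f"Release weapon on {target.identifier}? (y/n) ")
        return answer.strip().lower() == "y"

    def request_weapon_release(target: Target) -> bool:
        # Under supervised autonomy the machine may only *request* release;
        # it never decides alone.
        if target.confidence < 0.95:   # hypothetical threshold
            return False               # the machine defers before even asking
        return operator_decides(target)

Under complete autonomy, the call to operator_decides would be replaced by the machine's own judgment, which is exactly the step this paper argues the military should never take.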
Giving these completely autonomous machines the ability to make their own decisions is dangerous, as their decisions would be unpredictable and uncontrollable. The U.S. military's path toward creating and deploying completely autonomous machines is reckless; supervised autonomy is the farthest the military should go with AI technology and warfare.

In the last decade, the use of military robotics has grown for a number of reasons, including the numerous benefits that AI robotics brings to the battlefield. Originally used purely for reconnaissance, UAVs are now utilized by the military as weapons. The use of UAVs and other AI weapons is heavily supported by low-ranking military personnel, the ones who directly interact with the drones. Higher-ranking military officials and political leaders are split, with some fully supporting their use while others recognize the dangers and concerns that come with it. For now, the benefits that UAVs possess continue to drive their integration into the U.S. military.

One benefit of AI weaponry is that it reduces manpower requirements. In first-world countries, especially the U.S., the pool of prospective soldiers is shrinking: both the physical requirements and the limited attractiveness of military service are keeping Americans from enlisting. As the military budget decreases, UCAVs are able to replace human soldiers, cutting personnel costs.70 Another benefit of replacing human soldiers with AI robotics is that it takes humans out of the line of fire while also eliminating human fallibility. The reduction in casualties of war is very appealing not only to the fighting soldiers, but also to their families, friends, and fellow citizens. Taking soldiers out of the line of fire and replacing them with robots saves soldiers' lives.

70 Krishnan, 35.
These machines are also able to reduce mistakes and increase performance compared to their human counterparts. The amplified capabilities of the machines give them the ability to outperform human soldiers.71 The ability to function 24/7, low response times, advanced communication networks, rapid data and information processing, and targeting speed and accuracy are some of the many benefits of AI robotics on the battlefield.

The benefits of AI military robotics matter most to lower-ranking military personnel. These soldiers interact with the robots on the battlefield, recognizing the benefits they bring to them personally, while failing to recognize the ethical and legal concerns that also come along with the drones. The following are quotes from enlisted, low-ranking U.S. soldiers:72

• "Its surveillance, target acquisition, and route reconnaissance all in one. We saved countless lives, caught hundreds of bad guys and disabled tons of IEDs in our support of troops on the ground." -Spc. Eric Myles, UAV Operator
• "We call the Raven and Wasp our Airborne Flying Binoculars and Guardian Angels." -GySgt. Butler
• "The simple fact is this technology saves lives." -Sgt. David Norsworthy

It is understandable why low-ranking soldiers embrace the technology and support its use. UCAVs have proven to be highly effective on the battlefield, saving the lives of U.S. soldiers and effectively combating enemies with their advanced AI functions. Though UCAVs are effective on the battlefield and especially benefit the soldiers on the front line, the ethical and legal concerns remain very important consequences of the overall use of AI technology.

Higher-ranking military leaders and political leaders, however, are split in their support. Some fully support the technology, while others are skeptical of too much automation and the dangers of over-reliance.

71 Krishnan, 40.
72 U.S. House of Representatives. Subcommittee on National Security and Foreign Affairs. Rise of the Drones: Unmanned Systems and the Future of War Hearing, Fagan, 63.
German General Wolfgang Schneiderhan, Chief of Staff of the German Army from 2002 to 2009, shows this skepticism in his article "UVs, An Indispensable Asset in Operations." Schneiderhan not only looks at the dangers of taking the human out of the loop, but also at the importance of humanitarian law, specifically where human life is involved. Schneiderhan explicitly warns that "unmanned vehicles must retain a 'man in the loop' function in more complex scenarios or weapon employment," and is especially wary of "cognitive computer failure combined with a fully automated and potentially deadly response."73 Schneiderhan's skepticism recognizes the main dangers of over-reliance on AI for military use while stressing the importance of keeping a human operator involved in decision making. Schneiderhan argues that a machine should not be making decisions regarding human life; rather, those decisions should be made by a conscious human who has both experience and situational awareness, and who understands humanitarian law.74 Schneiderhan's skepticism contrasts with the over-optimism that many U.S. military leaders share about the use of AI in weaponry.

Navy Vice-Admiral Arthur Cebrowski, chief of the DoD's Office of Force Transformation, stressed the importance of AI technology for "the military transformation," using its advanced capabilities and benefits to develop war technology. Cebrowski argues that it is "necessary" to move money and manpower to support new technologies, including AI research and development, instead of focusing on improving old technologies.75 Navy Rear Admiral Barton Strong, DoD Head of Joint Projects, argues that AI technology and drones will "revolutionize warfare."

73 Schneiderhan, "UVs, An Indispensable Asset in Operations," 91.
74 Schneiderhan, 91.
75 U.S. Senate. Foreign Affairs, Defense, and Trade Division. Military Transformation: Intelligence, Surveillance and Reconnaissance, 7.
Strong says that because "they are relatively inexpensive and can effectively accomplish missions without risking human life," drones are necessary for transforming armies.76 General James Mattis, head of U.S. Joint Forces Command and NATO Transformation, argues that AI robots will continue to play a larger role in future military operations. Mattis fully supports the use of AI weapons; since he commanded forces in Iraq, the UAV force has grown to over 5,300 drones. Mattis even understands the relationship that can form between a soldier and a machine. He embraces the reduction of risk to soldiers, the efficient gathering of intelligence, and the drones' ability to strike stealthily. Mattis's high rank and support of UAVs will lead to even greater use of them.77 From a soldier's point of view, the benefits that drones bring far exceed the legal and ethical concerns, concerns for which those soldiers are not responsible. Drones are proving effective on the battlefield, earning support from both low- and high-ranking military leaders. Civilian researchers and scientists, however, continue to be skeptical of the use of AI in the military, especially when human life is involved.

Looking more closely at the benefits of UCAVs, it is clear why both low-ranking soldiers and military leaders are optimistic about and supportive of their use. The clearest reason is the reduction of friendly military casualties, taking U.S. soldiers out of the line of fire.78 When soldier casualties play a large part in the public perception of war, reducing the loss of human life makes war less devastating on the home front. The advanced capabilities of AI integrated into military robots and systems are another appealing benefit.

76 McDaid, 6.
77 Brown, 23.
78 John Keller, "Air Force to Use Artificial Intelligence and Other Advanced Data Processing to Hit the Enemy Where It Hurts," 6.
Rapid information processing, accurate decision making and calculations, 24/7 functionality, and battlefield assessment amplify the capabilities of a human soldier, making UCAVs extremely efficient and dangerous. By processing large amounts of data at rapid speed, UCAVs can "hit the enemy where it hurts" and take advantage of calculated vulnerabilities before the enemy can prepare a defense.79 In a chaotic battle situation, where a soldier has to process numerous environmental, physical, and mental factors, speed and accuracy of decision making are essential. AI systems have the ability to cope with the chaos of a battlefield, processing hundreds of variables and making decisions faster and more efficiently than human soldiers.80 While soldiers are hindered by fear and pain, AI machines lack such emotions and can focus solely on their function on the battlefield. The advanced capabilities of UCAVs have proven extremely effective in combat. But though UCAVs are efficient and deadly soldiers, they also open the door to numerous ethical, legal, and moral concerns.

V. Ethical Concerns

Military ethics is a very broad concept, so in order to understand the ethical concerns raised by the use of AI in the military, I will first discuss what military ethics is. In a broad sense, ethics examines what is right and wrong. Military ethics is often a confusing and contradictory concept because war involves violence and killing, acts generally considered immoral. Though some argue that military ethics cannot exist because of the killing of others, I will work from a definition of military ethics in which killing can be ethical: war is ethical if it counters hostile aggression and is conducted lawfully.81

79 Keller, 10.
80 The Economist, "No Command, and Control," 89.
81 Krishnan, 117.
For example, the U.S.'s planned raid on Osama bin Laden's compound, which led to his killing, could be viewed as ethical: bin Laden was operating an international terrorist organization that had successfully killed thousands of civilians through its attacks. However, the use of WMDs, for example the U.S.'s bombing of Hiroshima and Nagasaki, is often viewed as unethical. In the case of those bombings, thousands of civilians were killed, and it can be debated that the use of WMDs is not lawful due to their catastrophic damage to a civilian population. The bombings of Hiroshima and Nagasaki can be viewed as war crimes against a civilian population, breaking numerous laws of war established in the Rules of Aerial Warfare (The Hague, 1923), including Article XXII, which states: "Aerial bombardment for the purpose of terrorizing the civilian population, of destroying or damaging private property not of military character, or of injuring non-combatants is prohibited."82

As these examples show, civilian casualties are among the most serious ethical concerns with war in general. As previously stated, the tragedy in the Persian Gulf in 1988 showed the consequences of an AI system's mistake for a large group of civilians. As the military progressively utilizes UCAVs for combat, civilian deaths from UCAVs have also risen. The U.S. military has relied heavily on UCAVs for counter-terrorism operations in Pakistan. Because of the effectiveness of the strikes, the U.S. continues to use drones for airstrikes on terrorist leaders and training camps. However, with increasing drone strikes, the death toll of civilians and non-militants has risen exponentially, and has even outnumbered the death toll of targeted militants.83 This is where the unethical nature of UCAV airstrikes begins to unfold: the effectiveness of the airstrikes is appealing to the military, which continues to employ them while ignoring the thousands of civilians who are also killed.

82 The Hague. 1923. Draft Rules of Aerial Warfare. Netherlands: The Hague.
83 Leila Hudson, "Drone Warfare: Blowback From The New American Way of War," 122.
Marge Van Cleef, Co-Chair of the Women's International League for Peace and Freedom, takes the ethical argument a step further, claiming that drone warfare is terrorism itself. Van Cleef says that "families in the targeted regions have been wiped out simply because a suspected individual happened to be near them or in their home. No proof is needed."84 The use of UCAVs has proven unethical for this reason: civilians are continuously killed in drone strikes. Whether through malfunction, lack of information, or another mistake, UCAVs have shown that they are not able to avoid killing civilians. However, civilians are not the only victims of UCAV use.

Moral disengagement, the changing of the psychological impact of killing, is another major ethical concern of UCAV use. When a soldier is put in charge of a UCAV and gives that UCAV the order to kill, having a machine as a barrier neutralizes the soldier's inhibition against killing. Because of this barrier, soldiers can kill the enemy from a great distance, disengaging them from the actual feeling of taking a human life. Using UCAVs separates a soldier from the emotional and moral consequences of killing.85 An example of this moral disengagement is a UCAV operator in Las Vegas who spends his day operating a UCAV, carrying out airstrikes and other missions thousands of miles away, and then joins his family for dinner that night. Living in these two worlds daily not only leads to emotional detachment from killing, but also hides the horrors of war. Often on the virtual battlefield, "soldiers are less situationally aware and also less restrained because of emotional detachment."86 Because of this emotional detachment from killing, UCAVs are unethical in that they make the psychological impact of killing non-existent.

84 Marge Van Cleef, "Drone Warfare=Terrorism," 20.
85 Krishnan, 128.
86 U.S. House of Representatives. Subcommittee on National Security and Foreign Affairs. Rise of the Drones: Unmanned Systems and the Future of War Hearing, Barrett, 13.
One of the main deterrents of war is the loss of human life. But when humans are taken out of the line of fire and human casualties shrink as AI weapons increase, does it become easier to go to war? An unethical result of the rising use of robotic soldiers is the possibility of unnecessary war, as the perception of war changes due to the lack of military casualties.87 Unmanned systems in war "further disconnect the military from society. People are more likely to support the use of force as long as they view it as costless."88 When the people at home see only the lack of human casualties, the horrors of war are hidden, and they may believe the impact of going to war is less than it really is. This false impression that "war can be waged with fewer costs and risks" creates an illusion that war is easy and cheap.89 This can lead nations into wars that might not be necessary, giving them the perception, "gee, warfare is easy."90

These three ethical concerns all fall under the idea of automated killing, which is an ethical concern in itself. Giving a machine full control over the decision to end a life is unethical for a number of reasons: machines lack empathy and morals, and have no concept of the finality of life or of human experience. AI machines are programmed far differently from humans, so the decision to end a human life should never be left up to a machine. Looking at a machine's morals, it may have the ability to comprehend environments and situations, but it will not have the ability to feel remorse or fear punishment.91 In the event that an AI machine wrongly kills a human, will it feel remorse for that killing? It is unethical and dangerous to use AI weaponry because humans have the ability to think morally, while a machine may just "blindly pull the trigger because some algorithm says so."92

87 Singer, 44.
88 Singer, 44.
89 Cortright, "The Prospect of Global Drone Warfare."
90 Singer, 44.
91 Krishnan, 132.
92 Krishnan, 132.
AI machines also lack empathy, the ability to empathize with human beings. If an AI machine cannot understand human suffering and has never experienced it itself, it will continue to carry out unethical acts without being emotionally affected. Tied to empathy and morals, AI machines also lack the concept of the finality of life and the idea of being mortal. Having neither knowledge nor experience of death, an AI machine cannot take the finality of life into consideration when making an ethical decision. With no sense of its own mortality, an AI machine lacks empathy for death, freeing it from moral deliberation.93 Automated killing opens the door to all of these ethical concerns.

VI. Legal Concerns

Ethical concerns, however, are not the only problem with the use of AI machines in the military. There are also a number of legal concerns regarding the use of AI weaponry, specifically with the rise of drones. Modern warfare is still governed by the laws of the Geneva Conventions, a series of treaties establishing the laws of war, armed conflict, and humanitarian treatment. However, the Geneva Conventions were drafted in the 1940s, a time when warfare was radically different. This means the laws of war are outdated; 20th-century military law cannot keep up with 21st-century war technology.94 The laws of armed conflict need to be updated before the use of UCAVs continues, in order to establish the legality of using them in the first place. For example, an article of the Geneva Conventions' protocol states: "effective advance warning shall be given of attacks which may affect the civilian population, unless circumstances do not permit."95

93 Krishnan, 133.
94 U.S. House of Representatives. Subcommittee on National Security and Foreign Affairs. Rise of the Drones: Unmanned Systems and the Future of War Hearing, Singer, 7.
95 Michael Newton, "Flying Into the Future: Drone Warfare and the Changing Face of Humanitarian Law."
However, the killing of civilians by UCAVs without prior warning violates the humanitarian protections established by the Geneva Conventions, illegally carrying out attacks that result in civilian deaths. Only combatants can be lawfully targeted in armed conflict, and any killing of non-combatants violates the law of armed conflict.96 Armed conflict is changing at such a fast pace that it is hard to establish humanitarian laws of war that can adapt to changing technologies.

As of now, the actions of UCAVs could be deemed war crimes: violations of the laws of armed conflict. One legal concern with the use of UCAVs is the debate over whether or not they constitute "state sanctioned lethal force." If they are state sanctioned, like a soldier in the U.S. Army, they are legal and must follow the laws of armed conflict. However, numerous drones are operated by the CIA, meaning they are not state sanctioned. Because these drones are not state sanctioned, they violate international law of armed conflict, as it is state sanction that gives the U.S. military the right to use lethal force. The killing of civilians in general, but specifically by non-state-sanctioned weapons, can be seen as a war crime.97

Another legal problem of drone warfare concerns liability for the weapon: who is to blame for an AI malfunction or mistake? So many people are involved in the development, building, and operation of a drone that it is hard to decide who is responsible for an error. Is it the computer scientist who programmed the drone, the engineer who built it, the operator who flew it, or the military leader who authorized the attack? It can even be argued that the drone is solely responsible for its own actions, and should be tried and punished as though it were a human soldier.

96 Ryan Vogel, "Drone Warfare and the Law of Armed Conflict," 105.
97 Van Cleef, 20.
Article 1 of the Hague Convention requires combatants to be "commanded by a person responsible for his subordinates."98 This makes sense for human soldiers, but it makes it very hard to legally control an autonomous machine, one that cannot take responsibility for its own actions when acting autonomously. Because UCAV use is rising, legal accountability needs to be established for the event of a robotic malfunction or mistake leading to human or environmental damage.99

The field of AI continues to develop at an extremely rapid pace, opening the door for increased optimism about and reliance on the new technologies. However, this exponential growth comes with numerous ethical, legal, and moral concerns, especially in regard to AI's relationship with the military. The military has influenced the research and development of AI since the field was established in the 1950s, and it continues to have a hand in AI's growth through heavy funding and involvement. Though AI brings great benefits to society politically, socially, economically, and technologically, we should be wary of over-reliance on the technology. It is important to always keep a human in the loop, whether for civilian or military purposes. AI technology has the power to shape the society we live in today, but each increase in autonomy should be treated with caution.

98 Krishnan, 103.
99 Krishnan, 103.
Bibliography

Adler, Paul S., and Terry Winograd. Usability: Turning Technologies Into Tools. New York: Oxford University Press, 1992.
Anderson, Alan Ross. Minds and Machines. New Jersey: Prentice-Hall, 1964.
Anderson, Michael, and Susan Leigh Anderson. "Robot Be Good." Scientific American 303, no. 4 (2010): 72-77.
Bellin, David, and Gary Chapman. Computers in Battle: Will They Work? New York: Harcourt Brace Jovanovich Publishers, 1987.
Brown, Alan S. "The Drone Warriors." Mechanical Engineering 132, no. 1 (January 2010): 22-27.
Burks, Arthur W. "The ENIAC: The First General-Purpose Electronic Computer." Annals of the History of Computing 3, no. 4 (1981): 310-389.
Cortright, David. "The Prospect of Global Drone Warfare." CNN Wire (October 19, 2011).
Dhume, Sadanand. "The Morality of Drone Warfare: The Reports About Civilian Casualties are Unreliable." Wall Street Journal Online (August 17, 2011).
Dreyfus, Hubert L. Mind Over Machine. New York: The Free Press, 1986.
Dreyfus, Hubert L. What Computers Can't Do: The Limits of Artificial Intelligence. New York: Harper Colophon Books, 1979.
Edwards, Paul N. The Closed World: Computers and the Politics of Discourse in Cold War America. Massachusetts: MIT Press, 1996.
Ford, Nigel. How Machines Think. Chichester, England: John Wiley and Sons, 1987.
Hayles, Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: The University of Chicago Press, 1999.
Heims, Steve J. John von Neumann and Norbert Wiener: From Mathematics to the Technologies of Life and Death. Massachusetts: MIT Press, 1980.
Hogan, James P. Mind Matters. New York: Ballantine Publishing Group, 1997.
Hudson, Leila, Colin Owens, and Matt Flannes. "Drone Warfare: Blowback From The New American Way of War." Middle East Policy 18, no. 3 (Fall 2011): 122-132.
Keller, John. "Air Force to Use Artificial Intelligence and Other Advanced Data Processing to Hit the Enemy Where It Hurts." Military & Aerospace Electronics 21, no. 3 (2010): 6-10.
Krishnan, Armin. Killer Robots: Legality and Ethicality of Autonomous Weapons. Vermont: Ashgate, 2009.
Lehner, Paul. Artificial Intelligence and National Defense: Opportunity and Challenge. Pennsylvania: Tab Books, 1989.
Le Page, Michael. "What Happens When We Become Obsolete?" New Scientist 211, no. 2822 (July 2011): 40-41.
Lyons, Daniel. "I, Robot." Newsweek 153, no. 21 (May 25, 2009): 66-73.
Masci, David. "Artificial Intelligence." CQ Researcher 7, no. 42 (1997): 985-1008.
McCarthy, John, and Marvin Minsky. "A Proposal for the Dartmouth Summer Project on Artificial Intelligence." AI Magazine 27, no. 4 (Winter 2006): 12-14.
McCarthy, John. Defending A.I. Research. California: CSLI Publications, 1996.
McDaid, Hugh, and David Oliver. Robot Warriors: The Top Secret History of the Pilotless Plane. London: Orion Books, 1997.
McGinnis, John. "Accelerating AI." Northwestern University Law Review 104, no. 3 (2010): 1253-1269.
Michie, Donald. Machine Intelligence and Related Topics. New York: Gordon and Breach, 1982.
Minsky, Marvin, and Seymour Papert. "Artificial Intelligence." Lecture for the Oregon State System's 1974 Condon Lectures, Eugene, OR, 1974.
Mishkoff, Henry C. Understanding Artificial Intelligence. Dallas, Texas: Texas Instruments, 1985.
Newton, Michael A. "Flying Into the Future: Drone Warfare and the Changing Face of Humanitarian Law." Keynote Address to the University of Denver's 2010 Sutton Colloquium, Denver, CO, November 6, 2010.
Pelton, Joseph N. "Science Fiction vs. Reality." Futurist 42, no. 5 (Sept/Oct 2008): 30-37.
Perlmutter, David D. Visions of War: Picturing War From The Stone Age to the Cyber Age. New York: St. Martin's Press, 1999.
Peterson, Scott. "Iran Hijacked US Drone, says Iranian Engineer." Christian Science Monitor (December 15, 2011).
Schneiderhan, Wolfgang. "UVs, An Indispensable Asset in Operations." NATO's Nations and Partners for Peace 52, no. 1 (2007): 88-92.
Shachtman, Noah. "Darpa Chief Speaks." Wired (February 20, 2007).
Shapiro, Kevin. "How the Mind Works." Commentary 123, no. 5 (May 2007): 55-60.
Singer, P.W. "Robots At War: The New Battlefield." Wilson Quarterly 33, no. 1 (Winter 2009): 30-48.
The Economist. "Drones and the man: The Ethics of Warfare." The Economist 400, no. 8744 (July 2010): 10.
The Economist. "No Command, and Control." The Economist 397, no. 8710 (November 2010): 89.
The Hague. Draft Rules of Aerial Warfare. Netherlands: The Hague, 1923.
Triclot, Mathieu. "Norbert Wiener's Politics and the History of Cybernetics." Lecture at ICESHS's 2006 conference The Global and the Local: The History of Science and the Cultural Integration of Europe, Cracow, Poland, September 6-9, 2006.
Tucker, Patrick. "Thank You Very Much, Mr. Roboto." Futurist 45, no. 5 (2011): 24-28.
Turing, Alan. "Computing Machinery and Intelligence." Mind 59 (1950): 433-460.
U.S. House of Representatives. Subcommittee on National Security and Foreign Affairs. Rise of the Drones: Unmanned Systems and the Future of War Hearing, 23 March 2010. Washington: Government Printing Office, 2010.
U.S. Senate. Foreign Affairs, Defense, and Trade Division. Military Transformation: Intelligence, Surveillance and Reconnaissance. (S. Rpt RL31425). Washington: The Library of Congress, 17 January 2003.
U.S. Senate. Foreign Affairs, Defense, and Trade Division. Unmanned Aerial Vehicles: Background and Issues for Congress Report. (S. Rpt RL31872). Washington: The Library of Congress, 25 April 2003.
U.S. Senate. Foreign Affairs, Defense, and Trade Division. Unmanned Aerial Vehicles: Background and Issues for Congress Report. (S. Rpt RL31872). Washington: The Library of Congress, 21 November 2005.
Van Cleef, Marge. "Drone Warfare=Terrorism." Peace and Freedom 70, no. 1 (Spring 2010): 20.
Velichenko, V., and D. Pritykin. "Using Artificial Intelligence and Computer Technologies for Developing Treatment Programs for Complex Immune Diseases." Journal of Mathematical Sciences 172, no. 5 (2011): 635-649.
Vinge, Vernor. "Singularity." Lecture at the VISION-21 Symposium, Cleveland, OH, March 30-31, 1993.
Vogel, Ryan J. "Drone Warfare and the Law of Armed Conflict." Denver Journal of International Law and Policy 39, no. 1 (Winter 2010): 101-138.
Von Drehle, David. "Meet Dr. Robot." Time 176, no. 24 (2010): 44-50.
von Neumann, John. The Computer and the Brain. New Haven: Yale University Press, 1958.
Wiener, Norbert. "From the Archives." Science, Technology, & Human Values 8, no. 3 (Summer 1983): 36-38.
Wiener, Norbert. The Human Use of Human Beings: Cybernetics and Society. New York: Avon Press, 1950.
