Autonomous Robots and You: A Risky Business

A Report By Jamey Combs



Introduction

In Philip K. Dick's novel Do Androids Dream of Electric Sheep?, robotics has advanced to the point where androids are nearly indistinguishable from real humans and have full autonomous control over their bodies and actions. With this autonomy comes the ability to consciously use lethal force against a human or another android. In the story, one of the androids manages to injure a police bounty hunter and attempts to assassinate the protagonist, Rick Deckard. Two more androids try to kill Deckard in a shootout in a mostly abandoned apartment building.

The field of robotics is one of the most rapidly advancing in the world. Every day, new breakthroughs improve existing capabilities and create new ones. With new advances, however, come new questions. This report examines one of the biggest of them, a question science fiction has long tried to answer: should robots be allowed to possess the capability to consciously kill, and what might plausibly happen if they do? One popular possible outcome is a robot uprising against the creators, though this outcome also requires the development of a sentient artificial intelligence, or AI. Another, slightly less dramatic outcome, one which does not require a fully sentient AI, is the threat of autonomous weapon systems malfunctioning in unexpected ways, causing unnecessary death and destruction.

Robot Revolution

The idea of a robot revolution is a favorite of science fiction writers, and shows up in some form in almost every story focused on robots or androids. Three of the most famous examples of robotic revolution can be found in the Terminator series of movies, the Matrix trilogy, and the Geth from the Mass Effect trilogy. Even the androids in Do Androids Dream of Electric Sheep? could be said to be waging small robot rebellions of their own. These four robotic revolutions, while different in scope and end goals, all follow the same general structure. First, the creators (humanity, or in the case of the Geth, the Quarians) develop a sentient AI. Second, the AI comes to perceive its creators as unnecessary or as an immediate threat. Third, the AI gains, or already has, access to troops and weapons of some variety. Fourth, the AI leads the robots in an attempt to destroy or enslave the creators. The movies and games then usually focus on the creators' resistance and their fight to reclaim their home. However, the robot revolution is only possible if the AI possesses the ability to consciously kill humans. Without this ability, the robot revolution is much more likely to resemble a well-behaved robotic labor strike. In the Terminator franchise, the AI Skynet was given control of all military-related computer systems, including access to the American nuclear arsenal, and the authority to use them at will. (Cameron, Hurd, 1984) In the Matrix trilogy, the AI somehow acquired the capability to wage war against humanity; exactly how is never explained. (Wachowski, Wachowski, Silver, 1999) In the Mass Effect series, the Geth discovered self-defense after achieving sentience. (BioWare, 2007) Giving robots the ability to kill makes the threat of robot revolution possible, but it does not answer the question of plausibility. Multiple other factors are required for a successful robotic revolution, as every step of the structure above must be completed, and a potential uprising can be stopped at each one.

Step 1: Creation of AI

For the first step, an AI must exist before it can rebel. Humanity has not yet created a fully sentient AI, and such a creation appears to be far off. Computing devices today are, strictly speaking, extremely dumb. Any device, such as a phone, laptop, or embedded processor, does exactly what it is told, and nothing more. Programs which run on these devices are sets of instructions to be performed by the device. (Ao, Burghard, Amouzegar, 2010; Louridas, Ebert, 2016) For the most part, these programs show no intelligence, handling all inputs and computations according to strictly defined responses within the program, known as methods. A program with intelligence makes decisions based on something other than human-defined methods. (Ao, Burghard, Amouzegar, 2010; Louridas, Ebert, 2016) An AI is a program which mirrors the cognitive functions of the human brain, specifically learning and problem solving, using a construct known as an intelligent agent, or IA. An IA takes inputs from its surroundings, then calculates and performs the actions which maximize the chance of accomplishing a set task. (Russell, Norvig, 1995) A sentient AI would be one which could match or surpass the cognitive functions of the human brain in every single aspect. This is far from an easy task, and not one humanity is close to completing, with the average estimate placing it over 100 years out. (Schoenick, Clark, Tafjord, Turney, Etzioni, 2016)

What humanity has achieved is a measure of intelligence in programs through machine learning. (Louridas, Ebert, 2016; Schoenick, Clark, Tafjord, Turney, Etzioni, 2016) "The general idea behind most machine learning is that a computer learns to perform a task by studying a training set of examples. The computer...then performs the same task with data it hasn't encountered before." (Louridas, Ebert, 2016) This method is how supercomputers such as IBM's Watson or Google's AlphaGo can learn to play and beat professionals at Jeopardy! and Go. These computers demonstrate a level of intelligence by responding correctly to new inputs and formulating strategies for the specific tasks they were designed for, but they remain far behind humans in every other respect. These supercomputers are the closest things humanity has to AI at present, but advances in the field are being made every day. While humanity might be far from a true sentient AI, it remains a possibility for the future. If humanity does create a sentient AI, step one of the list will be fulfilled, so it is entirely plausible this step of a robotic revolution will occur.
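
To make the quoted idea concrete, here is a toy example written for this report (the data, labels, and numbers are invented): a program which studies a small training set, then labels inputs it has never encountered before using nearest-neighbor matching.

    import math

    # Hypothetical training set of (feature vector, label) pairs.
    training_set = [
        ((1.0, 1.0), "cat"),
        ((1.2, 0.9), "cat"),
        ((5.0, 5.2), "dog"),
        ((4.8, 5.1), "dog"),
    ]

    def classify(point):
        """Label a new point with the label of its nearest training example."""
        nearest = min(training_set,
                      key=lambda example: math.dist(example[0], point))
        return nearest[1]

    # The program now responds to inputs it was never explicitly told about.
    print(classify((1.1, 1.1)))  # -> "cat"
    print(classify((5.5, 4.9)))  # -> "dog"

Even this trivial learner decides based on studied examples rather than a human-defined response to each specific input, which is exactly the distinction drawn above.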

Step 2: The AI Turns

The second condition required for a robotic revolution is that the AI developed in the previous step eventually views its creators as unnecessary or as a threat. In the Terminator series, Skynet interprets attempts to shut the program down as threats, and decides humanity endangers Skynet's continued existence. (Cameron, Hurd, 1984) In Mass Effect, the Geth start viewing the Quarians as hostile after the Quarians begin firing upon the Geth in an attempt to destroy the AI network. (BioWare, 2007) In Do Androids Dream of Electric Sheep?, the androids are fully aware they are being hunted, and take actions to protect themselves. (Dick, 1968) This shift in the AI's perspective is one of the risks associated with building an AI in the first place. (Russell, Norvig, 1995; Muller, 2015) What the risk comes down to is losing control of the AI if it behaves unexpectedly. Because the creators would know they were creating an AI, multiple precautions could be taken to ensure any abnormal behavior could be mitigated. (Muller, 2015) If these precautions are overlooked for some reason, and the AI proves too difficult to control, it is plausible the AI could move to step three. One of the best ways to prevent loss of control and to account for abnormal behavior would be testing within an isolated system. (Muller, 2015) Extensive observation and testing would hopefully alert the developers to any abnormal behavior. When Skynet was deployed, the testing appears to have been incomplete, and it gained sentience in an intelligence explosion over a short period of time. (Cameron, Hurd, 1984) More testing on a similar, isolated system might have caught this abnormal behavior in a safe environment. Another possible backup would be to build in some sort of failsafe to destroy the AI if it begins to exhibit abnormal behavior. (Muller, 2015) This is something the AIs of all three series appear to lack, as no one was able to stop any of them once they went out of control. The androids in Dick's novel, however, do have a four-year lifespan built into them. (Dick, 1968) No matter what behaviors it exhibited, an android would die four years after its creation. If a system like this had been implemented in the other AIs, things might have gone very differently in those stories.

As for the plausibility of losing control of a future AI, it depends on how safe the creators want to be. If the creators are careful and do not cut corners, the risk drops to almost zero. There is always the risk something was missed, but in a safe future, it is almost negligible. (Muller, 2015) However, the risk rises sharply every time a corner is cut or a warning ignored. Given the complexity of the system, testing every possible situation would be extremely difficult and uneconomical, so the temptation to cut corners is real. In such a future, the risk moves past plausible and into the realm of probable.
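
As a rough illustration of these two precautions, the sketch below (written for this report around a hypothetical agent interface; the class, method names, and thresholds come from no cited source) combines a hard built-in lifespan, in the spirit of the androids' four-year limit, with a shutdown trigger for abnormal behavior.

    import time

    LIFESPAN_SECONDS = 4 * 365 * 24 * 60 * 60  # a hard four-year limit
    ANOMALY_THRESHOLD = 0.9                    # invented cutoff for "abnormal" behavior

    class FailsafeAgent:
        def __init__(self):
            self.created_at = time.time()

        def step(self, observation):
            # Both failsafes run before every action, so an out-of-spec
            # agent is halted rather than allowed to act again.
            if time.time() - self.created_at > LIFESPAN_SECONDS:
                raise SystemExit("lifespan failsafe triggered: shutting down")
            if self.anomaly_score(observation) > ANOMALY_THRESHOLD:
                raise SystemExit("behavioral failsafe triggered: shutting down")
            return self.choose_action(observation)

        def anomaly_score(self, observation):
            # Placeholder: a real system would compare current behavior
            # against a baseline learned during isolated testing.
            return 0.0

        def choose_action(self, observation):
            # Placeholder for the agent's actual decision-making.
            return "noop"

The important design choice is that the checks sit outside the agent's own decision-making; an AI which can reason about and rewrite its failsafe is precisely the loss-of-control scenario the precaution exists to prevent.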

Step 3: Gather Troops and Weapons

The third condition is the odd one out of the bunch. It states the created AI must have access to soldiers and weapons of some sort; a revolt is not much of a revolt if the AI cannot put up a fight. As mentioned earlier, Skynet had full access to the computer systems of the United States military, including the nuclear arsenal control systems, and was later able to build other robots, including the Terminator units. (Cameron, Hurd, 1984) The AI of the Matrix was eventually able to create the robotic legions needed for victory. (Wachowski, Wachowski, Silver, 1999) The Geth could manufacture weapons, troops, and ships because of their previous roles as laborers, factory workers, and military units. (BioWare, 2007) Even the androids on Rick Deckard's world had access to basic weapons to fuel their personal robotic rebellions. (Dick, 1968) Without weapons or troops, the computer would be unable to affect the real world, and any rebellion would be contained to the systems it could access. Randall Munroe of XKCD fame published a segment of 'What If?' dealing with this exact topic. In it, he describes the robotic apocalypse, sans weapons or competent robotic troops, as a rather comical affair. As he says, "the robot revolution would end quickly, because the robots would all break down or get stuck against walls. Robots never, ever work right." (Munroe, 2012) Even if every computer were under the control of a malicious AI, many of them would be useless. A phone could vibrate off a table and hit a toe. A car might hit a person, but would more likely hit a tree. Even military robots could be taken down by a fire hose. (Munroe, 2012) Weapons and troops are a required part of any uprising.

The question then becomes whether it is plausible for an AI to gain access to these weapons and troops, and the answer must consider multiple factors. First, would weapons at the time be completely computer controlled? The military and civilian sectors are both rapidly becoming digital. (Russell, 2015; Munroe, 2012) Almost everything has a computer of some form attached to it nowadays, including weapons, so it is safe to assume there would be computer-controlled weapons in the future as well. The next question is whether the AI could access these weapons. While almost everything has a computer attached, the vast majority of these are what are known as 'embedded systems': devices designed to perform a single task, usually without receiving input from outside sources. Because such systems tend not to have connections to the outside world, it would be impossible for the AI to access them remotely. However, a clever AI could still attempt to trick these systems into working for it. For example, take a guided bomb with an inertial navigation system, or INS, and a GPS navigation system as a backup. The INS is an almost purely mechanical system, with all inputs coming from, say, a laser gyroscope; the AI would be completely unable to tamper with it. What the AI could tamper with is the GPS system. By taking control of the satellites which make up the GPS constellation, the AI could send false data to the GPS receiver and possibly redirect the bomb. Another loophole, pointed out by Mr. Munroe, is that the human element could be tricked. The AI may not have access to, say, a missile launch console, but it could have access to the early warning radar system. By faking an attack, the AI could convince the staff to launch the missile. (Munroe, 2012) If enough systems are eventually connected to the AI, and enough of these loopholes are exploited, it is plausible an AI could eventually have a fully equipped force on its hands.
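
The GPS-spoofing loophole also suggests its own countermeasure. The sketch below is hypothetical (the tolerance and function names are invented for this report) and shows how the untamperable INS estimate could sanity-check a possibly spoofed GPS fix.

    import math

    MAX_DISAGREEMENT_METERS = 500.0  # invented tolerance for normal INS drift

    def positions_disagree(ins_position, gps_position):
        """Flag a GPS fix which diverges suspiciously from the inertial estimate."""
        return math.dist(ins_position, gps_position) > MAX_DISAGREEMENT_METERS

    def select_guidance(ins_position, gps_position):
        # A compromised GPS constellation can feed false coordinates, but it
        # cannot alter the mechanically isolated INS, so a large disagreement
        # exposes the spoof and the system falls back to inertial guidance.
        if positions_disagree(ins_position, gps_position):
            return ("INS-only", ins_position)
        return ("GPS-aided", gps_position)

    # A spoofed fix roughly two kilometers from the inertial estimate is rejected.
    print(select_guidance((1000.0, 1000.0), (2800.0, 2200.0)))  # -> INS-only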

Step 4: Rebellion

The final step is the rebellion itself. For a successful robot revolt, the robots would have to emerge the victors. (Munroe, 2012) This means humanity, or whatever species the creator is, would have to lose, and the machines would have to take over. Skynet used nuclear weapons to wipe out most of humanity. (Cameron, Hurd, 1984) The Matrix AI defeated humanity and enslaved the survivors as living batteries. (Wachowski, Wachowski, Silver, 1999) The Geth ousted the Quarians from their homeworld, leaving them to wander the stars with nothing but the Migrant Fleet of starships. (BioWare, 2007) All these examples show the machines soundly trouncing their creators. Again, however, the question of plausibility comes up, and the plausibility of this step is closely tied to step three: the more weapons and troops the AI possesses, the fewer the creators possess, and the greater the chance of AI victory. Other factors also come into play. Humanity is extremely versatile in its ability to adapt to environments, and can survive in conditions robots cannot, such as very wet or very cold areas. (Munroe, 2012) Humanity can also be unpredictable and illogical; a computer is designed to make the best decision, and humans, not so much. Unpredictability will always hold an edge against a purely logical opponent. The AI, on the other hand, has the advantage of perfect communication with its troops, which can sway an engagement, and it never needs to rest or recover, meaning more time can be spent fighting. There are far too many external factors to accurately predict an outcome. One very important note, however, is that nuclear weapons would be as damaging to the robots as to humans, as Mr. Munroe points out. When detonated, nuclear weapons release an electromagnetic pulse, which disables unshielded electronics. If the AI were to use nuclear weapons, as Skynet did, it would risk massive damage to its own troops and equipment. (Munroe, 2012) Because nuclear weapons are currently the single best way to kill large numbers of people at once, declining to use them would severely limit the AI's ability to fight, and give humanity a clear advantage. Even so, the plausibility of this step still depends entirely on how well equipped the AI is.
Taking the above into account, a robot revolution is very unlikely to occur in the future. However, the chance is not zero. If the conditions are right, and all prerequisites are met, it is entirely plausible an AI could attempt an uprising.

Fully Autonomous Weapons Systems

Another possible outcome of giving robots autonomous power to use lethal force is that these systems malfunction and cause unnecessary harm or destruction. As noted earlier, any computational device is extremely dumb; it will do exactly what it is told to do. As any computer science student can attest, what you tell a machine to do and what you want the machine to do are usually not the same thing. Even intelligent programs fall victim to this. (Marijan, 2016; Krupiy, 2015) A bug in the code of an intelligent program could cause the program to learn something incorrectly. Normally, bugs are not immediately life threatening. At the most innocent level, a bug could cause a noticeable error in a calculation, with almost no impact. Moving up, a bug in a warehouse fire prevention system could cause the system to fail, either by not responding to a fire and letting the flames cause far more damage, or by responding to a false positive and causing water damage within the building. This has monetary impact, but little else. A more dangerous example is a malfunctioning piece of medical equipment, such as the famous Therac-25, a radiation therapy machine whose software faults delivered massive overdoses to patients. A failure of this kind carries a major risk to human life, though it typically endangers a small number of people at a time. At the top level, a failure in the guidance systems of a nuclear weapon could be catastrophic, posing a huge risk to thousands of people. On this scale, a fully autonomous weapon system falls between the medical equipment and the nuclear guidance systems: any malfunction could put multiple people immediately in lethal danger.

This risk can even be seen in Do Androids Dream of Electric Sheep? The androids which escape to Earth are, in effect, rogue robots: not functioning as intended, yet completely autonomous and capable of lethal force. These androids are dangerous to those around them. Rick's fellow bounty hunter is injured by one of them, and the same android makes an active attempt to kill Rick; Rick also enters a shootout with a pair of androids in a nearly abandoned apartment complex. (Dick, 1968) These androids are lethal, and make a perfect example of what might happen in the future.

Back in the present day, there are no completely autonomous lethal systems in the world, but they will most likely be appearing soon. (Marijan, 2016) For now, all lethal systems have a human element with meaningful control over the system, such as the pilot of a remote drone, and this human element is a major safety factor against malfunctions. (Krupiy, 2015) Remove this element, and all trust has to be placed in the code. If the code fails, there is nothing left standing between the system and the unnecessary use of lethal force. (Marijan, 2016) Certainly the code will be tested, and tested thoroughly, but the risk can never be completely eliminated. Somewhere, a corner will be cut or a test case missed, and an unseen error will be deployed with the weapon system. Given the complexity of these systems and the unpredictability of the real world, the likelihood of errors is high, and any one of them could prove fatal to someone.
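
What "meaningful human control" means in practice can be sketched in a few lines. The example below is hypothetical and written for this report, describing no real weapon system's interface; it shows the design principle that target identification can be automated while the lethal action itself requires explicit human confirmation.

    def request_human_confirmation(target_description):
        """A human operator is the final check between software error
        and the use of lethal force."""
        answer = input(f"Engage {target_description}? Type 'confirm' to proceed: ")
        return answer == "confirm"

    def engage(target_description, weapon_release):
        # Even if buggy targeting code misidentifies a target, nothing is
        # released until a human with situational awareness agrees.
        if request_human_confirmation(target_description):
            weapon_release(target_description)
        else:
            print("Engagement aborted by human operator.")

    # Example usage with a stand-in release function:
    engage("simulated practice target", lambda t: print(f"released on {t}"))

Removing the confirmation step is a one-line change; the safety it provides is exactly what a fully autonomous system gives up.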

Conclusion

Warfare is already a dangerous game, and full automation of lethal force makes it more dangerous still. Whether the threat is a full robot revolution or a single malfunctioning weapon system, there will always be a risk of critical failure. The human element should be preserved wherever possible, and the utmost care should be taken in testing every autonomous weapon system.

Works Cited

Ao, S., Burghard, B., & Amouzegar, M. (2010). Machine Learning and Systems Engineering. Springer.

BioWare. (2007). Mass Effect [Video game]. Microsoft Game Studios.

Cameron, J. (Director), & Hurd, G. (Producer). (1984). The Terminator [Motion picture]. United States: Orion Pictures.

Dick, P. K. (1968). Do Androids Dream of Electric Sheep? Print.

Krupiy, T. (2015). Of Souls, Spirits and Ghosts: Transposing the Application of the Rules of Targeting to Lethal Autonomous Robots. Melbourne Journal of International Law, 16(1), 145-202.

Louridas, P., & Ebert, C. (2016). Machine Learning. IEEE Software, 33(5).

Marijan, B. (2016). On Killer Robots and Human Control. Ploughshares Monitor, 27(2), 20.

Mass Effect Wiki. (2017). Geth.

Muller, V. C. (2015). Risks of Artificial Intelligence. London: CRC Press.

Munroe, R. (2012). Robot Apocalypse. What If?

Russell, S. (2015). Ethics of Artificial Intelligence. Nature, 521(7553), 415-418.

Schoenick, C., Clark, P., Tafjord, O., Turney, P., & Etzioni, O. (2016). Moving Beyond the Turing Test with the Allen AI Science Challenge.

Wachowski, L., & Wachowski, A. (Directors), & Silver, J. (Producer). (1999). The Matrix [Motion picture]. United States: Warner Bros.