A Feasibility Study of Artificial Intelligence in:

Do Androids Dream of Electric Sheep?

Roy Batty - Android

Social Interactions and A.I.

In Do Androids Dream of Electric Sheep?, the androids' interaction with the world around them is seamless, making it difficult to discern their biological authenticity. Not only can they operate in a physical manner that hides their identity, but they also have smooth social interactions. For this study, we will focus on the most technologically advanced android in the book, the Nexus-6 brain type. As mentioned in the book, older models were easier to discern and less complex.

To assess the feasibility of these androids' artificial intelligence, a thorough examination of what they are capable of is needed, starting with the specifications of the brain type. Rick states, from his reading about the brain type, that "the Nexus-6 did have two trillion constituents plus a choice within a range of ten million possible combinations of cerebral activity. In .45 of a second an android equipped with such a brain structure could assume any one of fourteen basic reaction-postures" (Dick, 12). Rick goes on to explain that no intelligence test can trap a robot of this complexity. Given the processing power just described, any question of raw intelligence will not be an issue. The most plausible explanation for technology of this type is a highly evolved, dynamic machine learning system. Systems like Watson can already interact at an astounding level, but would need far more improvement to become what is needed. These androids, as Deckard experiences firsthand, can interact at an amazing capacity. By far the most difficult and important factor in making these androids indistinguishable from humans is their ability to interact seamlessly. These androids can speak, respond, prompt, intonate, and inflect. Evidence of complex social interaction is apparent in an exchange between Rick and Rachael. The following excerpt is small and seemingly unimportant at first.

"What have you got against me?" he asked her as together they descended. She reflected, as if up to now she hadn't known. "Well," she said, "you, a little police department employee, are in a unique position. Know what I mean?" She gave him a malice-filled sidelong glance.

While this passage seems at first simple and bland, a random interaction between human and android, there is more to it. Seen through the eyes of someone designing a dynamic artificial intelligence system, nearly everything about that interaction would be a nightmare. The response is casual, full of specific adjectives that require explicit timing and intonation. The prompting statement would require extensive programming in tone and pitch balance. The adaptive facial control needed to display an emotion as specific as a "malice-filled sidelong glance" would require a complex, dynamic A.I. system. Systems such as TARS in Interstellar can accomplish this as well. In the film, TARS converses with the crew in a natural and smooth manner, responding easily to jokes and quips as the conversation shifts. Interestingly enough, TARS has a humor setting that can be adjusted by members of the crew.

To program a system to dissect natural language based on intonation, pitch, volume, and input parsing is extraordinarily complex. For instance, think of all the ways you, a human, could say the sentence "She said she did not take his money." While this seems like a strange sentence, its meaning completely changes if you put emphasis on a different word. Try saying the sentence out loud, emphasizing a different word each time. Now imagine programming a robot not only to parse the input of that sentence, but to handle a multitude of meanings based on the inflection of different words in that one sentence. Hard-coding what is essentially an infinite combination of language inputs is not feasible, meaning these natural-language, socially interactive machines must be using a data-driven processing method to understand sentences. The most logical answer for processing a variety of inputs with endless combinations is a machine learning algorithm.
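As a thought experiment, here is a minimal Python sketch of what hard-coding even one sentence's readings would look like. The emphasis-to-meaning table below is entirely invented for illustration; a real system would have to infer emphasis from pitch, timing, and volume, and would need a table like this for every sentence it might ever hear, which is exactly why the approach does not scale.

```python
# Toy sketch: hard-coded readings of one sentence, keyed by the stressed word.
# The meanings below are illustrative interpretations, not from any real system.
SENTENCE = "She said she did not take his money"

IMPLIED_MEANING = {
    "She":   "someone else is the one who said it",
    "said":  "she merely claimed it; it may not be true",
    "she":   "she was talking about another person taking it",
    "did":   "an emphatic denial that the taking ever happened",
    "not":   "a flat denial of the accusation",
    "take":  "she may have borrowed it rather than taken it",
    "his":   "it was somebody else's money she took",
    "money": "she took something of his, just not money",
}

def interpret(stressed_word: str) -> str:
    """Return the hard-coded reading for a stressed word, if one exists."""
    return IMPLIED_MEANING.get(stressed_word, "no reading hard-coded")

if __name__ == "__main__":
    for word in SENTENCE.split():
        print(f"Emphasis on {word!r}: {interpret(word)}")
```

One sentence already needs eight entries; scaling this table to every sentence, speaker, and tone a listener might produce is plainly impossible, which is the point of the paragraph above.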

Composition of Machine Learning A.I.

To explain the capabilities of these androids, they must be using some machine learning driven A.I. system. This is the most plausible explanation for how these robots operate, as they respond too seamlessly for pre-programmed actions. Designing a system with a set answer for every single input it will receive is inconceivable. In a perfect world, a conversation stays on a single topic and uses direct, accurate vocabulary. For those instances, there may be easy programmatic solutions. In the world we live in, though, it is easy to see how varied conversations can be. Even in a conversation between you and a friend, the subject may shift many times, and phrases may be used that do not match their literal definitions. Hard-coding a system that can keep track of these issues, while also operating seamlessly on a physical level, would not be feasible.

Machine learning is needed, as the android's Nexus-6 brain type can learn over time what the most accurate response is to different scenarios. The description of the brain type gives some idea of how the android may operate. "Two trillion constituents" can be taken as two trillion transistors in the brain type, meaning two trillion components for a decision tree to explore when forming a response to an input. Machine learning gives this brain type the ability to train itself to respond to stimuli in "correct" ways. The Tyrell Corporation works on these androids, training and adjusting them to get better and better. With machine learning and related neural network techniques, you can give an enormous amount of data to a system and allow it to "train."
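As a rough illustration of this training idea, here is a minimal scikit-learn sketch that learns to map prompts to response categories from labeled examples. The prompts, the labels, and the choice of a decision tree classifier are my own invented stand-ins for the vastly larger process the Tyrell Corporation would run, not anything specified in the novel.

```python
# Minimal sketch of "training on labeled examples": a decision tree learns
# which response category fits which prompt. The data here is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

prompts = [
    "What have you got against me?",
    "How are you today?",
    "Is that owl real?",
    "Tell me about the Nexus-6 brain type.",
]
labels = ["deflect", "small_talk", "deflect", "inform"]  # the "right" responses

model = make_pipeline(CountVectorizer(), DecisionTreeClassifier(random_state=0))
model.fit(prompts, labels)  # "training": the system learns from graded examples

# The trained model can now categorize an input it has never seen before.
print(model.predict(["What do you have against androids?"]))
```

The point of the sketch is the shape of the process, not the four toy examples: feed the system more and more graded scenarios, and its responses to unseen inputs improve.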

Machine Learning Diagram

Essentially, the data is a plethora of examples where you tell the system which are "right" and which are "wrong." You input data at layer 1 in the image above. Over time, the system continues learning and gathering more data. As it learns, the system develops links, like those between layers 1 and 2, or 2 and 3. When the system is presented with stimuli through layer 1, it can compare them to the data it has been trained on (layer 2), and slowly make connections as to what is a "right" and "wrong" way to handle the situation (layer 3). Of course, when I say slowly, I am speaking in terms of fractions of a second. As mentioned earlier, the Nexus-6 brain type can respond with a specific reaction within .45 of a second. The detail "ten million possible combinations of cerebral activity" may hint at the complexity of the layers in a deep learning machine. Deep learning is a subset of machine learning that ups the number of "layer 2's" dramatically, to hundreds if not thousands of layers. These added layers create more decision paths for the input to be filtered through.
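To make the layer picture concrete, here is a minimal NumPy sketch of a stimulus passing from an input layer, through one hidden layer, to an output layer of candidate reactions. The layer sizes and random weights are arbitrary placeholders; in a real system, training on the "right/wrong" examples would set the weights, and a deep network would stack many more hidden layers.

```python
# Minimal sketch of the layer-1 -> layer-2 -> layer-3 flow described above.
# Sizes and weights are placeholders; training would adjust W1 and W2.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)  # simple non-linearity applied between layers

# 8 input features -> 16 hidden units -> 3 candidate "reaction postures".
W1 = rng.normal(size=(8, 16))   # links between layer 1 and layer 2
W2 = rng.normal(size=(16, 3))   # links between layer 2 and layer 3

def forward(stimulus: np.ndarray) -> int:
    """Filter a stimulus through the layers and pick the highest-scoring reaction."""
    hidden = relu(stimulus @ W1)   # layer 2: compare input against learned features
    scores = hidden @ W2           # layer 3: score each candidate reaction
    return int(np.argmax(scores))

stimulus = rng.normal(size=8)      # stand-in for a parsed input
print("chosen reaction:", forward(stimulus))
```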

Real World Applications and Examples

There are many real-world applications for systems using machine learning algorithms. Not only are these applications plausible in the modern world, but they are often expanded upon in science fiction. Philip K. Dick took technical ideas from the world around him and broadened their horizons to include seemingly impossible, but hopeful, futures. In my opinion, machine learning and neural network applications will absolutely dominate the infrastructure of responsive systems. Being able to train a system to respond to stimuli in specific ways, and allowing a dynamically shifting system to decide for itself the best-fitting response to a specific issue, is priceless. There are many adaptations of machine learning systems that can provide new applications every day. Common applications exist around us that we may not even think about; Tesla's self-driving vehicles, for instance, use deep learning and other machine learning techniques to learn to drive better.

Another very important point to consider in designing a dynamic A.I. system with these methods is data. Data, for systems like these, is pure gold. Having enormous databases allows the androids to train on more and more scenarios and become "smarter." The more data you give a system to train on, the greater its success rate at interpreting inputs and resolving outputs. Databases are sometimes more valuable than the actual components used to create an android of this type. The Tyrell Corporation is large and well-funded, and is constantly collecting data. That is the whole reason Rachael offers her assistance to Deckard: she would see where the Nexus-6 brain type was failing and bring back data on what happened. The next variant of android would only be better, as it would have more experiences to learn from and become more "successful."

IBM's supercomputer Watson uses machine learning to pick up new talents, such as classifying responses to voice commands and answering natural-language questions. Watson was designed as a question-answering supercomputer; it uses machine learning algorithms to understand how humans ask questions and how it can answer them, and it has famously beaten human contestants on the game show Jeopardy!. Similarly, TARS in Interstellar can keep up with the crew's humor, putting in quips of his own.

Machines that can respond in kind to our conversation have been around for a while. The older the machine, the easier it is to push its output into something standardized. One example is the quirky artificial intelligence bot that goes by the name Cleverbot. Cleverbot can have casual conversations with you and, most of the time, will respond in kind. But give it a confusing input and it will revert to a generic tagline. The difference between this bot and the andys in DADOES is complexity and data. The andys have millions of layers of data analysis and comparative possibilities, as well as experience with a multitude of conversations. The successive revisions by the Tyrell Corporation have provided a much larger database of "pass/fail" scenarios. Essentially, this has increased the amount of data and the number of layers they are using, making their androids vastly more complex.
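That "generic tagline" behaviour is easy to picture with a minimal sketch of a lookup-based bot. The canned responses and fallback line below are invented for illustration and are not Cleverbot's actual data; the point is only that anything outside the lookup table collapses to the same reply, whereas a learned model can generalize.

```python
# Toy sketch of a lookup-based chatbot with a generic fallback tagline.
# Responses are invented for illustration, not taken from Cleverbot.
KNOWN_RESPONSES = {
    "hello": "Hi there! How are you?",
    "how are you": "I'm doing well, thanks for asking.",
    "do you like sheep": "Only the electric kind.",
}

FALLBACK = "That's interesting. Tell me more."  # the generic tagline

def reply(user_input: str) -> str:
    """Answer recognized inputs; fall back to the tagline for anything else."""
    key = user_input.lower().strip(" ?!.")
    return KNOWN_RESPONSES.get(key, FALLBACK)

print(reply("Hello"))                          # recognized -> specific answer
print(reply("Explain the Voigt-Kampff test"))  # confusing -> generic tagline
```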

Machine Learning and Science Fiction

Artificial intelligence has been a prominent part of science fiction for the past few decades. Very popular examples are characters like Data in Star Trek or (my personal fav) TARS from Interstellar. Taking a deeper look at TARS: he is an assistant to the team members in the film, and offers not only technical help but social interaction as well. The ease with which TARS reacts to different inputs from the team shows how well built he is. Machine learning could very well be behind his excellent performance. Generally, a system that will be dealing with a variety of issues, each needing a curated, explicit solution, does not operate well on rigid rules. Machine learning would allow TARS to observe and analyze data related to the issue being faced and come to a decision that is "correct."

Another example of machine learning whose process the viewer can actually watch is Chappie, in the movie Chappie (2015). In this film, a robot is modified to be a sentient, machine-learning being. Starting off as a "baby," the robot is "trained" by its owners to learn new words, actions, and responses. Over time, the bot develops its own personality, an amalgamation of the different inputs it has received, eventually going on to make its own decisions and take its own actions without the input of a "creator." There are examples of this kind of learning in the real world as well; one applicable example is bots trained to play video games like Super Mario. Such a bot will try over and over again, sometimes thousands of times, to learn how to beat a level. These bots actively look at which inputs cause which outputs, and they store data on the outcomes. Slowly but surely, they learn how to beat these games.
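The trial-and-error loop those game-playing bots use can be sketched with a tiny Q-learning example. The six-state "level" below is an invented stand-in for an actual game; real Mario-playing agents pair this same idea with deep networks and raw pixel input.

```python
# Minimal Q-learning sketch of the "try over and over, remember what worked" loop.
# The tiny 6-state level is invented for illustration only.
import random

N_STATES, GOAL = 6, 5            # states 0..5; reaching state 5 "beats the level"
ACTIONS = (-1, +1)               # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(2000):      # thousands of attempts, as described above
    state = 0
    while state != GOAL:
        # Sometimes explore a random move, otherwise use what has been learned.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Store what this input (state, action) led to.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy marches straight toward the goal.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})
```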

Modern applications related to these references are nearly endless. Machine learning, specifically its deep learning variant, offers a dynamic system with curated outputs for unresolved inputs. The difficulty of integrating these systems is apparent, the largest hurdle being the data needed to make them accurate. A corporation as vast as the Tyrell Corp. could easily produce an accurate deep learning model. The combination of technological complexity and abundant data provides a platform for successful input parsing, which would explain the andys' fluidity in a stimuli-rich environment. On top of this, real-world systems such as IBM's Watson, Tesla's self-driving cars, and online chatbots prove that early versions of these techniques already work. There is a significant gap between the androids in DADOES and "androids" now, but other science fiction can help bridge it. The technical abilities of TARS in Interstellar and the learning process of Chappie in Chappie are further depictions of advanced machine learning androids, though even they have yet to reach the abilities of the andys in DADOES.


Works Cited

Dick, Philip K. Do Androids Dream of Electric Sheep? New York: Ballantine, 1996. Print.

Blade Runner. Dir. Ridley Scott. Perf. Harrison Ford and Rutger Hauer. Warner Bros. Entertainment, 1982. Film.

Interstellar. Dir. Christopher Nolan. Perf. Matthew McConaughey, Anne Hathaway, and Jessica Chastain. Paramount Pictures, 2014. Film.

Chappie. Dir. Neill Blomkamp. Perf. Sharlto Copley, Dev Patel, and Hugh Jackman. Columbia Pictures, 2015. Film.

Loder, Kurt. “Chappie.” Reason.com, 6 Mar. 2015, reason.com/archives/2015/03/06/chappie.

“Emerging Artificial Intelligence Applications in Computer Engineering.” Google Books, books.google.com/books?hl=en&lr=&id=vLiTXDHr_sYC&oi=fnd&pg=PA3&dq=subsets%2Bof%2Bmachine%2Blearning&ots=CYpsBw1Bip&sig=wIcXO3Qpw2-1G57w7GiB_OAztQM#v=onepage&q=subsets%20of%20machine%20learning&f=false.

LeCun, Yann, et al. “Deep Learning.” Nature News, Nature Publishing Group, 27 May 2015, www.nature.com/articles/nature14539.

Huang, Guang-Bin, et al. “Extreme Learning Machine: Theory and Applications.” Neurocomputing, Elsevier, 16 May 2006, www.sciencedirect.com/science/article/pii/S0925231206000385.

Marr, Bernard. “The Amazing Ways Tesla Is Using Artificial Intelligence And Big Data.” Forbes, Forbes Magazine, 17 Jan. 2018, www.forbes.com/sites/bernardmarr/2018/01/08/the-amazing-ways-tesla-is-using-artificial-intelligence-and-big-data/#f02588342704.

ARK Investment. “How Much Artificial Intelligence Does IBM Watson Have?” Seeking Alpha, 13 July 2017, seekingalpha.com/article/4087604-much-artificial-intelligence-ibm-watson.

Brundage, Miles. “The Anti-HAL: The Interstellar Robot Should Be the Future of Artificial Intelligence.” Slate Magazine, Slate, 14 Nov. 2014, slate.com/technology/2014/11/tars-the-interstellar-robot-should-be-the-future-of-artificial-intelligence.html.

Thompson, Clive. “What Is IBM's Watson?” WHS Film Festival, 16 June 2010, www.whsfilmfestival.com/Walpole_High_School_Film_Festival/Grammar_Articles_files/Smarter%20Than%20You%20Think%20-%20I.B.M.'s%20Supercomputer%20to%20Challenge%20'Jeopardy!'%20Champions%20-%20NYTimes.com.pdf.

Tange, Ole. “8-In-1 Sentence - Depending on Emphasis.” English Language & Usage Stack Exchange, english.stackexchange.com/questions/258653/8-in-1-sentence-depending-on-emphasis.

“Cleverbot.” Cleverbot, www.cleverbot.com/.

Ryan, Kevin J. “Tesla Explains How A.I. Is Making Its Self-Driving Cars Smarter.” Inc.com, Inc., 13 Sept. 2016, www.inc.com/kevin-j-ryan/how-tesla-is-using-ai-to-make-self-driving-cars-smarter.html.

Ponce, Hiram, and Ricardo Padilla. “A Hierarchical Reinforcement Learning Based Artificial Intelligence for Non-Player Characters in Video Games.” SpringerLink, Springer, Cham, 16 Nov. 2014, link.springer.com/chapter/10.1007/978-3-319-13650-9_16.

Jung, Alexander. “AI Playing Super Mario World with Deep Reinforcement Learning.” YouTube, YouTube, 26 May 2016, www.youtube.com/watch?v=L4KBBAwF_bE.

“Interstellar - TARS Humor Setting.” YouTube, 9 Nov. 2015, youtu.be/p3PfKf0ndik.

Boots, Byron. “Machine Learning for Modeling Real World Dynamic Systems.” Georgia Tech, www.cc.gatech.edu/~lsong/teaching/CSE6740fall14/BBoots.pdf.
