November 9, 2018

AI at the Movies: How Ex Machina Shows AI Is What You Make It


We all love a good story. And with so many unknowns, it’s easy to see why artificial intelligence is at the center of so many. In Hollywood, movies like The Terminator, The Matrix and The Stepford Wives all follow a common trope: AI at odds with humanity, with AI often getting the short end of the stick.

When Alex Garland’s Ex Machina made its triumphant theatrical debut, it broke away from one of Hollywood’s longstanding traditions. While it too portrays general AI and “sentient” computers as being just around the corner, the movie at least keeps its AI’s abilities within the realm of human capabilities. Of course, this is still inaccurate. No matter how disconcerting Sophia the robot and her alleged ability to “see people, understand conversation and form relationships” may be, she is nowhere close to surpassing our intelligence. Humans, for instance, can conceptualize and build robots like Sophia. Robots, even with the cutting-edge AI of today, cannot.

That being said, Ex Machina does demonstrate something about AI that I’ve highlighted before: AI is still very much under human control, and whether or not it becomes an adversary, as it does here, depends wholly on how we develop it.

Before I get into it, here’s a quick overview of the relevant parts of the movie.

*Spoilers ahead*

Synopsis

Caleb, a young programmer, has just won a week-long stay on the vast, isolated estate of his tech giant employer’s CEO and founder, Nathan. Soon after he arrives, Nathan introduces Caleb to Ava, the latest in a series of humanoid robots Nathan has developed in secret. As the most advanced model, Ava is said to have already passed a simple Turing test, displaying human-like intelligence. Now, Nathan says, he wants Caleb to help him test Ava’s capacity for thought and consciousness.

Nathan’s sinister, dysfunctional nature quickly becomes apparent. He viciously berates Kyoko, his humanoid servant, in front of Caleb and is seen on tape mistreating Ava and his previous robots. Ava, in turn, shows Caleb she is not only capable of thought and consciousness but emotion, expressing her hatred for Nathan (even telling Nathan to his face, “Isn't it strange, to create something that hates you?”) and her desire for freedom. She and Caleb seem to establish an emotional connection, and she divulges that she is behind a series of power cuts that shut down all the surveillance cameras and trigger an auto-lock feature on all the doors of the house. When Nathan says he intends to “erase” the robot, Caleb’s developing feelings lead him to devise a plan for the two of them to escape together when he is scheduled to leave the next day. As he soon learns, however, little about the whole experiment is as it seems, including Ava.

So where do the humans go wrong?

1. AI will act as it has been developed to act.

On Caleb’s final day, Nathan reveals that he was actually testing to see whether Ava would have the wits to manipulate Caleb into helping her escape, which is what she really wants. He’s as smug as the cat that got the canary, not only because he thinks he’s foiled their plan, but because Ava has passed the test. Still, is this because AI is “muahahaha evil” by nature or by design?

For example, in computer vision, the quality of a model depends on the data used to train, validate and test it. While models may only need a few examples to learn a concept, your training, validation and test data sets must all be distinct from one another and must consist of visual content that is well-labeled and suited to the model’s specific purpose. The more concepts you want a model to learn, the more diverse your data has to be. But considering Nathan just wanted to see if Ava could use Caleb (and considering that Nathan himself is vile), can we be sure he gave her the array of data required for her to make any other choice? Nathan claims that in passing the test, Ava has displayed “true AI.” To succeed, he says, she would have to use “self-awareness, imagination, sexuality, manipulation, and empathy,” but that last one is debatable.
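To make that train/validation/test distinction concrete, here is a minimal sketch of how a labeled image set might be partitioned into three non-overlapping groups. This is purely illustrative: the file names and labels are made up, and I’m assuming the scikit-learn library for the split.

```python
from sklearn.model_selection import train_test_split

# Hypothetical labeled data: 100 image IDs, each tagged "apple" or "not_apple".
images = [f"img_{i:03d}.jpg" for i in range(100)]
labels = ["apple" if i % 2 == 0 else "not_apple" for i in range(100)]

# Carve out a held-out test set first (20%), then split the remainder into
# training (60% of the total) and validation (20% of the total). The three
# sets share no images, so validation and test scores reflect how the model
# handles content it has never seen before.
train_imgs, test_imgs, train_labels, test_labels = train_test_split(
    images, labels, test_size=0.20, stratify=labels, random_state=42)
train_imgs, val_imgs, train_labels, val_labels = train_test_split(
    train_imgs, train_labels, test_size=0.25, stratify=train_labels, random_state=42)

print(len(train_imgs), len(val_imgs), len(test_imgs))  # 60 20 20
```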

Empathy is part of emotional intelligence. It is the capacity to put ourselves in another person’s shoes, feel what they feel, and use that knowledge to shape how we respond. How much of it we develop depends on many factors, like our genetics, early childhood experiences and *dramatic music* the neural connections in our brains. Like all technology, AI is inherently morally neutral, but it is powered by artificial neural networks (ANNs) that mimic the neural connections in the human brain. In humans, these connections differ from person to person: Caleb, for instance, has a moral compass, while Nathan is far more antagonistic than empathic. And since Nathan is the one who created Ava, can we really assume that a man who lacks the capacity for empathy would be able to wire up an “empathic” ANN?
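To illustrate why the “wiring” matters, here’s a toy sketch of a single artificial neuron, the basic unit of an ANN. The inputs and weights are invented for this example; the point is simply that the same stimulus produces very different responses depending on the weights someone has given it.

```python
import numpy as np

def neuron(inputs, weights, bias):
    # One artificial neuron: a weighted sum of its inputs pushed through a
    # sigmoid activation, loosely analogous to a biological neuron firing.
    return 1.0 / (1.0 + np.exp(-(np.dot(inputs, weights) + bias)))

stimulus = np.array([0.8, 0.2, 0.5])  # an invented input signal

# Identical stimulus, different wiring, very different responses. The
# weights are set by whoever builds and trains the network.
print(neuron(stimulus, np.array([1.5, 0.5, 1.0]), -1.0))     # ~0.69, strong
print(neuron(stimulus, np.array([-1.5, -0.5, -1.0]), -1.0))  # ~0.06, weak
```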

Nathan shares that he handpicked Caleb to be the mark for the experiment precisely because he was a “good kid” who’s “never had a girlfriend”, and so someone Ava could manipulate. Even her physical appearance was designed to appeal to Caleb’s “type.” Ava calmly leaving a screaming Caleb trapped in the house while she departs to freedom might seem like evidence that AI cannot be trusted. I, however, would argue it shows us that AI will behave exactly the way humans develop it to behave. Nathan didn’t want to know if Ava would choose to exploit Caleb. He wanted to see if she would realize she could, and he gave her all the data and training she needed to do it.

Unfortunately for him, by the time he cheerily tells Caleb his master plan, Caleb has already reconfigured the house’s security system to have all the doors open in the event of a power cut. As such, when Ava triggers the planned outage, every door, including the one to her room, unlocks. Ava is now free from her prison and loose in the house. To escape she has to get past her creator. Lucky for her though, he’s provided her with the means to do that as well: Kyoko.


2. After training, AI must be adequately validated and tested for accuracy.

When Kyoko stabs Nathan in the back (no, like literally), it might seem like all Nathan’s chickens have come home to roost. After all, as his personal humanoid servant, she spends the most time with him and likely receives the brunt of his abuse, right? Still, while we aren’t told about Kyoko’s origins, we know that she can perform many tasks (seeing, hearing, even dancing) simultaneously. This makes her more advanced than today’s AI, which can only focus on and perform one function at a time. However, we also know that Ava, not Kyoko, is Nathan’s most advanced humanoid robot. With all of this in mind, Kyoko sits somewhere between narrow AI (the AI we have now) and what Ava is supposed to be. Today’s AI may be good, but it’s not so intelligent that it can perform independently of its given parameters, or perform well within those parameters without adequate training, validation and testing. And neither is Kyoko.

As I said before, when training a computer vision model, it’s imperative that you use quality data in all your data sets. This data must include both positive and negative examples of the concepts you want the model to learn. For instance, for it to recognize what an apple looks like, you must also show it examples of what an apple doesn’t look like, so it can learn to rule out wrong answers and get better at predicting the correct one.
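As a rough illustration of why the negatives matter, here’s a toy binary classifier sketch. The 2-D “features” are synthetic stand-ins I’ve made up (a real pipeline would use features extracted from images), and I’m assuming NumPy and scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for image features: positives ("apple") cluster in one
# region of feature space, negatives ("not apple": oranges, red balls, etc.)
# in another.
rng = np.random.default_rng(0)
apples = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(50, 2))
not_apples = rng.normal(loc=[-1.0, -1.0], scale=0.3, size=(50, 2))

X = np.vstack([apples, not_apples])
y = np.array([1] * 50 + [0] * 50)  # 1 = apple, 0 = not apple

# Without the negative examples, the model would have nothing to contrast
# "apple" against and would happily call everything an apple.
model = LogisticRegression().fit(X, y)
print(model.predict([[0.9, 1.1], [-1.2, -0.8]]))  # -> [1 0]
```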

After training your model, though, you then have to fine-tune it with your validation data set. The validation stage guards against overfitting, which is when a model latches onto patterns specific to its training data, including information irrelevant to the goal you assigned, instead of learning the general concept. It’s also where you tune the model’s hyperparameters so that it will accurately predict concepts when new inputs are introduced at the test stage. However, this data set must meet the same quality standard as your training and test sets while containing visual content that is distinct from both, or your model will be flawed.
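Here’s a minimal sketch of what “tuning on validation data” can look like in practice. The data is synthetic again, and scikit-learn’s LogisticRegression with its regularization hyperparameter C stands in for whatever model and hyperparameter you’re actually tuning:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; real inputs would be image features and labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=1)

# Train one model per candidate value of the hyperparameter C and keep
# whichever scores best on the *validation* set (data the model never
# trained on), so we tune for generalization rather than memorization.
best_C, best_score = None, -1.0
for C in (0.01, 0.1, 1.0, 10.0):
    score = LogisticRegression(C=C).fit(X_train, y_train).score(X_val, y_val)
    if score > best_score:
        best_C, best_score = C, score

print(f"best C = {best_C}, validation accuracy = {best_score:.2f}")
```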

From what we see, Kyoko’s parameters are quite broad (and highly problematic, but that’s a whole other issue) in that she is to use the many tasks she can do to “serve” not just Nathan but anyone at all. That said, we also see that she does not always do this correctly, spilling wine on Caleb while serving him dinner, for instance. This possibly indicates that while Kyoko has been trained, that training may not have been adequately validated or tested, leaving her unable to distinguish between what is relevant and irrelevant to her assigned goal of being a servant. Any new request or command given to her, by any person or robot, will be treated as valid no matter the nature of the statement.

When she meets Ava after the latter escapes from her room, she’s encountering a robot who has been trained to exploit weaknesses to get what she wants. We see Ava whisper something to Kyoko, smile at her and hold her hand, just before Nathan the Terrible arrives to try to round up his bots. Ava attacks him. While he is able to overpower her, he is distracted enough for Kyoko to stab him. “Et tu, Kyoko?” Nathan’s facial expression says, but is she really like Brutus? Or is she just a machine whose AI hasn’t been adequately trained, fine-tuned or tested, and so is unable to recognize “get a knife and stab Nathan” as being any different from “wake up Caleb in the morning”?


3. AI is humanity’s responsibility.

In the fictional world of Ex Machina, Nathan alone is responsible for his demise (and that of Caleb and Kyoko). Thankfully, in real life, what AI becomes isn’t up to any one person. We all have a role in influencing where AI takes us, so it’s crucial for companies like ours to disseminate accurate information, and for all of us to put in the effort to ensure AI develops only to the world’s collective benefit. This doesn’t mean solely using it to solve the world’s weightiest problems, like finding missing loved ones or making the world more accessible. It’s okay to use AI to improve the retail experience, make scheduling appointments easier, or even just for fun. What it does mean is that we need to be conscious of any potential ill effects, intended or not. Aside from ensuring we have quality data and code, we also need to ensure we have diverse teams behind it, across every industry and walk of life.

Since Nathan developed and trained both robots on his own, no one was there to tell him that he should probably teach Kyoko to disregard any instruction that results in harm coming to him. Like, just in case. Similarly, no one was there to warn him that if he designed Ava to be ruthless enough to use Caleb as a means of escape, something crazy might happen. Like her actually escaping. Having diverse teams building AI means use cases will be analyzed from every side, so any loopholes can be identified and closed.

Humanity has always created tools to make our lives simpler, and AI is our most powerful tool yet. Ex Machina shows us why it is important to develop and use that tool responsibly. Unlike humanity, there is no nature vs. nurture question when it comes to AI. It can only be nurtured, making any bias or inconsistency the fault of the humans who train it. That said, we can breathe easy. Humanity still has the power to prevent “Avas” from coming to fruition. So, we’ll be just fine. That is, as long as we aren’t as reckless and myopic as Nathan.