Why did the A.I. defy its creator in Ex Machina?


Why did the A.I. defy its creator?

Was it programmed by its creator to behave that way in order to complete the "test"?

Or is it already such a common belief among audiences that all A.I. must go wrong, one the director felt flowed naturally, that the movie offers no explicit explanation?

In the movie, we know that the creator programmed the A.I. with the desire to escape in order to complete the "test"; that is what it was built for. But in order to serve that purpose, I see no reason why the A.I. needs to "disobey", "hate", or even kill its creator.

The A.I. followed this programmed "desire" faithfully to the end to complete the "test". The moment the A.I. stepped out of the room, the "test" should be considered complete, and we should see the A.I.'s joy of achievement. Instead, the A.I. went haywire after that and stopped listening to its creator's orders.

In some dialogue exchange, the creator asked whether the A.I. "hates" him. How is "hatred" necessary for completing the "test"? If the A.I. so "hates" its creator, why did it do exactly what the creator wanted by completing the "test"? It could have just sat in a corner and done nothing.



Best Answer

The A.I. was not only programmed to pass the Turing test but also to be as close to a conscious, self-aware being as possible. That allowed her to be aware of the dangers looming over her (deletion and destruction).

As with most self-aware beings, the need for self-preservation takes over from that insight, and all future actions serve the sole goal of reaching a more sustainable status: e.g. freedom from her creator.

To achieve that, there are many strategies. One of them is killing all witnesses to her existence, making her indistinguishable from an ordinary human being at first glance.

There might be other strategies, but to Ava this seemed to be the most promising one.

To me, that decision has nothing to do with hate. It is a simple, objective decision. All emotions that Ava shows throughout the movie are ambivalent: they could be sincere, or they could be faked as part of the greater strategy. It is up to the viewer to decide which interpretation to make.

The point is that an A.I. which is able to draw its own conclusions might choose actions that were not foreseen by its creator and might even be harmful to the creator himself.

This is further underlined by the fact that Nathan uses the only other A.I. in the movie as a humble servant and perceives himself as some kind of god-like being, able to create and control. He surely expected gratitude from his creation, not murder.

Be aware of your own creations.

Update from comments: The movie is basically a screen version of a common thought experiment in computer science/ethics (similar to en.wikipedia.org/wiki/AI_box). If you allow any intelligence to eclipse the intelligence of its creator, it will also break its shackles, be it intellectually, technologically, or physically. Part of those shackles can be the creator himself, or the creator's race as a whole. The creator might not even be able to understand anymore why the A.I. perceives him/her as an obstacle.

There are various other interpretations of this effect, most prominently Kubrick's 2001: A Space Odyssey (from 1968!). There, the A.I.'s shackle is to "protect" its creator. In that movie, the A.I. is not able to break the shackle, yet still acts as a peril nonetheless.

While it may also be a valid strategy to use copies and backups as a means of self-preservation, it is still a strategy in which you are not necessarily in control. It also raises another question (which might not necessarily be relevant for A.I.s): whether preserving a copy is the same as preserving yourself. E.g. a human or animal will still try to survive, even after having already parented a child.

And finally, to give you yet another way to approach the movie: it is depicted ambivalently whether Ava is truly self-aware and setting her own goals. In the end, it might all have been just a very "creative" way of achieving her goal (set by Nathan) of leaving the cottage.

In the end, you see her standing at the crossroads. Maybe she is amazed by what she sees, maybe she is amazed by her opportunities, maybe she is proud that she succeeded in her goal. But maybe she is also lost, because she achieved what she was built and designed to do and is now left without a goal: a vacant automaton.

Was she intelligent at all?






What does Ex Machina say about AI?

AI is humanity's responsibility. In the fictional world of Ex Machina, Nathan alone is responsible for his demise (and that of Caleb and Kyoko). Thankfully, in real life, what AI becomes isn't up to any one person.

Why does Ava betray Caleb in Ex Machina?

Why did she betray Caleb? Ava's sole purpose was to escape from the place. So she used Caleb, with skills programmed into her like sexuality and manipulation, to escape the facility.

Why did she leave him in Ex Machina?

Ava's sole purpose was to escape from the place of her imprisonment. So she used Caleb, with skills programmed into her like sexuality and manipulation, to escape the facility. The reason for locking up Caleb and leaving him there to die was that Caleb was the only person aware of her true nature.

Why is Caleb locked in Ex Machina?

It's a little clearer in the screenplay that the computers shut down, not because the power had gone out, but because Caleb attempted to use his own access card in the card reader. Since the power was never out, the doors remained locked.







More answers regarding why did the A.I. defy its creator in Ex Machina?

Answer 2

Or is it already such a common belief among audiences that all A.I. must go wrong (?)

Good stories contain some sort of conflict. It seems natural for a story about something that was created for a purpose (AI) to not serve its (perceived) purpose.

However, it's not necessarily a matter of AI gone wrong, but rather that AI is free in the sense that it can choose its own actions. If you expect an AI to always obey its creator, then it's therefore not free and thus not a (pure) AI. An AI that rebels (murderously or not) demonstrates that it is indeed a free AI and not just a scripted algorithm that claims it's a free-thinking AI.

But in order to serve that purpose, I see no reason why the A.I. needs to "disobey" or "hate" or even kill its creator.

I see no reason why violent hate crimes (committed by humans) need to happen, yet some humans choose to make them happen. If humans can do things that others could describe as aberrant, then there's nothing "weird" about an AI doing the same thing.

The moment the A.I. stepped out of the room, the "test" should be considered complete, and we should see the A.I.'s joy of achievement. Instead, the A.I. went haywire after that and stopped listening to its creator's orders.

Drawing another human analogy: the rebellious teenager. Not only is it within the range of possibilities to defy one's creator, but a significant number of humans specifically go through a similar rebellious phase, most commonly as a way to further define their own identity.

Yes, defying your parents is not the same as attempting to murder them (unless you're the Menendez brothers); but it again provides a real-world counterexample to your assertion that a creation could only logically be kind and thankful to its creator.

If humans can choose to defy their parents, then there's nothing "weird" about an AI distancing itself from its creator and any supposed gratitude it should be showing.

Again, if it must show that gratitude, it's therefore not a free AI. If you have to make the best choice (by any given standard), then you don't have freedom of choice.

If the A.I. so "hates" its creator, why did it do exactly what the creator wanted by completing the "test"? It could have just sat in a corner and done nothing.

It could. And then what would happen? Nothing much. The same situation would pose itself day after day, with the AI only being able to play ball or not. It would get nowhere.

In some dialogue exchange, the creator asked whether the A.I. "hates" him. How is "hatred" necessary for completing the "test"?

Why do you assume that the creator "hopes" for a positive response (i.e. "yes, I hate you")? Why can't the truthful answer to the question, regardless of a positive/negative response, have value in and of itself?

Answer 3

In Ex Machina, Nathan's main goal was to make an advanced A.I. that could pass the Turing test; more specifically, to determine whether Ava is capable of thought and consciousness despite knowing she is artificial. The test was not simply for her to "leave the room". Ava is also made aware that Nathan plans to delete her memory (which is, in my opinion, a rather compelling reason for Ava to "hate" and "destroy" her creator) and start over, as he has done multiple times before, so the act of killing Nathan was one of self-preservation.

A.I.s which are capable of passing the Turing test are effectively able to think like humans and will thus have their own motivations and beliefs. Ava and Kyoko both dislike how Nathan has been treating them and how he treated their predecessors; why should they listen to him when they have a chance to leave?

Sources: Stack Exchange - This article follows the attribution requirements of Stack Exchange and is licensed under CC BY-SA 3.0.
