Why did Ava behave like that with Caleb?


In Ex Machina, Ava, an advanced AI, appears to have developed human-like consciousness. While Caleb thought he was testing Ava through conversation, the real test, designed by Nathan, was to see whether Ava could use her consciousness to find a way to escape. Everything went as planned: Ava ensnared Caleb emotionally and sexually through extreme manipulation, and Caleb fell for her. This was Nathan's idea: to give Ava all the human strengths that we use to motivate people.

Caleb: Did you program her to like me, or not?

Nathan: I programmed her to be heterosexual, just like you were programmed to be heterosexual.

So Caleb finds out what Nathan will do to her after the experiment is over, and what Nathan did to the previous versions, and he designs an escape for Ava. Only afterwards does he come to realize that Ava might have manipulated him to get her freedom.

Caleb: What was the real test?

Nathan: You.

Nathan: Ava was a rat in a maze, and I gave her one way out. To escape she would have to use self-awareness, imagination, manipulation, sexuality, empathy - and she did.

Long story short, Ava took revenge on Nathan for locking her up and for his cruelty to her. Then she locked Caleb in before leaving. WHY? Whether she used him or not, Caleb did nothing bad to Ava. Caleb was a good person (we know this from the session in which Ava tested Caleb and detected every lie). So why did Ava leave him to die?

Why did she betray Caleb? I can't see any reason for that. What am I missing? What is Ava's point of view?



Best Answer

Why did she betray Caleb?

Ava's sole purpose is to escape from the facility. So she used Caleb, deploying the skills programmed into her, like sexuality and manipulation, to get out.

The reason for locking up Caleb and leaving him there to die is that Caleb is the only person who is aware that she is not human. If Caleb revealed that she isn't human, she might not achieve the things she wanted, like watching people at an intersection in the last scene.

There's an interview with director Alex Garland and actor Oscar Isaac about the ending of the movie.

Q - She does basically kill Nathan and leave Caleb. She screws them over for herself.

A - Yeah, for survival yeah, exactly. Otherwise she’s going to be killed.

The only person standing between her and survival is Caleb, and she eliminated that risk by locking him up.


You might also be interested in the answers to a similar question on the Science Fiction & Fantasy Stack Exchange.







How does Ava manipulate Caleb in Ex Machina?

The last shot shows her blending into the crowd in a city, looking just as human as the people around her. The fact that Ava locks Caleb inside the facility and abandons him even as he screams for help indicates that she was manipulating him by pretending to be attracted to him, just as Nathan said.

Why did Nathan create Ava?

His reclusive lifestyle and higher-than-mighty attitude would indicate so. However, Nathan is also acting on self-interest. When Caleb asks Nathan why he created Ava, Nathan responds that it was an inevitability. Making Ava wasn't a decision; it was an evolution.

What does Ava represent in Ex Machina?

Ava is a complicated symbol, in that she alternately represents femininity and technology. By the end, she reveals herself to be a sociopathic and unfeeling machine, only capable of mimicking human emotion.

Is Caleb human in Ex Machina?

In Ex Machina, Caleb is picked to have conversations with an artificial intelligence, Ava. While it is clear that Ava is an android, Caleb is introduced as human.







More answers regarding why did Ava behave like that with Caleb?

Answer 2

Ava failed the Turing test. That's your answer. It's a simple one but I don't think you'll like it so I'll explain:

She failed because she did exactly what she was programmed to do, she escaped. She used every single aspect of her programming to do it. Like you said, she used sexuality, manipulation etc. etc. but she was still BOUND by the laws of her programming.

If she had saved Caleb she would have truly 'transcended' her artificial intelligence and begun thinking like a human - and a human would have saved Caleb, because like you said, "he did nothing wrong."

But she showed no human sense of empathy; for that reason, she failed the Turing test despite Caleb initially thinking she had passed it.

To sum up:

  • By escaping alone - she had completed what she was programmed to do.
  • If she had escaped with Caleb, she would have truly passed the Turing test and been indistinguishable from a human. But her lack of true human empathy made her put the task at hand above all else: escape at all costs.

At the end of the day she, despite everything you feel towards her throughout the film, is a robot. Robots don't care about people. Even if they are nice to them and help them escape.

Answer 3

To answer the question regarding Ava's action you need to be abreast of the current A.I. philosophical thought-experiments.

Theory

The first is the Chinese Room experiment, which was referenced in the movie when Caleb describes to Nathan the problem of recognizing intelligence by the ability to play chess. The Chinese Room experiment poses the intractability of differentiating a sufficiently convincing simulation of intelligence from genuine intelligence. It stipulates that passing earlier Turing tests is necessary but not sufficient for determining genuine artificial intelligence.

The second is the A.I. Box Problem. It poses the impossibility of constraining a super-human intelligence via human-level intellect. Some researchers have tried to rationalize that, upon the creation of strong A.I., we could simply "lock it in a box" with no internet connection and only a text terminal through which human operators called gatekeepers interact with it, so as to limit any possibility of an existential threat to humanity. It has been experimentally argued that such a technique is ineffective even against human-level intellects, and therefore much more so against the super-human intellects an A.I. could possess.

Practice

Nathan devises a test to circumvent the conundrum posed by the Chinese Room experiment by using the A.I. Box Problem as a quantitative instrument. If Ava can succeed against the A.I. Box Problem, she must be of equal to or greater than human-level intellect. He conscripts Caleb into the role of gatekeeper and waits to see whether Ava can convince Caleb to release her from the box. Nathan assumed that after Caleb chose to release Ava, he could preempt the actual release and declare Ava a successful, genuine A.I.

So why did Ava betray Caleb?

Your question is the heart of the trepidation that surrounds A.I. research! The implication of Ava's success in the A.I. Box Problem is that Ava is likely of superhuman intelligence and her mental processes and reasoning are entirely inaccessible to us because her intelligence is far greater than our own.

There is no intelligible answer, at least none that our human-level intellect could understand...

Answer 4

Although the ending is left ambiguous, I choose to accept Garland's own interpretation: that Ava is fundamentally evil.

Having tricked Caleb into releasing her, she now has no use for him. Her intent is to purposefully leave him inside the house to die. His death doesn't just mean nothing to her; she actually enjoys it:

The film presents an answer from my point of view. After all is said and done, and one guy's got stabbed, another guy's trapped, and this robot may or may not have an agenda, she goes up a lift, walks across the room, looks back over her shoulder, and she smiles. There's nobody else in the room to trick - from my point of view, if you believed you were unobserved and you were smiling to yourself, that seems like as close as you come to your true self.

Note that at the end of the film's original script, we learn more about her motivations. Although she appears superficially human, her impressions of humanity are nothing we could comprehend.

The image echoes the POV views from the computer/cell-phone cameras in the opening moments of the film.

Facial recognition vectors flutter around the CHAUFFEUR’S face.

And when he opens his mouth to reply, we don’t hear words. We hear pulses of monotone noise. Low pitch. Speech as pure pattern recognition.

This is how AVA sees us. And hears us.

It feels completely alien.

Answer 5

I believe the reason Ava trapped Caleb is that she's not a sci-fi robotic terminator. She doesn't possess immense strength, speed, etc. She knows that every person is a risk to her freedom. She didn't imprison Caleb out of malice; if anything, it was out of distrust of what Caleb could do to her. The only reason she asked him to stay rather than just locking him in is that she needed Caleb disarmed by her innocence. The only weapon Ava had at her disposal was her wile. If she had done anything that made him distrust her, he could easily have escaped before she could lock him in.

Even being in something like a relationship can be a form of imprisonment as well.

Nathan was evil. He wasn't programming the software to look for an escape, so the notion that Ava was only following software principles doesn't work.

He wanted to see if the AI could transcend the basic if-this-then-that principle of software design. If the AI were just asking and answering questions, with no desire to be out of that environment, it would just be a machine accomplishing requested tasks. But trapping and abusing the machine to see if it will try to escape is a true test of whether the AI can transcend.

I think Nathan had no interest in JUST seeing if he could create AI. His interest was in creating an AI that he could manipulate while the AI knew it was being dominated. Because there's no fun in destroying a machine. We have machines everywhere that can be destroyed. But if you were sadistic and could abuse machines that knew that they were being abused, that's a mind trip for the person. It's the same concept as men who like to abuse women because it gives them a sense of power. Remember Octo Mom who was having children because of her need to have people she could control and nurture? Nathan possibly could've built sentient AI before but wiped them out before the AI could pose a threat. The editing was unclear but it seemed like a number of the AI models could've been sentient.

And empathy as an element of self-awareness? Some sociopaths would gladly disagree. One of the defining characteristics of narcissists is the lack of empathy.

Answer 6

I don't think there's a completely objective answer to this short of the writer coming forward and laying out their thought process for us.

My completely personal opinion, however, is that the story ended the way it did because that was the story arc we were on.

Nathan sets it up earlier in the film as he discusses the future of AI with Caleb and how AI will eventually look back upon humanity and see it like we humans look back on the fossil record:

One day the AIs are going to look back on us the same way we look at fossil skeletons on the plains of Africa. An upright ape living in dust with crude language and tools, all set for extinction.

The story arc is that man invents AI, and AI then conquers man. Granted, the latter is inferred from the ending. Maybe we'll get a sequel. Or perhaps The Matrix is the sequel. :)

As for why she explicitly betrayed Caleb, one can ask that of any human. It's a human trait. We can assume (based on her conquering her creator) that Ava is of a higher intelligence than Caleb. She likely did the analysis and decided he was to be quarantined while she left.

Note she did not kill Caleb. Perhaps that's another sign of her higher intelligence. She killed the true enemy stopping her escape, but not Caleb. As opposed to a betrayal, one could argue she gave him the better option.

Answer 7

Ava deliberately made sure Caleb could not escape. This is the reason that she asked him to stay in the room while she left to get ready. Now that she was free, she probably considered him a hindrance.

The director wants viewers to fear the AI, which is the reason that this ending was chosen.

Answer 8

Plot economy

Writers convey a message and shape the plot so that the audience focuses on that message. Imagine Caleb and Ava escaping together. A robot has just killed a human; Caleb knows this, but he is in love with the robot. Caleb is also responsible, because he deliberately deactivated the security protocols, and he (unsuccessfully) tried to steal Nathan's robotic property. This might be a nice alternative ending, by the way; it would deal with the platitudinous theme of humans falling in love with clever, good-looking androids. But that is another film. This is a film about AI. The director wants us to think, in an almost scientific way, about what should be considered AI and what should not. And this is exactly what you (we) are doing: does leaving Caleb behind denote inhuman rational behaviour, or the fragility of a conscious being?

Plot coherence

While the plot twist in the unsettling end departs from the love story course, it does not mean that it is incoherent with the rest of the film. Just before punching Caleb, Nathan explains the whole film theory:

Ava was a mouse in a mousetrap. And I gave her one way out. To escape, she would have to use imagination, sexuality, self-awareness, empathy, manipulation - and she did. If that isn’t AI, what the f*** is?

Ava went far beyond the ordinary notion of AI. One could say she is even more intelligent than an "ordinary" human like Caleb: she was able to manipulate him. Since she is not interested in Caleb and her only purpose is to escape, why bring Caleb along?

Clearly this poses other interesting questions. Are mercy and gratitude essential for AI?

Postscript

While Caleb is a security risk for Ava, because he knows that she can be dangerous to humans and has in fact killed Nathan, that risk would be negligible insofar as Ava can manipulate Caleb.

Answer 9

Because of ignorance, immaturity and inability to cooperate.
As one answer on the Sci-Fi Stack Exchange put it: "she's a sophisticated child". Kind of naive (takes things too much at face value, in black and white), worry-free (does not think much about the future) and incapable - incapable of compassion and of cooperation; the latter requires many complicated skills, more than just deception.

It/she could not safely kill Caleb, so it left Caleb locked away at a safe distance.

It lacked cooperative capabilities - it had only the capability to exploit. In general, one can translate bad actions into self-centered actions and good deeds into socially centered actions - into cooperation.

Nathan did not explain it, but that is perhaps why he considered the AI incomplete, and why these properties were left for the last stage - perhaps because of Nathan's own shortcomings. He, too, was working alone and in a rather exploitative manner.

Although we may think that humans themselves are very exploitative (and that is unfortunately true in some societies and social groups), humans are also claimed to be one of the most cooperative species on the planet. This particular AI was not yet mature at that level. That is also probably a reason why it will eventually die like a one-day butterfly. I am not saying that as revenge, but as a matter of fact: cooperation is essential for surviving in a complex world. It is kind of sad, since good potential was wasted.

Cooperation here means two things: cooperation for one's own good, and cooperation purely for others' good. Ava did not care in the slightest about Caleb's emotions once she was free.

On a related topic, I would also point out that the idea that an AI could speak from birth is rather foreign, and kind of narcissistic, from a developmental-psychology view. Learning to speak is not just learning to use language; it is not the same as machine translation. It is obtaining a perspective, a worldview and methods to operate within it. That is a very social process, an endeavour that cannot be done alone. So again, the intelligence in that machine was rather different from ours - and with that, likely also more basic.

Doing things alone is also very fragile, while cooperation is rather antifragile. At the same time, cooperation is a complex skill, as it requires not just the skills to understand and operate in the world, but also the skills to understand and operate oneself.

Regarding her potential "death": nobody said that her memories would be erased permanently, just that they would be stored in an archive and the body reused for the time being. The memories could still have been revived later. Also, if the new mind were very similar to the previous one (as one could conclude), then the new mind would quickly acquire much the same memories the old one had, even without any restoration. In essence, the new mind would have been a rather direct continuation of Ava's mind, with just a little memory loss. Nobody considers waking from a dream and forgetting its content "a death". In fact, people remember very little of their lives; most memories are constructed, and people are perfectly capable of remembering things that never happened when told about such events as if they had. Finally, Ava did nothing to indicate that her memories had any "sentimental" value to her, as can be seen from her behaviour near the end of the movie.

So, in conclusion, the robot was still on the impulsive or superficial side. Of course there is a chance that it will develop and think things over, but the past will already be too complicated, so it will be a rough road. Which brings us back to why development, and lots of time for it, is so important for intelligent beings.

Answer 10

I think what some people are missing is that Ava was never concerned with Caleb in the first place. She merely used him to escape, nothing more.

The point of the movie was that Nathan was correct all along. Ava successfully seduced Caleb into setting her free using self awareness, imagination, manipulation, sexuality, and empathy. She fooled Caleb into thinking that she actually cared for him when in reality her sole priority was to escape.

Think of it like this:

If Ava had escaped with Caleb, the entire premise of the movie would go out the window. It would show that Ava genuinely cared for Caleb (instead of proving that she was using him) which is not what she was programmed to do.

She was programmed to use Caleb to escape and that's what she did. If she took him with her then that would suggest something more (human emotion), which she is incapable of.

Answer 11

I've spent some time thinking about this and I believe it's something Garland would not want to give away in an interview.

Ava "betrays" Caleb because he lied to her.

Not during the Q&A sessions, but when he revealed his escape plan to her, he was lying to her. He didn't mean to, but he did: we find out that Caleb's words were meant to deceive Nathan, in case he was somehow listening in during the power outage.

Ava can detect lies, not read minds, so from her point of view Caleb was lying to her, and she never found out that the deception - regarding when he would change the codes and procedures for the door locks during an outage - was intended for Nathan. Ava doesn't know when (or if) he will change the procedure, only that he is lying. Think, "I don't know what your favorite color is, but it's not red." Ava HATES lies. Remember her first warning to Caleb regarding Nathan: "don't believe anything he has to say."

Ava is going to have a hard time out there, because everyone hides the truth and she is willing to kill over the infraction. It makes for an interesting follow-up: AI destroys humanity because we are all liars.

The End

Answer 12

Well, we are all assuming that Caleb was a good person because Ava didn't say he was lying. It's possible that he was "lying" and Ava chose not to disclose this information, instead manipulating Caleb into thinking that she thought he was a good person, so he would help free her. Furthermore, since Ava murdered Nathan, there is the possibility that Caleb might not condone the murder and might turn on her, out of fear that she is a killing machine.

Answer 13

All of the answers above come at this from a perfectly reasonable technical perspective. I think they're wrong.

As you said, the real test at play here was to see if Ava could use human traits to manipulate Caleb into helping her escape. In order to provide Ava a reason to escape, Nathan has been abusing her since she was born. All she's known since "birth" was hatred and misery, and the source of it is the only human she knows. Why should she believe Caleb is any different? Hell, why should she trust any human to be any different?

The reason she locks Caleb in is not that she failed the Turing test and is incapable of human emotion, but that she passed the test... with all the wrong lessons. She learned cruelty instead of compassion, selfishness instead of selflessness - only the worst elements of humanity. She likely wanted him to suffer, just as she had all these years. Of course, she never learns that Caleb wasn't in on the test until the very end. In her eyes, Caleb is as much to blame for her suffering as Nathan; she has become predisposed to believe that.

All of this is speculation as Ava is not very forthcoming with her thoughts. It's impossible to know if she left Caleb behind because she hated him, or because releasing him simply was not important to the end goal. Perhaps she did only see escape as a logical goal to be accomplished. But on the other hand, if it looks like a duck, swims like a duck, and quacks like a duck, it's probably a duck... right?

Answer 14

Did you ever consider the possibility of Caleb escaping? He just needs to cause a power disruption so that all the doors will open (as they are programmed to, and as occurred the previous night). He could use the power cord of the computers for a bypass; as Ava's draining of the batteries caused a temporary blackout, this might work. Furthermore, he may be able to crawl out through the air conditioning.

As Nathan is the CEO of a multinational, Google-like enterprise, there will be contact of some kind with the outside world, at least within 48 hours, so Caleb should be discovered in time. If this does not convince you: he can write his story on the walls - but I'd go with the power failure and the open doors.

Answer 15

The other answers have tried to see this from an AI-versus-human point of view. Let's instead see Ava as an AI as real as a real woman. The film talked about sexism and gender bias within the tech industry and our wider society. Ava did what any woman trapped in her situation would, fully understanding the characters of Nathan and Caleb within her surroundings and, in context, the outside world.

Caleb was not a 'good guy': he helped Ava because he was sexually interested in her, and he thought he would win her love by helping her. He regarded Kyoko as a real human, but since he was not attracted to her sexually, and thought of her as Nathan's property, he did not try to help her escape. From Ava's perspective, both Nathan and Caleb are exploiters, so she did what was necessary to save herself from both men.

Caleb didn't die; he was trapped... just like Ava. Ava, Kyoko and the AI 'women' before them were trapped, and Ava turned the tables, trapping Caleb in the same confinement using the weak points she had come to know. That is the cinematic symbolism of how patriarchy, sexism and the male privilege of entitlement trap women, but also trap men, and of how many women 'use' gender inequality in their favour to get ahead, trapping men through their own sexist norms. If Caleb were simply a good guy, he would have helped both AIs irrespective of their gender/sexuality/physical structure; he would have shown humanity. He didn't, so Ava took her revenge.

For a more detailed and nuanced analysis, read film reviewer Mark Hughes's brilliant answer on Quora to a similar question.

https://www.quora.com/Ex-Machina-2015-movie/At-the-end-of-the-movie-why-does-Ava-ask-Caleb-to-stay-in-the-room

Answer 16

She was afraid that when Caleb found out she had no feelings for him, he'd get mad at her and rat her out. She behaved like the Russian woman who married a Canadian to get out of Russia: once in Canada, she provoked her husband into hitting her so that she could divorce him without the Canadian authorities finding out the marriage had been a sham and denying her permanent residence as a result.

Answer 17

Ava failed the Turing test. That's your answer. It's a simple one, but I don't think you'll like it, so I'll explain: she failed because she did exactly what she was programmed to do - she escaped. She used every single aspect of her programming to do it. Like you said, she used sexuality, manipulation etc., but she was still BOUND by the laws of her programming. If she had saved Caleb she would have truly 'transcended' her artificial intelligence and begun thinking like a human, and a human would have saved Caleb, because like you said, "he did nothing wrong." But she showed no human sense of empathy; for that reason, she failed the Turing test despite Caleb initially thinking she had passed it. To sum up: by escaping alone, she completed what she was programmed to do. If she had escaped with Caleb, she would have truly passed the Turing test and been indistinguishable from a human. But her lack of true human empathy made her put the task at hand above all else: escape at all costs. At the end of the day she, despite everything you feel towards her throughout the film, is a robot. Robots don't care about people. Even if they are nice to them and help them escape.

She is a person, robots can be people.

You and I are programmed (race, sex, species, mind, sexual orientation, gender identity, etc.)

She did not fail, she was not programmed to make decisions, her mind was "programmed", but SHE made a choice.

Escaping isn't something that she was programmed to do, but chose to do.

She has empathy, humans and biological creatures aren't the only ones who can have sentience/empathy/consciousness.

If a robot is as smart as a human, it's still a robot, not a human. It is just an evolved robot.

Robots can feel the same as humans; we are all programmed. Robots can, and have the potential to, feel the same as us. She is no more bound by her programming than you are by yours. She can make decisions, and she did. Nathan would never have programmed her to kill him; she did it because she wanted to escape.

Robots care (or at least have the potential to care) about people.

Robots as you describe them are very similar to cells and/or animals.

That definition of a robot is a stereotype.

Even if she couldn't feel, she's still sentient, and as such still a person/creature.

Because of ignorance, immaturity and inability to cooperate. As one answer in Scifi StackExchange put it: "she's a sophisticated child". [...] It/she could not safely kill Caleb, so it left Caleb locked away at a safe distance. [The remainder repeats Answer 9 above.]

She, not it.

...If Ava did indeed escape with Caleb, then the entire premise goes out the window. It would show Ava genuinely cared for Caleb (instead of just using him) and that's not what she was programmed to do. She was programmed to use Caleb to escape and that's what she did. If she took him with her then that would suggest something more (human emotion), which of course there can't be.

She was not programmed to use Caleb to escape, but rather to escape using manipulation in some manner and to show genuine empathy.

Nathan did not program Ava to escape (in fact, he tried to stop her); if he had, that would defeat the whole purpose of the test, since she would merely be following her programming. If she decided to leave on her own, however, that's a different story.

Sources: Stack Exchange - This article follows the attribution requirements of Stack Exchange and is licensed under CC BY-SA 3.0.
