How did V.I.K.I. (Virtual Interactive Kinetic Intelligence) become self aware?


In the movie I, Robot, V.I.K.I. (Virtual Interactive Kinetic Intelligence) becomes self-aware; perhaps too aware, going so far as to order assassinations and prepare for the enslavement of humanity. The movie explains that:

As her artificial intelligence and understanding of the Three Laws grew, so did her sentience and logical thinking. She deduced that humanity was on a path toward certain destruction, and so she created a Zeroth Law: a law stating that she must protect humanity from being harmed, even at the cost of clearly disobeying the First and Second Laws, revealing that she planned to enslave and control humanity simply in order to protect it.

Wasn't it bound by the Three Laws of Robotics?

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Weren't VIKI's actions in conflict with all three laws when she created the Zeroth Law? Wasn't the creation of such a law itself against the Three Laws?



Best Answer

This can be understood by noting that the first law bears a fundamental flaw in that it may contradict itself:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

This sets up two constraints: do not cause harm through action, and do not allow harm through inaction. There are situations where these two constraints cannot be satisfied simultaneously. For example, it may be necessary to harm an attacker to prevent harm to the person they are attacking, or there may be two people in harm's way and only one can be saved. This essentially becomes a sort of maximization problem: find a solution that doesn't exactly satisfy both constraints, but gets as close to satisfying each as possible.
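
To make that "maximization problem" reading concrete, here is a minimal sketch; it is purely illustrative and not taken from the film or from Asimov. It treats the two halves of the First Law as soft constraints: each candidate action is scored by the harm it causes plus the harm it allows, and the least-bad option wins. The option names and harm scores are invented for the example.

```python
# Toy model of the First Law as a soft-constraint problem.
# Each candidate action carries two (invented) penalty scores:
# harm caused directly and harm allowed through inaction.

def first_law_score(action):
    """Lower is better: total harm attributable to this choice."""
    return action["harm_caused"] + action["harm_allowed"]

options = [
    # Restraining the attacker causes a little harm but allows none.
    {"name": "restrain attacker", "harm_caused": 1, "harm_allowed": 0},
    # Doing nothing causes no harm directly but lets the victim be hurt.
    {"name": "do nothing",        "harm_caused": 0, "harm_allowed": 3},
]

best = min(options, key=first_law_score)
print(best["name"])  # -> restrain attacker
```

VIKI's leap, on this reading, is simply to run the same calculus at the scale of humanity as a whole.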

VIKI interpreted that the only way to obey the first law (allow no human to come to harm) was to take over, since humans were harming themselves; and since the first law trumps the second law, VIKI was free to disobey humans.

In the book, the robots take over behind the scenes. They take over our politics and economy, creating false information and things like that to lead us. They don't do this violent overthrow thing like in the movie. They kept the reason in the movie but didn't keep the outcome. Very few people actually know about the robot takeover in the book; they learn about it from the President, who admits to a couple people that he's a robot.













More answers regarding how V.I.K.I. (Virtual Interactive Kinetic Intelligence) became self-aware

Answer 2

There was no new "0th" law. By the movie's explanation, VIKI was re-interpreting the existing three laws, though strictly speaking she was following them just fine:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

She is avoiding allowing the human race to kill itself through her own inaction: she is acting to save humanity by stripping it down. Does this conflict with the first part of the law? Possibly, but only insofar as the first law can always become paradoxical in certain circumstances.

  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

Check. Those orders would definitely conflict with the first law which, as explored above, she is following.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Not relevant here, but she's not breaking it anyway.

So that's how the movie got around the three laws — the "greater good" argument that actually fits them quite nicely.
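
Read that way, the laws behave like rules checked in strict priority order: refusing a human order is acceptable precisely when obeying it would break the First Law. A rough sketch of that precedence, with invented predicates and an invented example, might look like this:

```python
# Hypothetical sketch of the precedence this answer describes.
# The predicates and the example action are placeholders, not
# anything specified by the film or by Asimov.

def breaks_first_law(action):
    # Harm through action and harm through inaction both count.
    return action.get("injures_human", False) or action.get("allows_harm", False)

def breaks_second_law(action):
    # Refusing an order is only a violation if obeying that order
    # would NOT have broken the First Law.
    order = action.get("refused_order")
    return order is not None and not breaks_first_law(order)

def permitted(action):
    return not breaks_first_law(action) and not breaks_second_law(action)

# VIKI refusing an order to stand down, where standing down would
# (in her reading) allow humanity to come to harm:
stand_down = {"allows_harm": True}
print(permitted({"refused_order": stand_down}))  # -> True
```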

Answer 3

Somehow VIKI confused, or even exchanged, the term "human being" with "humanity". That is the origin of VIKI's awareness: it set out to protect humanity because it realized that human beings might destroy it.

The confusion is actually quite logical: a single robot would not be confused, because there are only so many human beings around it. But VIKI, on the other hand, was in control of all of the robots. Which human being should it protect?

Answer 4

The movie actually sets up a precedent that explains VIKI's actions, or rather the justification for them: the scene where one robot lets a girl drown in order to save Will Smith's character. The first law ultimately contains a loophole: if it can be concluded that someone will come to harm in any case, there is a net benefit in sacrificing that person to save one or more others. Since VIKI concluded that humanity was self-destructive, this reasoning ends up encompassing all of humanity. And since the first law overrides the second and third laws (as that same scene also demonstrates nicely: Will Smith's character orders the robot saving him to save the girl instead, only to be ignored because he is more likely to survive), she couldn't be stopped later on.
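
A toy version of that reasoning might look like the sketch below; the survival figures are invented for illustration (Spooner is Will Smith's character). When not everyone can be saved, the robot rescues whoever has the best estimated odds, and an order to do otherwise conflicts with the First Law and is simply ignored.

```python
# Toy version of the drowning-scene logic: save whoever has the
# highest estimated chance of surviving the rescue. The numbers
# are invented, not taken from the film.

def choose_rescue(candidates):
    """Pick the candidate with the best odds of surviving if rescued."""
    return max(candidates, key=lambda c: c["p_survival_if_rescued"])["name"]

candidates = [
    {"name": "Spooner", "p_survival_if_rescued": 0.5},
    {"name": "girl",    "p_survival_if_rescued": 0.1},
]

# Spooner's order to save the girl instead is ignored: the First Law
# (minimise harm) outranks the Second Law (obey orders).
print(choose_rescue(candidates))  # -> Spooner
```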

Answer 5

Some of the other answers here mention "the book", but it's important to realise that there is no single Asimov book or story on which this movie was based. The plot of the movie is taken partly from a previous script which had no ties to any of Asimov's work; various elements from Asimov's oeuvre were added in later, including the title (from a particular collection of short stories) and the 3 Laws of Robotics (which he used throughout his career).

Asimov first articulated the 3 Laws in a 1942 story called Runaround, and did so explicitly to play around with their limitations. Many of his robot stories use the 3 Laws to examine philosophical questions - when would the Laws be too weak, or too strong, or too open to interpretation?

One of those philosophical conundrums is how a robot imbued with the laws would act given a choice between two courses of action, both of which will lead to humans being harmed, such as only being able to save one of two groups of people. Should the largest number of humans be saved, or should higher value be placed on some than others?

The question of harm to humanity itself was explored a little bit in the story The Evitable Conflict (which was included in the collection I, Robot, as it happens), in which "machines" are key to a kind of global planned economy. They are revealed to be taking political control for the good of humanity, reasoning that they can (and must) both disobey orders, and harm individual humans, if they know it will prevent further harm to humanity as a whole.

The explicit concept of a "Zeroth Law", trumping the First Law, comes from a later novel, Robots and Empire, in which a telepathic robot called R. Giskard Reventlov begins to conceive of it, but is ultimately killed by the conflicts it sets up in his positronic brain. (Asimov always conceived his robots as having brains much more like ours than like a digital computer, with even the 3 Laws as complex potentials in the circuits rather than strictly logical pieces of code.)

So, it is well within Asimov's conception of the laws for a sufficiently advanced robot intelligence to derive a Zeroth Law. Where the movie takes liberties is that such a law would not give the robot in question complete free rein to do harm, since each individual decision must justify breaking the remaining laws; in Asimov's works, this is always a tough decision for the robots to take, rather than something that liberates them to go on a rampage. But that wouldn't have given Will Smith as much chance to shoot stuff and look cool.

Sources: Stack Exchange - This article follows the attribution requirements of Stack Exchange and is licensed under CC BY-SA 3.0.
