What was the virtual reality research that was used to produce the movie A.I.?


Years ago, I watched a behind-the-scenes documentary on the movie A.I. (2001). It explained that the town scenes were shot by placing markers on the studio ceiling. Each film camera had an upward-facing camera attached that tracked those markers, allowing software to compute each camera's coordinates and orientation. Chroma-key software then replaced the studio walls with the CGI scene automatically. That way it didn't matter if the cameras moved; no manual labour was required to line up the CGI background, so they could shoot in real time with improvised camera angles.

What is this process called, and what research (if any) is it based on?



Best Answer

Not sure that there's a specific name for this (it's a form of 3D reconstruction from a single image or a series of images, and essentially the same process used for augmented reality). The technique itself is rather easy to explain: it's essentially "reversed" motion capture.

The process is the same for each and every camera, just as in classic augmented reality:

  • Object tracking is used to follow/"see" the markers.
  • It's possible to transform world coordinates to screen coordinates and vice-versa if you know the proper transformation matrix.
  • This transformation matrix is constructed from two sets of parameters:

    • Intrinsic camera parameters describe the camera itself, e.g. its field of view and lens distortion.
    • Extrinsic camera parameters describe the camera in the world: its position and rotation.
  • Since the walls/ceiling of the studio and the virtual positions of the markers won't change, these are all known.

  • The intrinsic camera parameters are known as well.
  • The only values missing are the extrinsic camera parameters (i.e. the position and orientation of the camera in the scene).

  • From there on it's all "simple math", as long as you assume one thing: all markers have to lie in the same plane (since depth information is lost when recording with a single camera).
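The planar-marker assumption is what makes the math tractable: the mapping from a plane to the image is a homography, and given the intrinsic parameters you can factor the extrinsic pose back out of it. Here is a minimal numpy sketch of that idea (all values are made up for illustration; production tracking systems use far more robust estimators):

```python
import numpy as np

def homography_from_points(world_xy, image_xy):
    """Direct Linear Transform: estimate the 3x3 homography mapping
    planar world points (z = 0) to pixel coordinates."""
    rows = []
    for (X, Y), (u, v) in zip(world_xy, image_xy):
        rows.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        rows.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    # The homography is the right null vector of this 8x9 system.
    _, _, vt = np.linalg.svd(np.asarray(rows))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def pose_from_homography(H, K):
    """Recover the extrinsics (R, t) from a homography, given the
    intrinsics K, using H ~ K [r1 r2 t] for points on the z = 0 plane."""
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])
    if A[2, 2] < 0:            # keep the camera in front of the plane
        lam = -lam
    r1, r2 = lam * A[:, 0], lam * A[:, 1]
    r3 = np.cross(r1, r2)      # complete the right-handed rotation
    return np.column_stack([r1, r2, r3]), lam * A[:, 2]
```

With four (or more) known marker positions on the ceiling plane and their tracked pixel positions, `pose_from_homography` yields exactly the missing extrinsic parameters described above.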

Now that you've got both the extrinsic and intrinsic camera parameters, you can construct the complete transformation matrix, which lets you project your virtual 3D world/background into the camera image at the correct positions.
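Concretely, the "complete transformation matrix" is the 3x4 projection matrix P = K [R | t]. A small sketch with hypothetical values (not the actual production numbers):

```python
import numpy as np

# Hypothetical intrinsics K and a recovered extrinsic pose (R, t).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)                   # camera axes aligned with the world
t = np.array([0.0, 0.0, 5.0])   # camera 5 units in front of the plane

# Complete projection matrix: world point -> homogeneous pixel.
P = K @ np.hstack([R, t.reshape(3, 1)])

def project(point_3d):
    """Project a virtual 3D world point into pixel coordinates."""
    p = P @ np.append(point_3d, 1.0)
    return p[:2] / p[2]         # perspective divide

print(project(np.array([0.0, 0.0, 0.0])))  # -> [320. 240.]
```

Every vertex of the virtual background is pushed through `project`, so the CGI scene lands at the right place in the frame no matter where the camera currently is.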

In classic augmented reality, this would be enough. However, in this case they replaced the intrinsic camera parameters with those of the actual camera used for filming, and they transformed the extrinsic camera parameters to account for the angle at which the two cameras were attached to each other (i.e. rotating the view).
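That last step is just composing the tracking camera's pose with the fixed offset of the rig. A sketch, assuming for illustration that the upward-facing tracking camera is pitched exactly 90 degrees relative to the filming camera (the real rig geometry would be measured, not assumed):

```python
import numpy as np

# Hypothetical rig offset: the tracking camera looks straight up,
# i.e. is rotated 90 degrees about the x-axis of the filming camera.
theta = np.pi / 2
R_offset = np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(theta), -np.sin(theta)],
                     [0.0, np.sin(theta),  np.cos(theta)]])

def filming_camera_pose(R_track, t_track):
    """Turn the tracking camera's extrinsics into the filming camera's
    extrinsics by applying the fixed rig rotation (cameras co-located)."""
    return R_offset @ R_track, R_offset @ t_track
```

Because the offset is rigid and known, this conversion is a constant matrix multiply per frame, which is what makes the whole pipeline fast enough to run in real time.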

If you've got a Nintendo 3DS, this is essentially the same process used in its augmented-reality games to determine the console's position relative to the AR cards and to draw the scene properly on screen.








Sources: Stack Exchange - This article follows the attribution requirements of Stack Exchange and is licensed under CC BY-SA 3.0.
