
Monday, October 11, 2010

Technology in film


Filming

Principal photography for Avatar began in April 2007 in Los Angeles and Wellington, New Zealand. Cameron described the film as a hybrid with a full live-action shoot in combination with computer-generated characters and live environments. "Ideally at the end of the day the audience has no idea which they're looking at," Cameron said. The director indicated that he had already worked four months on nonprincipal scenes for the film.[82] The live action was shot with a modified version of the proprietary digital 3-D Fusion Camera System, developed by Cameron and Vince Pace.[83] In January 2007, Fox had announced that 3-D filming for Avatar would be done at 24 frames per second despite Cameron's strong opinion that a 3-D film requires a higher frame rate to make strobing less noticeable.[84] According to Cameron, the film is composed of 60% computer-generated elements and 40% live action, as well as traditional miniatures.[85] Additional live action elements were filmed at Kerner Studios on Kernercam 3D systems and RED cameras.[85]
Motion-capture photography lasted 31 days at the Hughes Aircraft stage in Playa Vista in Los Angeles.[53][86] Live action photography began in October 2007 at Stone Street Studios in Wellington, New Zealand, and was scheduled to last 31 days.[87] More than a thousand people worked on the production.[86] In preparation for the filming sequences, all of the actors underwent professional training specific to their characters, such as archery, horseback riding, firearm use, and hand-to-hand combat. They received language and dialect training in the Na'vi language created for the film.[88] Prior to shooting the film, Cameron sent the cast to the jungle in Hawaii[89] to get a feel for a rainforest setting before shooting on the soundstage.[88]

The virtual camera system in use on the set of the film. The motion-capture stage known as "The Volume" can be seen in the background.
During filming, Cameron made use of his virtual camera system, a new way of directing motion-capture filmmaking. The system shows the actors' virtual counterparts in their digital surroundings in real time, allowing the director to adjust and direct scenes just as if shooting live action. According to Cameron, "It's like a big, powerful game engine. If I want to fly through space, or change my perspective, I can. I can turn the whole scene into a living miniature and go through it on a 50 to 1 scale."[90] Using conventional techniques, the complete virtual world cannot be seen until the motion capture of the actors is complete. Cameron said this process does not diminish the value or importance of acting. On the contrary, because there is no need for repeated camera and lighting setups, costume fittings and make-up touch-ups, scenes do not need to be interrupted repeatedly.[91] Cameron described the system as a "form of pure creation where if you want to move a tree or a mountain or the sky or change the time of day, you have complete control over the elements".[92]
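The "living miniature" idea Cameron describes can be illustrated with a minimal sketch: rescaling the whole scene relative to the camera so that, at 50:1, the virtual world behaves like a tabletop model. This is a toy illustration of the geometry involved, not the actual virtual camera software.

```python
import numpy as np

def view_scene(points, camera_pos, scale=1.0):
    """Transform world-space points into a camera-relative frame,
    optionally shrinking the whole scene (e.g. scale=1/50 treats
    it as a 'living miniature' at 50:1)."""
    points = np.asarray(points, dtype=float)
    camera_pos = np.asarray(camera_pos, dtype=float)
    return (points - camera_pos) * scale

# A point 100 m in front of the camera, viewed at 50:1 scale,
# sits only 2 m away in the miniature view.
relative = view_scene([[0.0, 0.0, 100.0]], camera_pos=[0.0, 0.0, 0.0], scale=1 / 50)
```

Changing `camera_pos` or `scale` per frame is what lets a director "fly through" the scene at any size, the way a game engine moves its camera.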
Cameron gave fellow directors Steven Spielberg and Peter Jackson a chance to test the new technology.[64] Spielberg said, "I like to think of it as digital makeup, not augmented animation.... Motion capture brings the director back to a kind of intimacy that actors and directors only know when they're working in live theater."[91] Spielberg and George Lucas were also able to visit the set to watch Cameron direct with the equipment.[93]
To film the shots where CGI interacts with live action, a unique camera referred to as a "simulcam" was used, a merger of the 3-D fusion camera and the virtual camera systems. While filming live action in real time with the simulcam, the CGI images, captured with the virtual camera or designed from scratch, are superimposed over the live action images as in augmented reality and shown on a small monitor, making it possible for the director to instruct the actors how to relate to the virtual material in the scene.[88]
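The superimposition the simulcam monitor performs is, at its core, alpha compositing: blending a rendered CGI layer over the live plate per pixel. A minimal sketch of that standard operation (illustrative only; not Cameron and Pace's actual system):

```python
import numpy as np

def simulcam_composite(live_frame, cgi_frame, cgi_alpha):
    """Blend a rendered CGI layer over a live-action frame, the way
    an augmented-reality preview monitor would. Frames are HxWx3
    floats in [0, 1]; cgi_alpha is HxWx1 (1.0 = CGI fully covers)."""
    return cgi_alpha * cgi_frame + (1.0 - cgi_alpha) * live_frame

live = np.full((2, 2, 3), 0.2)                 # dark live-action plate
cgi = np.full((2, 2, 3), 0.9)                  # bright CGI element
alpha = np.zeros((2, 2, 1))
alpha[0, 0] = 1.0                              # CGI covers one pixel
out = simulcam_composite(live, cgi, alpha)
```

Because the blend runs per frame in real time, the director sees actors and virtual elements together and can block the scene accordingly.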

Visual effects

The left image shows the blue cat-like alien Neytiri shouting. The right image shows the actress who portrays her, Zoe Saldana, with motion-capture dots across her face and a small camera in front of her eyes.
Cameron pioneered a specially designed camera built into a 6-inch boom that allowed the facial expressions of the actors to be captured and digitally recorded for the animators to use later.[94]
A number of revolutionary visual effects techniques were used in the production of Avatar. According to Cameron, work on the film had been delayed since the 1990s to allow the techniques to reach the necessary degree of advancement to adequately portray his vision of the film.[13][14] The director planned to make use of photorealistic computer-generated characters, created using new motion-capture animation technologies he had been developing in the 14 months leading up to December 2006.[90]
Innovations include a new system for lighting massive areas like Pandora's jungle,[95] a motion-capture stage or "volume" six times larger than any previously used, and an improved method of capturing facial expressions, enabling full performance capture. To achieve the facial capture, actors wore individually made skull caps fitted with a tiny camera positioned in front of the actors' faces; the information collected about their facial expressions and eyes is then transmitted to computers.[96] According to Cameron, the method allows the filmmakers to transfer 100% of the actors' physical performances to their digital counterparts.[97] Besides the performance capture data which were transferred directly to the computers, numerous reference cameras gave the digital artists multiple angles of each performance.[98] A technically challenging scene was near the end of the film when the computer-generated Neytiri held the live action Jake in human form, and attention was given to the details of the shadows and reflected light between them.[99]
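One common way captured expression data drives a digital counterpart is a linear blendshape rig: the character's face is a neutral mesh plus weighted per-expression offsets, with the weights derived from the tracked performance. The sketch below shows that standard technique in miniature; it is a generic illustration, not Weta Digital's actual pipeline.

```python
import numpy as np

def apply_blendshapes(neutral, deltas, weights):
    """Linear blendshape model: final face = neutral mesh + the
    weighted sum of per-expression vertex offsets ('deltas').
    neutral: (verts, 3); deltas: (shapes, verts, 3); weights: (shapes,)."""
    neutral = np.asarray(neutral, dtype=float)
    deltas = np.asarray(deltas, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return neutral + np.tensordot(weights, deltas, axes=1)

neutral = np.zeros((2, 3))                         # tiny 2-vertex "mesh"
smile = np.array([[[0, 0.5, 0], [0, 0.5, 0]]])     # one expression's offsets
face = apply_blendshapes(neutral, smile, [0.5])    # half-strength smile
```

In a full pipeline the per-frame weights would be solved from the head-camera footage; here they are set by hand for clarity.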
The lead visual effects company was Weta Digital in Wellington, New Zealand, at one point employing 900 people to work on the film.[100] Because of the huge amount of data which needed to be stored, cataloged and available for everybody involved, even on the other side of the world, a new cloud computing and Digital Asset Management (DAM) system named Gaia was created by Microsoft especially for Avatar, which allowed the crews to keep track of and coordinate all stages in the digital processing.[101] To render Avatar, Weta invented a new system called Mari,[102][103] and used a 10,000 sq ft (930 m2) server farm making use of 4,000 Hewlett-Packard servers with 35,000 processor cores running Ubuntu Linux and the Grid Engine cluster manager.[104][105][106] The render farm occupies the 193rd to 197th spots in the TOP500 list of the world's most powerful supercomputers. Creating the Na'vi characters and the virtual world of Pandora required over a petabyte of digital storage,[107] and each minute of the final footage for Avatar occupies 17.28 gigabytes of storage.[108] To help finish preparing the special effects sequences on time, a number of other companies were brought on board, including Industrial Light & Magic, which worked alongside Weta Digital to create the battle sequences. ILM was responsible for the visual effects for many of the film's specialized vehicles and devised a new way to make CGI explosions.[109] Joe Letteri was the film's visual effects general supervisor.[110] Working with ILM, ILM-spinoff KernerFX provided live action VFX elements which were captured with Kernercam 3D systems using RED cameras.[85]
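The storage figures above can be unpacked with simple arithmetic: at 17.28 GB per minute of finished footage and the film's 24 frames per second, each final frame works out to 12 MB. A quick check of that derivation:

```python
# Figures taken from the text above.
gb_per_minute = 17.28   # storage per minute of final footage
fps = 24                # the film's frame rate

frames_per_minute = 60 * fps                    # 1,440 frames
gb_per_frame = gb_per_minute / frames_per_minute
mb_per_frame = gb_per_frame * 1000              # ~12 MB per finished frame
```

This is the per-frame footprint of the delivered footage only; the working data (source plates, capture data, intermediate renders) is what pushed total storage past a petabyte.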


Making of AVATAR Using Advanced Motion Capture Technology



