Image created:
Playa Vista, CA; Wellington, NZ; Lots and lots of computers

AVATAR

During the three-year project, I spent the first 18 months working daily with Jim, the actors, editors, Virtual Art Department, and the motion capture teams to undertake the Performance Capture phase of the project in Playa Vista, California. We then moved to Wellington, New Zealand for six months, where I supervised all of the live action work for the show. For the final year of the project, I returned to Weta Digital to supervise a Shot Production team for several key sequences in the movie. I was fortunate to win Oscar and BAFTA awards for my contributions to the movie.

Before the visual effects could be planned, the entire fantasy world of Pandora had to be designed, right down to every leaf, bug, creature, and humanoid Na'vi character, including what they wear and the tools they use in everyday life. We also had to figure out how to accommodate Jim Cameron's mandate to shoot the virtual characters the same way he shoots a physically real movie.

Virtual Production

Visual effects are traditionally associated with Post Production; however, with increasing frequency, VFX work has become integral to the development and shooting stages of productions. Virtual Production established a new industry workflow on Avatar, offering directors a means to produce visual effects in real time during Production instead of deferring it all to Post. Jim wanted to direct the virtual characters the same way he directs actors on set, so the existing motion capture workflow was augmented with virtual environments and optimized to run in real time. This allowed Jim to direct actors inside a mocap volume and see them simultaneously puppeteer their virtual characters situated within a virtual set. It gave Jim and the actors a clear visual understanding of the layout, lighting, and atmosphere of the virtual worlds so they could intuitively adjust and refine direction and performances.

This scene of the helicopter landing deep in the jungles of Pandora shows a motion capture volume during a typical Virtual Production shoot. The actors are inside the helicopter as the crude physical mocap heli set is lowered to the ground. There was not enough room for the pilot and the starboard door gunner, so they were placed elsewhere in the volume and their virtual characters were offset back inside the virtual helicopter. Jim is directing the physical action while holding a Virtual Camera, through which he can see how the low-resolution proxy virtual action looks. This gave him the immediate feedback every director needs, and allowed him to stage the action exactly how he wanted on the day. Prior to developing this Virtual Production workflow, directors and editors would typically have to wait months for the motion capture data to be processed, the environment and helicopter assets to be built, and then for it all to be put together. During VFX Post Production, we had an exact template of the action and environment layout, and we could immediately progress into up-res'ing the assets, lighting, and atmospherics.

A picture after the helicopter has touched down and the actors are exiting the aircraft and looking around.

Jim could even puppeteer his own helicopter landing using a scale mocap prop.

Virtual Camera

The Virtual Camera was the real-time window into Pandora. Optical mocap markers were placed on it so the operator could move the camera simply by taking a step forward or backward, sliding side to side, or angling it up or down. Because it's "virtual", the operator could also instantly change scale to, for example, emulate a crane or Steadicam for larger or smoother camera moves.
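
As a rough illustration of the idea (not our production pipeline), the sketch below shows how a tracked rig pose might be mapped into the virtual set with an adjustable motion scale, so a small physical step can become a crane-sized move. The function name, the anchor point, and the numpy-based math are illustrative assumptions.

import numpy as np

def virtual_camera_pose(tracked_pos, tracked_rot, anchor, motion_scale=1.0):
    """Map a mocap-tracked Virtual Camera pose into the virtual set.

    tracked_pos  : (3,) rig position reported by the mocap system
    tracked_rot  : (3, 3) rig orientation as a rotation matrix
    anchor       : (3,) point in the volume treated as the virtual-world origin
    motion_scale : >1 amplifies operator motion (crane-like moves from small steps)
    """
    tracked_pos = np.asarray(tracked_pos, dtype=float)
    anchor = np.asarray(anchor, dtype=float)
    # Scale the operator's physical displacement about the anchor point;
    # orientation passes through unchanged so framing still feels handheld.
    virtual_pos = anchor + motion_scale * (tracked_pos - anchor)
    return virtual_pos, np.asarray(tracked_rot, dtype=float)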

Performance Capture

Another innovative technology in our Virtual Production workflow was the ability to capture actor facial performances concurrently with body performances, without limiting how far the actor could move within the volume. Prior to this, facial capture was restricted to the small area covered by a dense external mocap camera array. Painting dots on the face instead of applying small optical markers, and pointing a helmet-mounted lipstick camera straight at the face, allowed the actor to move freely in any direction and over any distance. Most importantly, it connected the facial performances to the body performances, which gave us complete character performance captures.
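
As a hedged sketch of the tracking side, the snippet below finds painted-dot centers in a single head-cam frame using a generic blob detector; the thresholds and the OpenCV-based approach are assumptions for illustration, not the system we actually used.

import cv2

# Detector loosely tuned for small, dark, roughly circular painted dots.
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 4
params.maxArea = 200
params.filterByCircularity = True
params.minCircularity = 0.6
detector = cv2.SimpleBlobDetector_create(params)

def detect_face_dots(frame_bgr):
    """Return (x, y) centers of candidate painted dots in one head-cam frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return [kp.pt for kp in detector.detect(gray)]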

A relatively simple scene of Neytiri and Jake crawling through some underbrush, represented as pool noodles in the mocap volume. Using the Virtual Camera, Jim can see the characters positioned within the virtual jungle set and give them direction. We can also see a video feed from the actor face cameras, and follow how well the dots on their faces are being tracked in real time.

Like a live action set, Jim could immediately dress the environment and stage action to his liking, and then pick up a Virtual Camera and compose his shots.

Simulcam

Extending Virtual Production into live action photography required a method for simultaneously tracking and visualizing virtual characters within a physical set. Doing this allowed Jim to see the virtual character performances and then move the real camera to precisely compose plates however he wanted.

In this scene, we pre-captured Jake's action on a scale mocap set (the Avatars and Na'vi are 10 feet tall), using the live action Med Techs for eyeline and positional reference.

Jake's performance was spatially registered to the physical set and video playback of his action was mixed with the live stream on the camera operator's display so he could see how to compose the shot.
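
A minimal sketch of the operator's-eye view, assuming the proxy render has already been registered to the camera: blend the pre-captured playback over the live feed so framing can account for the virtual characters. The function name and mix weight are illustrative, not the Simulcam implementation.

import cv2

def simulcam_monitor_frame(live_frame, proxy_frame, mix=0.5):
    """Blend a registered proxy render over the live camera feed for the monitor."""
    # Match the live plate's resolution, then do a simple weighted mix.
    h, w = live_frame.shape[:2]
    proxy = cv2.resize(proxy_frame, (w, h))
    return cv2.addWeighted(live_frame, 1.0 - mix, proxy, mix, 0.0)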

The movie was shot in native stereo using the Cameron Pace Fusion beam splitter rig that held Sony F950s and Sony F23s.

Lightstage 5

Another key technology we leveraged was a device that captures and synthesizes high resolution facial geometry and performance. It is a spherical gradient illumination scanner that estimates surface normal maps of an object from either its diffuse or specular reflectance, simultaneously from any viewpoint. Additionally, the normal map from the diffuse reflectance can produce a good approximation of subsurface scattering. The system consists of 156 LED lights, two SLR cameras, and a video projector. The captured spherical gradient images are converted into surface normals of the subject, which are then applied to the actor's CG character. The resulting renders yielded tangibly natural expressions and skin detail for our blue friends.
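
For a sense of the math involved, here is a minimal sketch of the diffuse-normal estimate from spherical gradient illumination, assuming four linearized grayscale captures (a full-on pattern plus X, Y, and Z gradient patterns); the ratio-and-remap step is the core idea, and the function name is mine.

import numpy as np

def normals_from_spherical_gradients(img_full, img_x, img_y, img_z, eps=1e-6):
    """Estimate per-pixel surface normals from spherical gradient illumination.

    img_full        : (H, W) image lit by the constant, full-on spherical pattern
    img_x/img_y/img_z : (H, W) images lit by gradient patterns along X, Y, and Z
    All inputs are linear-radiance float arrays of the same shape.
    """
    # Dividing each gradient image by the full-on image recovers the gradient
    # value "seen" by the surface normal, remapped from [0, 1] back to [-1, 1].
    nx = 2.0 * (img_x / (img_full + eps)) - 1.0
    ny = 2.0 * (img_y / (img_full + eps)) - 1.0
    nz = 2.0 * (img_z / (img_full + eps)) - 1.0

    n = np.stack([nx, ny, nz], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + eps  # normalize to unit length
    return n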

What a privilege and a lot of fun!