How New Motion Capture Tech Transformed Actors Into Creatures for "Dawn of the Planet of the Apes" | Innovation | Smithsonian

Actor Andy Serkis's motion-capture performance rendered into a photo-perfect computer-generated ape. (Courtesy 20th Century Fox)

How New Motion Capture Tech Transformed Actors Into Creatures for "Dawn of the Planet of the Apes"

The special effects team behind Gollum and King Kong took on its most challenging feat yet: animating 2,000 apes in a real forest


Dawn of the Planet of the Apes takes place 10 years after the conclusion of Rise of the Planet of the Apes—enough time for apes to have built their own civilization outside San Francisco and for a virus to have wiped out much of the human population.

In the real world, it’s been three years since Rise, but it may as well have been a decade based on how far the technology behind the film has come.

In the most recent film, the visual effects team had not one, two, or even a dozen apes to animate: they had 2,000. The apes, which now have human-like intelligence, have to act and emote in groups. And they have to do it all on location.

Believability here is key. Primates are humans’ closest living relatives, which means audiences might not be as easily fooled as they are by fantastical creatures like Gollum and Davy Jones.

To achieve this, WETA Digital, the New Zealand digital effects house co-founded by Peter Jackson, developed what may be the most sophisticated motion capture (mo-cap) system ever built—a network of dozens of cameras and hundreds of trackers connected wirelessly to a central server.

Motion capture is a technique in which animators record the movement of an actor and convert it into a 3D digital model. Actors are dotted with a series of markers, which provide animators with a three-dimensional mesh map of their bodies and faces. The technical team can then render a new face and body on top of that grid, translating the performance onto a computer-generated character.
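The marker-to-model idea can be sketched in miniature. Everything below—the marker names, the coordinates, the simple centroid rule—is invented for illustration; WETA's production pipeline is vastly more elaborate, with hundreds of markers feeding a full facial and body rig.

```python
# One captured frame of marker positions: marker name -> (x, y, z) in meters.
# Marker names and coordinates here are hypothetical, for illustration only.
frame = {
    "elbow_outer": (0.42, 1.10, 0.05),
    "elbow_inner": (0.38, 1.08, 0.02),
}

def joint_position(frame, marker_names):
    """Estimate a skeleton joint as the centroid of its surrounding markers."""
    points = [frame[name] for name in marker_names]
    n = len(points)
    return tuple(sum(axis) / n for axis in zip(*points))

# The digital character's elbow joint is driven by the two elbow markers;
# an animator's rig would then pose the CG ape's arm from joints like this one.
elbow = joint_position(frame, ["elbow_outer", "elbow_inner"])
```

Chaining estimates like this across the whole marker set yields the "three-dimensional mesh map" the article describes, onto which a new face and body are rendered.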

Though the technique has gotten a lot of attention in recent years, thanks in large part to Andy Serkis’s performance as Gollum in The Lord of the Rings: The Two Towers (he also plays Caesar, the ape leader in Dawn), it’s actually been used in various forms for decades. Animators on Disney’s Snow White traced live-action footage to draw characters, a rudimentary version of the technique. Mo-cap performers have provided the bones for popular video game characters, including Lara Croft and even Mario.

Modern filmmakers didn’t hop on the bandwagon until the early 2000s. Animated films were the first target. In The Polar Express (2004), director Robert Zemeckis used mo-cap to allow Tom Hanks to play multiple characters throughout the film. More recently, directors have used mo-cap to add to or create the appearance of live-action moviemaking. The Na’vi creatures in James Cameron’s Avatar and Serkis’s Gollum are the most famous examples.

But because of the delicacy of the process, mo-cap is most often done in the studio, where capture specialists have complete control of the lighting and scenery. The rest of the scene is either shot separately and merged with the mo-cap performance or animated on its own. When outdoors, as in the previous Apes film, environments are small and controlled. 

What makes the new film so groundbreaking is that 85 percent of Dawn was filmed on location outside New Orleans or in Vancouver forests, according to a report in IEEE Spectrum. The visual effects team hid 50 motion-capture cameras throughout the sets to ensure that as actors moved throughout a scene, passing in front of one another or behind brush, at least one of the cameras would still see them. A single frame could contain as many as 13 actors, each sporting 48 LED mo-cap markers; cameras beamed footage to a local server over Wi-Fi so there were no wires to hide.
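The logic behind that camera redundancy is simple to sketch: a marker's track survives as long as any single camera still has a clear line of sight to it. The camera and marker names below are made up for the example, not drawn from the production.

```python
# Each camera reports the set of markers currently in its view.
# Names are hypothetical; the real system tracked many actors at once,
# each wearing dozens of LED markers.
camera_views = {
    "cam_01": {"lead_head", "lead_wrist"},
    "cam_02": {"lead_head"},   # wrist occluded by brush from this angle
    "cam_03": set(),           # actor has moved out of this camera's view
}

def tracked_markers(camera_views):
    """Union of all camera views: one unobstructed sight line keeps a track alive."""
    seen = set()
    for markers in camera_views.values():
        seen |= markers
    return seen

# The wrist stays tracked even though two of the three cameras lost it.
visible = tracked_markers(camera_views)
```

With 50 cameras hidden through a forest set, the odds of every sight line to a given marker being blocked at once become very small—which is the point of the setup.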

The team transmitted data from each day’s shooting back to the WETA office in New Zealand for rendering. Anywhere from 200 to 50,000 processors ran at once, creating fur, skin, eyes, and fingernails with pristine photorealism.
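Spreading a shot across that many processors is, at heart, a scheduling problem: frames render independently, so they can be divided among nodes. A minimal sketch of that division follows—the frame and node counts are arbitrary, not figures from the production.

```python
def split_frames(frame_count, workers):
    """Divide a shot's frames as evenly as possible across render nodes."""
    base, extra = divmod(frame_count, workers)
    chunks, start = [], 0
    for i in range(workers):
        size = base + (1 if i < extra else 0)  # early nodes absorb the remainder
        chunks.append(range(start, start + size))
        start += size
    return chunks

# A hypothetical 240-frame shot (10 seconds at 24 fps) across 8 render nodes:
chunks = split_frames(240, 8)
```

Because each frame is independent, throughput scales almost linearly with node count—which is why a farm can usefully grow from 200 processors to 50,000.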

“I think this is a high watermark for WETA for photorealism in performance capture,” director Matt Reeves said in an interview with New York magazine. “No one has tried to push it as far as we did on this film.”

For WETA, Dawn pushed the limits of scale more than anything. Unlike prior efforts, its setup required painstaking daily calibration as well as protection from the elements.

At the end of the day, what will make Dawn stand out against other mo-cap-heavy films is the honesty of the performances. The level of detail WETA’s system is able to capture allows animators to map real, human emotion onto computer-generated apes.

WETA’s work allows Serkis, the film’s star, to think of mo-cap as little more than acting in extremely advanced makeup.

“I just act. With performance capture, there is no mystery to it,” he told The New Zealand Herald.

“The audiences want to be moved,” he continued, “that doesn't happen by a visual effect; that happens by an actor's performance.”

So far, critics appear to agree. A.O. Scott, film critic for The New York Times, writes: “[Serkis’s] facial expressions and body language are so evocatively and precisely rendered that it is impossible to say where his art ends and the exquisite artifice of Weta Digital begins."

Audiences might not be able to tell the difference, either. But with motion capture so captivating, it's getting more difficult to care.

About Corinne Iozzio

Corinne Iozzio is a New York–based technology writer and editor. When she’s not fiddling with LEGOs or Nerf blasters, she covers gadgets and emerging tech for various publications, including Popular Science and Scientific American.
