James Knight worked for Atlanta, Georgia’s Crawford Post Production, where he was instrumental in growing the facility to the largest in the south-east United States. Moving to Los Angeles, Knight worked as Motion Capture Producer or Project Manager on films including I Am Legend, Hulk and The Chronicles of Narnia: Prince Caspian, before spending the better part of four years managing the motion capture for James Cameron’s ground-breaking Avatar.
He is currently CEO of his own company, Knight Vision Studios, and a member of the VES and the Scientific and Technical Achievement Committee at AMPAS. This article consists of the highlights of a talk Knight conducted at London’s CVMP conference in November 2010.
[Editor’s note: we’ve assumed that you’re already familiar with the plot of Avatar, and how performance capture was used to transform live actors into CG characters. If not, the movie’s Wikipedia entry offers a brief run-down.]
We shot Avatar in Howard Hughes’ old offices in Playa Vista in California. He used to maintain the engines for the Spruce Goose in the building. It’s actually not a very big auditorium, so to have horses in there – we motion-captured horses as well as the actors – was surreal.
On Avatar, the motion capture volume was 72 by 32 feet, and we had 102 cameras capturing the data. We had a scaling volume off to the side where we would do range-of-motion work, and there were seven cameras for that, then 95 capturing the live motion-capture data.
Every morning, we’d ‘snap’ each actor in by recording them in the T-pose. Each one had different marker placements on their bodies, since they’re all different sizes and move differently.
We numbered the markers in the computer. With Avatar, we used 56 markers on the body, and when we snapped an actor in, the automatic labelling was always wrong, so we’d have to tell the system: ‘This is 20, this is 27’.
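To picture that relabelling step, here is a minimal Python sketch — an illustrative assumption, not the actual software used on Avatar. It greedily matches each labelled position from the actor’s stored T-pose template to the nearest unclaimed point the cameras see:

```python
def relabel_markers(template, observed):
    """template: {label: (x, y, z)} from the actor's calibrated T-pose.
    observed: list of (x, y, z) points with no labels attached.
    Returns {label: point}, greedily pairing each label with its
    nearest unclaimed observed point."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    remaining = list(observed)
    labelled = {}
    for label, ref in template.items():
        best = min(remaining, key=lambda p: dist2(p, ref))
        labelled[label] = best
        remaining.remove(best)
    return labelled
```

In practice a production system does this with far more robust matching, but the principle is the same: the computer only sees anonymous dots, and the T-pose is what lets it put names to them.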
We’d also do a ROM [range-of-motion calibration] each morning so the system knew the extremities of the constraints on each individual performer. Then we’d do it again at lunch, because with the LA sun beating down on the building, it would stretch slightly and the cameras would need to be recalibrated.
A lot of the actors hated doing T-poses. Jim Carrey once said: ‘Who did the first T-pose? Jesus Christ.’
The motion-capture pipeline
We used a passive optical system, so all the markers we used were reflective. When we got into combat scenes, or if there were scenes where we had reference cameras surrounding the actor, there would be occlusion and the system would get confused and start swapping markers, so we’d spend a lot of time [cleaning up] the data.
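A simple way to picture how swapped markers get caught in cleanup — this is a hypothetical sketch, not the tool the Avatar team used — is a continuity check: a real marker can only move so far between consecutive frames, so a sudden jump flags a likely swap or occlusion:

```python
import math

def flag_swaps(track, max_jump=0.05):
    """track: list of per-frame (x, y, z) positions for one marker.
    Returns the frame indices where the marker moves further than
    max_jump (metres, an assumed threshold) in a single frame --
    a likely marker swap or occlusion artefact to hand to cleanup."""
    bad = []
    for i in range(1, len(track)):
        if math.dist(track[i - 1], track[i]) > max_jump:
            bad.append(i)
    return bad
```

Flagged frames would then go to an artist (or a smarter solver) to reassign the correct labels — the time-consuming cleanup described above.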
We solved the characters in real time, solving for a skeleton and retargeting to the CG character. We streamed into MotionBuilder, which is the industry-standard viewing tool, live on set.
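Conceptually, retargeting maps the solved human skeleton onto the differently-proportioned CG character. A toy Python sketch of the idea — the function and data layout are assumptions for illustration, not the Avatar pipeline’s actual solver — copies joint rotations across and scales translations by the ratio of bone lengths:

```python
def retarget(source_pose, bone_scale):
    """source_pose: {joint: (rotation, translation)} solved from markers.
    bone_scale: {joint: float}, ratio of target bone length to source.
    Rotations copy straight across; translations scale so that, say,
    a nine-foot character takes proportionally longer strides."""
    return {
        joint: (rot, tuple(t * bone_scale.get(joint, 1.0) for t in trans))
        for joint, (rot, trans) in source_pose.items()
    }
```

Real retargeting also has to handle differing joint hierarchies and limits, but this is the core of what streamed into MotionBuilder each take.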
MotionBuilder crashed all the time. If it was the software, Jim [Cameron] wouldn’t yell, but if it was someone being slow and not doing their job, he’d have an opportunity to ask what the f*** was going on, so most of the time, we’d blame it on MotionBuilder.
When we show directors and VFX supervisors [live playback], they’re not too concerned about the positions of the hands: that’s tweaked afterwards. And on Avatar, the motion capture for the face was a post process: we just did the body, but Weta did the motion capture for the faces.
But point of contact is key. A big stumbling block with motion capture can be when you have a CG character who’s really fat. The computer doesn’t see that somebody’s fat; all it sees is a skeleton. So if you have a performer my size [they have to move in such a way that the character’s arms don’t intersect with its belly.] If you didn’t have the benefit of real-time playback, you’d just have to move your arm out and tell the computer to have that offset throughout the whole take.
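That blanket offset fix can be sketched in a few lines of Python — a hypothetical illustration of the idea, not the actual tooling — as a constant vector added to one joint across every frame of a take:

```python
def apply_arm_offset(frames, joint, offset):
    """frames: list of {joint: (x, y, z)} poses for a whole take.
    Adds a constant offset vector to one joint in every frame --
    the crude, whole-take fix you'd fall back on without real-time
    playback, to keep the arms clear of a fat character's belly."""
    return [
        {j: (tuple(p + o for p, o in zip(pos, offset)) if j == joint else pos)
         for j, pos in pose.items()}
        for pose in frames
    ]
```

With live playback, by contrast, the performer can simply see the intersection and adjust their own arms take by take.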
The virtual camera
A virtual camera is nothing more than a markered prop. As the director moves around in the volume, he’s essentially allowed to shoot a CG movie as if it were live action.
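Because the prop is just another tracked rigid body, driving the CG camera from it is a small per-frame mapping. A minimal sketch — the function, the quaternion convention, and the scale parameter are all illustrative assumptions, not the production system:

```python
def prop_to_camera(prop_pos, prop_rot, world_scale=1.0):
    """prop_pos: tracked (x, y, z) of the markered camera prop;
    prop_rot: its orientation (here a quaternion), copied across.
    world_scale (an assumed control) lets one real-world step cover
    more ground in the CG set. Returns this frame's camera transform."""
    return {
        'position': tuple(c * world_scale for c in prop_pos),
        'rotation': prop_rot,
    }
```

Feed that transform to the renderer every frame and the director is, in effect, operating a handheld camera inside the CG world.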
When Spielberg shot Jurassic Park, he’d shoot a live-action film plate and then have to animate to the film plate. We did the reverse. On Avatar, the cameras were character-driven. Jim could follow the action organically, which had never been done before.
We had a ‘wheels’ process where we’d smooth out the camera movement in post, but the result was incredibly organic. It was as if the camera dolly was in the jungle. Imagine if you were shooting that in a real jungle: the tracks you’d have, the cranes you’d have. It would be ridiculous. Camera moves like this just aren’t possible [in the real world].
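A crude stand-in for that ‘wheels’ pass — purely illustrative; the real process would have been far more sophisticated — is a centred moving average over the handheld camera track:

```python
def smooth_camera_path(path, window=5):
    """path: list of (x, y, z) camera positions, one per frame.
    Returns a centred moving average of the track, damping handheld
    jitter while preserving the overall move -- a toy version of
    smoothing a virtual-camera path in post."""
    half = window // 2
    out = []
    for i in range(len(path)):
        lo, hi = max(0, i - half), min(len(path), i + half + 1)
        frames = path[lo:hi]
        out.append(tuple(sum(c) / len(frames) for c in zip(*frames)))
    return out
```

The point is that the smoothing removes the shake without removing the human intent behind the move, which is why the result still feels organic.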
Jim wanted all the plants on set because he wanted a realistic performance from the actors. We had props standing in for virtual objects – a plank of wood would represent a log, say – then the geometry was represented in MotionBuilder, and our software would stream live into MotionBuilder.
Not all of the props were markered, nor was the debris we threw at the actors off camera, but it all aided in getting great secondary motion.
The actors’ headgear wasn’t markered either. Again, it was so Jim could get really good secondary motion from the head. And it helped the actors get used to [their virtual characters] having those silly ears.
I actually had tons of props from Avatar. But I didn’t realise it was going to be this big, so I gave them away to my friends and relatives. And then I was told that I should just have held on to them for a few years, and then they would be rightfully mine. It would probably have put my kid through college.
The perfect performance-capture movie?
Why was Avatar the perfect movie for motion capture? One of the reasons is secondary motion. Both Sigourney Weaver and Sam Worthington appear as live-action characters in the movie, and then as their virtual likenesses, their avatars. In the movie, the avatar is made from their DNA. So it only stands to reason that it would have the same walk cycles, [the same facial features], that it would hold itself in the same way. The motion of the avatars and the motion of the live-action characters were indistinguishable.
The other thing is the emotion behind the movie; that we were able to capture the emotion in the actors’ faces. It’s very hard to keyframe: it can be done, but it’s a hell of a lot easier with motion capture.
You hear stories about how hard Jim is to deal with, but he was actually really cool on the set. Every now and then he’d throw his toys out of the pram, and then we’d all giggle, like: ‘Daddy’s having a bad moment.’
Jim rarely wore a headset. But we all wore them, and we were on the movie for so long, this one guy ended up building up a sound array of Arnold Schwarzenegger, because Jim had worked with Arnold [on the Terminator movies]. After Jim said, ‘Great, guys. Well done,’ he’d play a clip of Arnold going ‘Bull-shit!’ And then everyone would snigger.
Looking back, I spent four years of my life on Avatar: lots of 18-hour days and even two 26-hour days. It wasn’t exactly fun. Because of how it turned out, we like to look back and say: ‘Oh man, what an awesome experience,’ but it was actually really frustrating.
One lunch I left to go to the gym and I was really looking forward to it, and the producer called me on the phone, and I had to go back. I had a John Cleese moment… you know, in Fawlty Towers, where he beats the car. People thought I was mad. Which I was. Avatar did that to you.
Since the completion of Avatar, James Knight has worked on both Real Steel and the upcoming The Amazing Spider-Man, and is now working on three undisclosed future projects.
Updated 25 January: In the original version of this story, we incorrectly stated that James Knight remains a producer for Giant Studios. Although Knight is a former employee of Giant, the principal performance capture provider on Avatar, he currently runs his own company.