
Interview: Avatar mocap producer James Knight

Tuesday, January 24th, 2012 | Posted by Jim Thacker

James Knight worked for Atlanta, Georgia’s Crawford Post Production, where he was instrumental in growing the facility to the largest in the south-east United States. Moving to Los Angeles, Knight worked as Motion Capture Producer or Project Manager on films including I Am Legend, Hulk and The Chronicles of Narnia: Prince Caspian, before spending the better part of four years managing the motion capture for James Cameron’s ground-breaking Avatar.

He is currently CEO of his own company, Knight Vision Studios, and a member of the VES and the Scientific and Technical Achievement Committee at AMPAS. This article consists of the highlights of a talk Knight gave at London’s CVMP conference in November 2010.

[Editor's note: we've assumed that you're already familiar with the plot of Avatar, and how performance capture was used to transform live actors into CG characters. If not, the movie's Wikipedia entry offers a brief run-down.]

We shot Avatar in Howard Hughes’ old offices in Playa Vista in California. He used to maintain the engines for the Spruce Goose in the building. It’s actually not a very big auditorium, so to have horses in there – we motion-captured horses as well as the actors – was surreal.

On Avatar, the motion capture volume was 72 by 32 feet, and we had 102 cameras capturing the data. We had a scaling volume off to the side where we would do range-of-motion work, and there were seven cameras for that, then 95 capturing the live motion-capture data.


The Avatar motion-capture volume: formerly Howard Hughes’ offices.

Every morning, we’d ‘snap’ each actor in by recording them in the T-pose. Each one had different marker placements on their bodies, since they’re all different sizes and move differently.

We numbered the markers in the computer. With Avatar, we used 56 markers on the body, and when we snapped it in, it was always wrong, so we’d have to tell the system: ‘This is 20, this is 27’.
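
[Editor's note: the relabelling step Knight describes can be sketched roughly in code. The sketch below is illustrative only — the marker names, positions and greedy nearest-neighbour matching are our assumptions, not Giant Studios' actual pipeline — but it shows the idea: each performer has a template of expected T-pose marker positions, and mislabelled captured points are reassigned to the nearest template slot.]

```python
# Illustrative sketch of T-pose marker relabelling (NOT the actual Avatar
# pipeline). Each performer has a template mapping marker names to expected
# T-pose positions; captured points with wrong labels are matched to the
# nearest template position and renamed.

import math

def relabel_markers(template, captured):
    """template: {name: (x, y, z)} expected T-pose positions.
    captured: list of (x, y, z) points with unknown or swapped labels.
    Returns {name: point} by greedy nearest-neighbour assignment."""
    remaining = list(captured)
    labelled = {}
    for name, expected in template.items():
        # claim the unassigned captured point closest to this marker's slot
        best = min(remaining, key=lambda p: math.dist(p, expected))
        labelled[name] = best
        remaining.remove(best)
    return labelled

# Hypothetical left/right shoulder markers, captured in swapped order:
template = {"LSHO": (-20.0, 140.0, 0.0), "RSHO": (20.0, 140.0, 0.0)}
points = [(19.5, 139.0, 0.2), (-20.3, 140.5, -0.1)]
print(relabel_markers(template, points))
```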

We’d also do a ROM [range-of-motion calibration] each morning so the system knew the extremities of the constraints on each individual performer. Then we’d do it again at lunch, because with the LA sun beating down on the building, it would expand slightly and the cameras would need to be recalibrated.

A lot of the actors hated doing T-poses. Jim Carrey once said: “Who did the first T-pose? Jesus Christ.”

The motion-capture pipeline
We used a passive optical system, so all the markers we used were reflective. When we got into combat scenes, or if there were scenes where we had reference cameras surrounding the actor, there would be occlusion and the system would get confused and start swapping markers, so we’d spend a lot of time [cleaning up] the data.

We solved the characters in real time, solving for a skeleton and retargeting to the CG character. We streamed into MotionBuilder, which is the industry-standard viewing tool, live on set.
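
[Editor's note: the solve-and-retarget step can be sketched in simplified form. In the sketch below — our illustration, not the actual Avatar solver — joint rotations transfer to the CG character unchanged, while the root translation is scaled by the size ratio between the performer's skeleton and the character's. Real solvers are far more involved.]

```python
# Simplified sketch of retargeting a solved performer pose onto a CG
# character of different proportions (illustrative only). Rotations copy
# over directly; root translation scales with the characters' size ratio.

def retarget(performer_pose, performer_leg_len, character_leg_len):
    """performer_pose: {'root_pos': (x, y, z), 'rotations': {joint: (rx, ry, rz)}}.
    Returns the same pose mapped onto the character's proportions."""
    scale = character_leg_len / performer_leg_len
    x, y, z = performer_pose["root_pos"]
    return {
        # a bigger character covers proportionally more ground per step
        "root_pos": (x * scale, y * scale, z * scale),
        # joint angles are size-independent, so they transfer unchanged
        "rotations": dict(performer_pose["rotations"]),
    }

pose = {"root_pos": (0.0, 90.0, 10.0), "rotations": {"knee_L": (35.0, 0.0, 0.0)}}
print(retarget(pose, performer_leg_len=80.0, character_leg_len=120.0))
```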

MotionBuilder crashed all the time. If it was the software, Jim [Cameron] wouldn’t yell, but if it was someone being slow and not doing their job, he’d have an opportunity to ask what the f*** was going on, so most of the time, we’d blame it on MotionBuilder.


Director James Cameron views live playback on the Avatar set.

When we show directors and VFX supervisors [live playback], they’re not too concerned about the positions of the hands: that’s tweaked afterwards. And on Avatar, the motion capture for the face was a post process: we just did the body, but Weta did the motion capture for the faces.

But point of contact is key. A big stumbling block with motion capture can be when you have a CG character who’s really fat. The computer doesn’t see that somebody’s fat; all it sees is a skeleton. So if you have a performer my size [they have to move in such a way that the character's arms don't intersect with its belly.] If you didn’t have the benefit of real-time playback, you’d just have to move your arm out and tell the computer to have that offset throughout the whole take.
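
[Editor's note: the "constant offset for the whole take" workaround Knight mentions can be sketched like this. The joint names and a single shared offset are our simplifying assumptions for illustration; a real pipeline would handle each limb and side separately.]

```python
# Illustrative sketch of baking a constant arm offset into every frame of a
# take, so a slim performer's motion clears a fat CG character's belly.
# Joint names and the single shared offset are hypothetical simplifications.

def apply_arm_offset(frames, offset):
    """frames: list of {joint: (x, y, z)} positions, one dict per frame.
    offset: (dx, dy, dz) added to each arm joint in every frame."""
    arm_joints = {"shoulder_L", "elbow_L", "wrist_L",
                  "shoulder_R", "elbow_R", "wrist_R"}
    out = []
    for frame in frames:
        adjusted = {}
        for joint, (x, y, z) in frame.items():
            if joint in arm_joints:
                dx, dy, dz = offset
                adjusted[joint] = (x + dx, y + dy, z + dz)
            else:
                adjusted[joint] = (x, y, z)  # non-arm joints pass through
        out.append(adjusted)
    return out
```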

The virtual camera
A virtual camera is nothing more than a markered prop. As the director moves around in the volume, he’s essentially allowed to shoot a CG movie as if it were live action.
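
[Editor's note: "nothing more than a markered prop" can be made concrete with a small sketch. The three-marker rig below is our illustrative assumption — in practice more markers and filtering are used — but it shows how tracked marker positions define a camera transform in the CG scene each frame.]

```python
# Illustrative sketch of deriving a virtual-camera transform from three
# tracked markers on a prop: one at the camera origin, one defining the
# view direction, one defining roughly "up". Marker layout is hypothetical.

def camera_from_markers(origin, forward_marker, up_marker):
    """Build an orthonormal camera basis from three marker positions."""
    def sub(a, b): return tuple(ai - bi for ai, bi in zip(a, b))
    def norm(v):
        m = sum(c * c for c in v) ** 0.5
        return tuple(c / m for c in v)
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    fwd = norm(sub(forward_marker, origin))          # view direction
    right = norm(cross(fwd, sub(up_marker, origin))) # perpendicular to both
    up = cross(right, fwd)                           # re-orthogonalised up
    return {"position": origin, "forward": fwd, "right": right, "up": up}

cam = camera_from_markers((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), (0.0, 1.0, 0.0))
print(cam)
```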

When Spielberg shot Jurassic Park, he’d shoot a live-action film plate and then have to animate to the film plate. We did the reverse. On Avatar, the cameras were character-driven. Jim could follow the action organically, which had never been done before.

We had a ‘wheels’ process where we’d smooth out the camera movement in post, but the result was incredibly organic. It was as if the camera dolly was in the jungle. Imagine if you were shooting that in a real jungle: the tracks you’d have, the cranes you’d have. It would be ridiculous. Camera moves like this just aren’t possible [in the real world].


James Knight on stage at CVMP 2010. On-set vegetation helped actors pitch performances in jungle scenes.

Jim wanted all the plants on set because he wanted a realistic performance from the actors. We had props standing in for virtual objects – a plank of wood would represent a log, say – then the geometry was represented in MotionBuilder, and our software would stream live into MotionBuilder.

Not all of the props were markered, nor was the debris we threw at the actors off camera, but it all aids in getting great secondary motion.

The actors’ headgear wasn’t markered either. Again, it was so Jim could get really good secondary motion from the head. And it helped the actors get used to [their virtual characters] having those silly ears.


Sigourney Weaver on set (below left) and in digital form (right). The ‘silly ears’ on the mocap suit helped the actors remain in character during recordings.

I actually had tons of props from Avatar. But I didn’t realise it was going to be this big, so I gave them away to my friends and relatives. And then I was told that I should just have held on to them for a few years, and then they would be rightfully mine. It would probably have put my kid through college.

The perfect performance-capture movie?
Why was Avatar the perfect movie for motion capture? One of the reasons is secondary motion. Both Sigourney Weaver and Sam Worthington appear as live-action characters in the movie, and then as their virtual likenesses, their avatars. In the movie, the avatar is made from their DNA. So it only stands to reason that it would have the same walk cycles, [the same facial features], that it would hold itself in the same way. The motion of the avatars and the motion of the live-action characters were indistinguishable.

The other thing is the emotion behind the movie; that we were able to capture the emotion in the actors’ faces. It’s very hard to keyframe: it can be done, but it’s a hell of a lot easier with motion capture.


James Cameron on set. “He brought his helicopter to work one day, which was cool,” says James Knight.

You hear stories about how hard Jim is to deal with, but he was actually really cool on the set. Every now and then he’d throw his toys out of the pram, and then we’d all giggle, like: “Daddy’s having a bad moment.”

Jim rarely wore a headset. But we all wore them, and we were on the movie for so long, this one guy ended up building up a sound array of Arnold Schwarzenegger, because Jim had worked with Arnold [on the Terminator movies]. After Jim said, “Great, guys. Well done,” he’d play a clip of Arnold going “Bull-shit!” And then everyone would snigger.

Surviving Avatar
Looking back, I spent four years of my life on Avatar: lots of 18-hour days and even two 26-hour days. It wasn’t exactly fun. Because of how it turned out, we like to look back and say, “Oh man, what an awesome experience,” but it was actually really frustrating.

One lunch I left to go to the gym and I was really looking forward to it, and the producer called me on the phone, and I had to go back. I had a John Cleese moment… you know, in Fawlty Towers, where he beats the car. People thought I was mad. Which I was. Avatar did that to you.

Since the completion of Avatar, James Knight has worked on both Real Steel and the upcoming The Amazing Spider-Man, and is now working on three undisclosed future projects.

Updated 25 January: In the original version of this story, we incorrectly stated that James Knight remains a producer for Giant Studios. Although Knight is a former employee of Giant, the principal performance capture provider on Avatar, he currently runs his own company.


Comments



  • _

    No mention of Giant Studios? Seems like James did Avatar all by himself !!!

    Did this author do any research on this or did he let James write the article for
    himself?

    Also I was closely involved with Real Steel from the first day of cap to final delivery
    and not once did I see James Knight….

  • http://www.cgchannel.com Jim Thacker

    As I mentioned in the introduction, the text is the highlights of a talk James Knight gave at CVMP, so after the part in italics, it’s all in his own words. (I’ve described it as an ‘interview’, as the session included a ten-minute audience Q&A, but it wasn’t a conventional one-on-one.) I did mention Giant Studios in the original text of the article, but I’d incorrectly stated that James Knight was still involved with the company, and I obviously didn’t do the update with much finesse last night! I’ll clarify that now.

  • x

There’s some conflict of interest going on in the story as well. James’ new company doesn’t use Giant Studios’ capture system; it uses a non-optical system, so the negative comments related to the optical system used on Avatar should be viewed with some suspicion: “when we snapped it in, it was always wrong”; “there would be occlusion and the system would get confused and start swapping markers, so we’d spend a lot of time [cleaning up] the data.”

  • Ron F

    I think James’ new company has used a number of different mocap technologies. It’s not about jamming every round peg job into a square hole solution. Giant’s system, and everyone else’s, has improved vastly since the days of the Zemeckis films, Alice and Avatar. It is still easier to add actors to a passive optical mocap volume (cheaper!) BUT there are tradeoffs related to occlusion, setup time, global accuracy, cleanup, etc. The best tool for the job varies; no one size fits all. Especially in feature film production.

  • Chris

Wow, this was really honest of James, speaking openly about how the production really was: most of the time a pain in the ass, and brutal.
    But he stayed and pulled through, and he continues working in the same area. ;)

What system is he using besides optical marker-based mocap? There is no real and better solution than that. Magnetic, sound waves? Come on.

I had hoped he would mention the outstanding effort the clean-up team tackled after each shoot, receiving selected takes that were close to unsolvable because of occlusion or whatever.

    This wrap up is a real pleasure to read if you work in the same area. :)

    Cheers,

    Chris

