Q&A: John Griffith, director, Cinedev

Monday, May 14th, 2012 | Posted by Jim Thacker

John Griffith has been working in visual effects for over a decade, cutting his teeth in previsualization on Star Wars: Episode III – Revenge of the Sith: a project he describes as “the roots of previz”. Having begun his career alongside the future founders of leading US visualisation studios such as The Third Floor and Halon Entertainment, he moved to 20th Century Fox in 2007, where he founded, and currently directs, its in-house previz wing, Cinedev.

We spoke to John about Cinedev’s unique approach to previsualization, the need to “close the feedback loop” with directors, the perils of photorealism, and his vision of a real-time previz process that can be fully integrated into the visual effects pipeline.

CG Channel: How did Cinedev come about?

John Griffith: I was hired here in 2007 to be part of the visual effects department: kind of an in-house supervisor for previsualization, if you will. I help in-house previz communicate with all the other departments, to make sure the quality is there and take some of the burden off the VFX supervisor.

Since I’m a studio employee, I don’t get screen credit – no studio employee does; it’s just the way it goes – so around the time of X-Men Origins: Wolverine, we came up with the term ‘Cinedev’ to make sure the artists working for me get proper credits. Technically, Cinedev is not a department: it’s a wing of visual effects. It’s more of a term that describes the work we do for the studio.

Cinedev is short for ‘cinematic development’. It’s a more all-encompassing term than previz. We dive into not only the look of the film, but the action and the storytelling.

CGC: How many full-time staff do you have?

JG: Right now, it’s just me and my coordinator, Eric Stewart. But when a show comes in, I ramp up. I bring in freelance artists I’ve worked with in the past, or I cherry-pick from the vendors like Halon Entertainment or Digital Domain. They come here to do the work, but they’re paid by the company they’re employed by.

I bring in people I know will do a good job, rather than just going to one particular vendor. I work lean and mean, so I have to get the best people I can find. For me, it’s about individual artists, not particular companies.

CGC: What would be a typical team size for a major movie?

JG: On Rise of the Planet of the Apes, it was 17 people. Two or three of the artists I hired to work for me were from Halon, two were from Persistence of Vision, and the majority were from Digital Domain.

We did seven major sequences, including the Golden Gate Bridge sequence, the first visit to Muir Woods, the sequence in which Caesar attacks the neighbour and bites his finger, the transition from him being an infant to being a young chimp, and the rocket attack sequence. The team did a ton of work in a short period of time, and it was really good work.

The Golden Gate sequence was done over a period of a month or two, in a series of iterations. In a week, we might do one version and get notes, and the next we’d do a new version: sometimes a complete redo, sometimes just fixing portions to make them work. That was a big sequence, though: over six minutes long.

CGC: What happened after that? Did you hand over to external previz vendors?

JG: After production moved to Vancouver, one of Pixel Liberation Front’s people was brought in to supervise previz on the film, because I had to stay here. By that point there were some minor sequences still to do, but a lot of the remaining work was post-viz on the Golden Gate sequence. MPC did some stuff, Image Engine may have done some stuff. There were a lot of companies working on it. But the majority of the work had already been done here.

http://cgchannelvideos.s3.amazonaws.com/tutorials/120514_Cinedev_ROTA_GGs.flv
The seven-minute Golden Gate sequence from Rise of the Planet of the Apes was completed by a team of 17 artists at Cinedev: Mike Comfort (lead artist), Carlos Pedroza, Vyacheslav Anishchenko, Kevin Williams, Steven Lo, Brian Burks, Ben Liu, Chad Hofteig, Kevin Aguirre, Brent Tyler, Joel Kittle, Gregg Lukomski, Andrew Moffett, Jeff Benoit, Stephan Pavelski, Juan Sanchez and Joe Henderson.

CGC: What makes a good previz artist?

JG: Because you’re outlining what can or can’t be done in animation, what can or can’t be shot, you have to understand more about the entire film-making process than an animator or modeller in a traditional pipeline.

I don’t think you can come straight out of school and jump right in to previz because you have to understand so many different disciplines: cinematography, how to cut shots together, what camera work goes with what camera work. Some people have more of an innate ability, but there’s always a training period involved.

CGC: What kinds of backgrounds do your artists come from originally?

JG: The best ones come from animation, because they have a good grasp of motion, of the dynamic it takes to create a shot. If the timing of a shot isn’t right, the camera isn’t going to work. For me, the best ones are the creature animators. Don’t get me wrong: Pixar does some really great work, but if you have a cartoony style, it doesn’t lend itself to the previz process.

I also use modellers and texture artists, particularly when we’re ramping up. It’s kind of a plug-and-play thing. I’m trying to adapt the process to allow me to be more flexible, so I can bring in a really talented modeller rather than a guy who has to model and animate. I’m trying to segment the pipeline.

CGC: So you’re mimicking the kind of specialisation you would find in the VFX pipeline as a whole?

JG: That’s my goal. It’s difficult to find really well-rounded people. There are only a few people I know who can take a shot, animate it, light it, do the camera work on it and have it look amazing. Usually, you have to have multiple people involved. So it makes more sense to break it into a more traditional visual effects pipeline, where I can use artists for their specific talents rather than having to rely on them to do everything.

http://cgchannelvideos.s3.amazonaws.com/tutorials/120514_Cinedev_ROTA_MWs.flv
Like most of the key effects sequences on the movie, Rise of the Planet of the Apes’ Muir Woods sequence was visualised at 20th Century Fox’s main lot before production moved to Vancouver.

CGC: What does your previz pipeline consist of?

JG: It’s hard to say there’s one specific way I work. But I like to start by taking the storyboards and cutting them into an animatic in Premiere. The artists are really good about doing everything digitally now, so the boards have usually been created in Photoshop, and are already layered.

I find the people who are looking at the storyboards closely at the beginning of pre-production are the director and certain key VFX people: the VFX supervisor and producer, people who are trying to budget the sequence out. But no one in the rest of the creative team is really looking at them. That’s where the animatic – or rather, the ‘board-o-matic’ – comes into its own.

Typically the director will have changes to make, because he’s never seen it in a linear form: he’s only seen it on paper. We can get way closer to his vision before we get into the actual 3D work, and that saves a lot of time.

Once the board-o-matic is refined, that’s my template for previz. There are some instances where you might not need to take things any further: for example, to communicate how a set needs to be designed, how big it needs to be, where the cameras are going to go. But usually, we’ll go right into Maya, bring in an artist to model and texture assets, set up environments and begin to animate.

CGC: How much do you use motion capture rather than animating by hand?

JG: I’ve integrated motion capture into the process over the past couple of years, and I don’t think I’ll ever go back to animating humans or anything with two legs by hand again.

I use the Xsens MVN inertial system. [Older optical] motion-capture systems were never practical for previz: there was a lot to clean up by hand, a lot of occlusion, and you don’t have the time to do that. You have to get the data, put it into the shot and show the result, sometimes within a day, and the only way I’ve found to do that is with a markerless system.

It’s not 100% accurate; it’s not something you’d put on final work. But it’s very cool, and there’s no clean-up.

CGC: You use the IKinema plug-in to stream motion-capture data directly into Maya, rather than going via MotionBuilder. Tell us about that.

JG: MotionBuilder will probably never be a part of my pipeline. Since Avatar, Autodesk has greatly improved MotionBuilder’s ability to communicate with Maya, so it’s much easier to go back and forth. But for me, MotionBuilder is not an animation tool, and for previz it’s like putting oven mitts on the animator’s hands. Setting keyframes in the way a traditional animator is used to is painful, and I don’t want to restrict my artists like that. Maya is where they’re comfortable, where they work, where everything goes, so if I base the pipeline around it, it benefits us all.

CGC: What about for rendering? Are you using one of Maya’s default render engines?

JG: I’m trying to advance the real-time part of previz, particularly real-time rendering. Maya has a great viewport renderer, but for me it’s not quite there yet. It’s great for playblasts – you get real-time shadows, ambient occlusion, all that good stuff – but when it comes to real-time work, the only thing that gives me a high enough frame rate to be able to use a virtual camera is a pure FK skeleton with just geometry on it: no blendshapes, nothing else. The minute you put constraints on the rig, Maya has to think about it, and that slows it down. Ultimately, I’m not able to use it in the way I’d like to as a final renderer, and that’s why I’m looking at other systems. There are patents involved, so I can’t talk too much about this, but the process will still involve Maya.

CGC: What level of visual quality are you aiming for? Is photorealism the ultimate goal?

JG: I’ve experimented with a lot of different styles of previz. On Percy Jackson & the Lightning Thief, we used a 2D style – it was actually done in 3D, but everything was textured by a storyboard artist, and had toon outlines. That allowed me to cut in boards with the previz and have everything flow seamlessly. It helped to cut down the time it took to create assets: all we had to do was texture the models with assets that already existed, and the models themselves could be simpler, less detailed.

I’ve been on the fence about high-quality previz versus lower-quality forms like grey-shaded or flat colours, because the more realistic you make something, the more there is to critique. I think it’s inevitable that it’s all going to get more realistic, but that can be a Catch-22 situation. In film production, you have to get previz done in a very short period of time, and the more information you put in that can distract from the point you’re trying to get across, the longer it’ll take.

http://cgchannelvideos.s3.amazonaws.com/tutorials/120514_Cinedev_Thief_Hydras.flv
Different visual styles suit different previz projects. This sequence from Percy Jackson & the Lightning Thief uses a cel-shaded look that enabled storyboards to be cut in seamlessly.

CGC: How do you think the previz process will change over the next ten years?

JG: For me, it’s all about motion capture, real-time rendering and virtual cameras. I’m constantly looking for any tool that will speed up the previz process. It’s about time, money and quality.

CGC: We’ve already talked about motion capture and real-time rendering. What about virtual cameras?

JG: For me, the virtual camera is the key component in being able to communicate with the director. Even if he’s just trying to fix a camera angle, if I can put a device in his hands that enables him to set just one keyframe, he can instantly communicate to me what he’s talking about, rather than having me move things around with the mouse – ‘No, not like that, a little bit more, a little bit more’.

I’m using a proprietary virtual camera. I don’t want to limit myself by having to be tied down to a certain location [as with off-the-shelf optical-capture-based solutions]. I don’t have to be locked into a stage with a ring of cameras and a bank of artists to run the solving software that goes along with it. The process has got to be the equivalent of an iPad.

CGC: That’s a pretty striking simile.

JG: Think of it like this. I’m a Windows guy: if you sit me down with a Linux system, I have no idea what I’m doing. But some guys do, they know how to work them really well – and that’s kind of what these expensive virtual production sets are like: you have guys who know how to work these tools that are very complex and work in a very specific way. But I have to have systems that are user-friendly. I have to be able to hand a director a device and not have to explain to him how it works. If I do, he’s going to be intimidated by it, and that gets in the way of the creative process.

When a director walks into the office, one of the most intimidating things he can see is a room full of artists he’s never met before staring back at him, to whom he has to pitch his ideas. He has to believe that these people are going to take what he’s telling them, digest it, and in a week, turn it around into something that matches his vision. There’s a lot of trust that goes along with that, because control is taken away from the film-maker.

My goal here is to prevent that from happening to the director, in much the same way James Cameron took over the virtual production on Avatar: he was able to take control back, make sure that what he was getting was his vision, his idea, his camera move. Instead of an animator, he had actors in suits. Instead of a previz artist creating a camera for him, he had a virtual camera. The more control you can give to a director, the shorter the feedback loop, the faster you can turn the work around and the happier everyone else is.

CGC: Apart from wider adoption of new technologies, do you have any other predictions?

JG: Virtual production and previz will blur together. At some point, previz will go all the way down the line to final-quality VFX. Maybe 10 or 20 years from now, it’ll all be one crazy, real-time process: real-time rendering, capturing all the data at once.

CGC: So the VFX department will actually use the assets you create?

JG: I think so. Any time you’re throwing something away, it’s kind of silly. There are always ways to reuse what you’ve done. Take the scene in Percy Jackson which shows the hydra in the Parthenon. If we were to take a final model from the person who designed the creature, and we had talented enough animators in our part of the pipeline – I’m talking final-quality animation – you could take that data and plug it straight into the final shot, especially if you post-vized it and tracked the live footage.

As it stands, we don’t have the luxury of time to do that, but if you were to get to that quality of animation, it could form the groundwork for the final VFX shot, even if it was only used as the blocking pass.

http://cgchannelvideos.s3.amazonaws.com/tutorials/120514_Cinedev_XMen_XJets.flv
Motion capture in use in previsualization. This fight sequence developed for X-Men: First Class was captured in two days, using six separate performers.

CGC: Do you think previz artists get enough recognition?

JG: People are always asking me that. But I’m a practical person, and I appreciate the job I have, especially with the way the economy is now. There are hundreds of people who work on a film, and many of them work without recognition, so for me to have an ego, and say I deserve it more than anybody else… let’s say that I get just enough recognition.

CGC: But it’s not just an ego thing. Surely it must help when directors are aware of what previz is and what it can do for them before they start work on a film?

JG: I’m still shocked how many people aren’t aware of previz. But we have to put ourselves in film-makers’ positions. Very few of them are working with previz on a major level. You’ve got Lucas and Spielberg and Cameron, but outside of that level, few directors have used it before.

On one of the films I was working on – this was towards the end of the process, when we were doing some pick-up stuff – there was a specific angle we had to achieve in a shot. The director came and sat by me, which he hadn’t really done before: it was the typical feedback loop, where he’d come in, pitch his ideas and leave. This time he got to see me setting up and establishing a shot, and all of a sudden a lightbulb went off in his head: ‘Oh, you mean you can move the camera around in 3D?’

And it suddenly dawned on me that people who haven’t done anything in 3D might not understand how the 2D images they’re seeing are created; that behind the fourth wall of the computer screen is a whole other world.

CGC: So how do you break down that barrier?

JG: You’ve got to build trust, and for me, that’s about giving the director back some control over the work, rather than listening to what they have to say and going: ‘Okay, go away for two weeks while I animate this.’ That’s why the idea of virtual production is so important. By putting a virtual camera in the director’s hand and giving them actors in mocap suits to direct, you’re breaking that fourth wall. Suddenly, they’re in our world and they can communicate with us.

Visit John Griffith’s Vimeo channel
