Behind the Scenes with Digital Domain for G.I. Joe: The Rise of Cobra

Thursday, October 1st, 2009 | Article by Matt McCorkell and Robert Nelms (Video)

In our latest behind-the-scenes feature, we talk with Digital Domain about the extensive work done for the effects-heavy G.I. Joe: The Rise of Cobra.

Interview with Brian Grill, Visual Effects Supervisor for Digital Domain on G.I. Joe

We came in to work on two sequences: the convoy sequence, and the Paris chase sequence that ends with the Eiffel Tower being eaten by nanomites.

Did you work with the other effects houses involved in the making of G.I. Joe?

There were a lot of effects that were similar across the facilities involved. Early on, Boyd Shermis, the overall Visual Effects Supervisor, knew that a couple of facilities would be sharing work; Frantic was also working on some of the nanomite effect shots that Digital Domain was involved with. Digital Domain uses a proprietary system, so we couldn't just hand it across the board. We don't use the same software, so it came down to us coming up with the best-looking effect on our end, which anyone else could then match. We were really fortunate that Boyd liked the direction we were going. Frantic's shots were similar enough, and had the qualities the director liked, so it really worked out.

What were some of the more difficult challenges?

Well, for the accelerator suits the biggest challenge was making sure that our digital suit matched the practical suits that Shane Mahan at Legacy had made. A couple of weeks after shooting in Prague, we came back and were able to do a side-by-side comparison of our CG character (Marlon) running next to the real Marlon. The big part was making it look real. It became very important to preserve the characteristics of our actors, even when they were CG.

A lot of movies have been using actors in front of green screens so that they can create their own environments. Did a lot of the action in G.I. Joe take place on a virtual set?

We definitely had our share of shots over blue. A lot of the shots were done in Prague, and you would be surprised how much of them we had to replace. You could almost call a lot of the shots we did virtual, because on any shot there would be between five and nine cameras running: cameras on motorcycles and cars, a remote-control camera plane, and locked-off cameras. Even after you got the shot, you would still have to clean all the other cameras out. Pretty much every shot had to be rebuilt. We also knew we were going to have to do a lot of cleanup to change Prague into Paris.

We had gigs and gigs of photography, plus LIDAR that was supplied to us, and we literally rebuilt almost every shot: background removal to get rid of cameras or cables, or because we had been on a certain street too many times and had to change the environment into a different street. So I would consider most of the shots virtual, in a sense.

Can you tell us about some of the dynamics that went into creating certain shots?

Pretty much everything was an RBD (rigid body dynamics) simulation. For the scene where the Eiffel Tower comes down, the supervisors decided on a cloth simulation, because no one really knew what would happen if the Eiffel Tower was eaten. We talked to a couple of engineers to get a view of how it would happen, but ultimately it's a Hollywood movie, and what we needed was to make it look cool. In the convoy sequence, for the concussion cannon, we used fluid dynamics simulations. Once we had the simulations, we could apply them to multiple effects to move toward a more organic result. There is a scene on a rooftop where the Baroness fires a concussion gun and the glass roof buckles, shatters, and forms into the shape of a glass wave. I think those types of dynamics are really important for storytelling, because you want to see something that is recognizable; when the glass shatters and comes at you like a wave, the first thing that comes into your mind is, "Whoa, it's a wave of glass!" Hitting those notes matters, because with quick cuts some shots only last four seconds, and you need to get the full impact from them.
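As a rough illustration of driving several elements from one simulation, here is a minimal Python sketch (not Digital Domain's proprietary toolset): a single cached velocity field advects both glass and dust particles, with only a per-effect drag term distinguishing them, which is what lends separate layers a shared, organic motion.

import numpy as np

def sample_velocity(grid, origin, cell, p):
    """Nearest-neighbor lookup of a velocity grid at world position p."""
    i, j, k = np.clip(((p - origin) / cell).astype(int), 0, np.array(grid.shape[:3]) - 1)
    return grid[i, j, k]

def advect(points, grid, origin, cell, dt, drag):
    """Push particles through the shared velocity field; drag differs per effect."""
    vel = np.array([sample_velocity(grid, origin, cell, p) for p in points])
    return points + dt * drag * vel

# One concussion-blast field, two effects layers with different responses.
rng = np.random.default_rng(0)
field = rng.normal(0, 1, (32, 32, 32, 3))          # stand-in for a cached sim
origin, cell, dt = np.zeros(3), 1.0, 1.0 / 24.0
glass = advect(rng.uniform(0, 32, (500, 3)), field, origin, cell, dt, drag=1.0)
dust  = advect(rng.uniform(0, 32, (500, 3)), field, origin, cell, dt, drag=0.3)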

Introducing Darren Hendler, Digital Effects Supervisor on G.I. Joe

What were some of the challenges you encountered on G.I. Joe?

Well, I think the biggest challenge on G.I. Joe was the Eiffel Tower; technically, it was a huge challenge for us. When we were shooting, there weren't really any references for how the accelerator suits would move, which dictates the speed of the camera and how fast you pan. We ended up having to give the animators full control of the camera, which you don't want to do when you are working with live-action plates. While they were animating, they were able to move the cameras off the plates, so we decided we would simply rebuild whatever we saw was missing. It was very important for us to get the timing and the feel of the animation correct; if that didn't work, the shots didn't work together at all. On top of that, we did a lot of work on continuity between shots. We would have a cut from one plate to another plate shot a month apart in a different country, and we had to make sure the same cars and the same buildings were in both plates, keeping the continuity consistent between them.

Can you tell us about the Eiffel Tower effects work that we saw in the trailer?

At the end of the sequence, the Eiffel Tower gets eaten away by these things called nanomites, which are small nano-machines that eat through metal. So the Eiffel Tower is getting eaten away, falls over, and crashes into the river. We had a whole series of shots, from distant views to close-ups, so it was hard to plan for any one viewing distance; basically we had to account for everything. The first thing we did was look at the Eiffel Tower and how it was built. We got hold of an original set of plans and started studying them. There is an amazing amount of detail in the Eiffel Tower, right down to the rivets and struts. We tried to break it down and figure out a procedural way of actually building it. There are maybe 11,000 struts, and we didn't want to go through and model them all by hand, so we came up with a whole series of procedural techniques that allowed us to build it much more efficiently, and that on the back end let us get a lot of the effects work essentially for free. By the time the Eiffel Tower was built, it was so heavy we couldn't load it as a single file, so we had to keep several scene files with different sections. The tower only really came together at the end, when it was all put together.
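To picture the procedural build, here is a small, purely hypothetical Python sketch (the names and file layout are illustrative assumptions, not DD's pipeline): a handful of kit pieces are modeled once, the tower is stored as lightweight instance records, and the records are split across per-section files so no single file holds all ~11,000 struts.

import json
from dataclasses import dataclass

@dataclass
class StrutInstance:
    kit_piece: str                     # e.g. "girder_A", "rivet_plate_B"
    transform: list                    # 4x4 matrix flattened to 16 floats

def write_section(path, instances):
    """Persist one tower section as instance records, not heavy geometry."""
    with open(path, "w") as f:
        json.dump([vars(i) for i in instances], f)

# Four sections, assembled only at render or simulation time.
sections = {"base": [], "mid": [], "upper": [], "spire": []}
identity = [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1]
sections["base"].append(StrutInstance("girder_A", identity))
for name, inst in sections.items():
    write_section(f"eiffel_{name}.json", inst)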

Can you tell us about the procedural system that was used in building the Eiffel Tower?

Pretty much. We figured out the limited set of items we would use to build the Eiffel Tower; basically, we came up with an Eiffel Tower tool kit. We had the modelers go through and place the various items: class A over here, class B over there, and so on, still keeping the exact look of the Eiffel Tower while limiting the number of individual pieces as much as possible. Then, for the shots, we had a hybrid approach: hand animation from the animation department as well as simulation. Different shots required different things. Some were art-directed, and we needed an animator to go in and get the look and feel we were after. In other shots we wanted the look and feel of an elastic deformation, so we used cloth setups to get the base motion for the Eiffel Tower. In the pipeline, an animator or an effects artist would block out the base Eiffel Tower in a very crude form, basically just the major large struts; we would preview that and see if it looked about right, then run it through another five or six processes before we saw the final result. What made it extremely complex was not just that we were deforming, bending, and breaking the Eiffel Tower, but that it was being eaten away. Very early on, our effects department came up with a system and showed us some tests of the Eiffel Tower being eroded. It was really exciting; we all looked at it and thought, you know, this is going to be great. Then it turned out to be harder than any of us realized. As the tower gets eaten away, it leaves little bits and pieces behind. Those bits need to recognize automatically that they are no longer attached to the main structure and start falling, times 11,000 struts and ten million polygons. So we had to build a whole system where, as the Eiffel Tower is eaten away, each item recognizes, "hey, I've suddenly become detached from the remainder of the building," and starts falling off, bending, and breaking. We developed an automated system to do all of that, but it definitely took quite a while to run. We would have an animator run a pass on the full geometry and see what that looked like, then run a pass of the erosion and see what that looked like, then run the whole system so all the pieces would start breaking and falling off.
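The detachment logic he describes is essentially a connectivity problem. A hedged sketch, assuming a simple strut adjacency graph rather than DD's actual system: remove the eaten struts, flood-fill from the ground-anchored ones, and hand anything unreached over to the rigid-body solver.

from collections import deque

def find_detached(adjacency, anchored, eaten):
    """Return the set of surviving struts that lost their path to the ground."""
    alive = set(adjacency) - eaten
    reached = set()
    queue = deque(a for a in anchored if a in alive)
    while queue:
        s = queue.popleft()
        if s in reached:
            continue
        reached.add(s)
        queue.extend(n for n in adjacency[s] if n in alive and n not in reached)
    return alive - reached

# Tiny example: strut 0 is anchored; eating strut 1 strands struts 2 and 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(find_detached(adj, anchored={0}, eaten={1}))   # -> {2, 3}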

Can you tell us more about the erosion system?

For the erosion system, we distributed nano points across the surfaces and ran them through a system we built in Houdini. We would figure out where they were eating; they would jump around and follow pre-programmed behavior. That was all set up by our effects lead, Thomas Reckon. After that we ran passes to erode all the pieces, figuring out, based on their UVs, where each one was going to be eaten away. The point is that as a piece gets eroded, you want to get back to solid geometry, so we had to subdivide the surfaces that were going to be eaten away, remove them, and reveal a solid surface behind. After that you could run a few more passes based on the objects' UVs, figure out which sections had come detached, and separate them into an RBD system so they could drop and bounce off the Eiffel Tower. It was a pretty complex system, even by our standards. One of the other systems that gave us a lot of complex motion was the truss system, which meant we did not have to animate all the individual trusses. We built a system that, based on the tension at either end of a truss, would automatically and procedurally snap it, break it, shear it, or have pieces fall off it. You basically animate the whole tower, and all the trusses follow along, breaking when appropriate; then you fine-tune the settings for how many things you want to snap or break off. For the most part we did not have to worry about animating each individual piece, other than in specific hero shots.
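The tension-driven truss behavior can be pictured as a simple strain test. This sketch uses assumed names and an arbitrary break threshold; the real system presumably handled shearing and partial breaks as well: compare each truss's animated end-to-end length against its rest length and flag it to snap once the strain passes the threshold.

import numpy as np

def truss_strain(p0, p1, rest_length):
    """Relative stretch (+) or compression (-) of a truss between two ends."""
    return (np.linalg.norm(p1 - p0) - rest_length) / rest_length

def snapped(trusses, positions, threshold=0.08):
    """Indices of trusses whose strain exceeds the break threshold."""
    return [i for i, (a, b, rest) in enumerate(trusses)
            if abs(truss_strain(positions[a], positions[b], rest)) > threshold]

# Two trusses; the animated pose stretches the second past the threshold.
positions = {0: np.array([0.0, 0, 0]), 1: np.array([1.02, 0, 0]),
             2: np.array([2.3, 0, 0])}
trusses = [(0, 1, 1.0), (1, 2, 1.0)]   # (end_a, end_b, rest_length)
print(snapped(trusses, positions))      # -> [1]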

How long did the Eiffel Tower scene take?

It started at the beginning and finished at the end. We worked on it solidly from the first day, and some of the last shots we delivered were Eiffel Tower shots. Added to that, the Eiffel Tower crashes into a body of water, so we had to simulate it crashing through a bridge and into the water. The Eiffel Tower was especially hard because we didn't know at what distance we would be viewing it. Sometimes we were up really close, and other times at a distance, so it was hard for us to plan on one approach or the other. We had to build levels of detail, knowing we could be up close or far away at any time.

For so much work, how did you handle the rendering?

We actually did a lot of distributed rendering and broke everything into a lot of passes. We would have an animator go through, have the simulation guys go through, get a preview of the truss work, and ask: does that look right? Then we would preview the erosion: "Oh, wait a moment, it looks like it's eroded here, but it should be falling in this direction; go back and reanimate." So we had levels of feedback and went back and forth. It was very difficult; there was quite a long turnaround time between the animation scene and the whole thing coming together. Often, once you sewed all the pieces together, you would go back and say this animation needs to change a little bit over here, or the simulation process needs to change.

Was it easier to go back and change parts of the animation?

Yes, definitely. The eating-away systems got approved very quickly, because we could do previews of the way a static version was being eaten. We had different teams working in parallel so we could get milestone approvals before the whole system came together, but it definitely took a lot of faith on everyone's side to be able to say: I see this section here, this section here, and a third one over here, and I can see how it will all come together in the end.

Did the team handle water effects too?

They did, yeah. We did custom in-house fluid setups for the simulation of the river, and we used our volumetric system to create all the splashing events and those kinds of things. In the end that turned out to be easier than the main Eiffel Tower itself.

By comparison, the suits might have been a cakewalk?

The technology involved was not as complex, but the sheer number of shots added a lot of complexity. Our environment team was rebuilding so many plates that we didn't know at the beginning how much work we would be doing. One of the things that made our lives much easier was doing a lot of our environment and set-extension work in Nuke. We built our geometry in Nuke, so we could hand our setups off to the compositors with reflections and everything already set up in their package. We could change cameras and do passes very easily, and it would update very quickly; we didn't have to run it back through 3D.
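For readers unfamiliar with the technique, what he describes resembles Nuke's standard camera-projection workflow. The following is a hedged sketch only: it must run inside Nuke's Python interpreter, node class names (Camera2, ReadGeo2, Project3D2) vary by Nuke version, and the file paths and node graph are hypothetical, not DD's actual setup.

import nuke

hdr  = nuke.nodes.Read(file="plates/paris_street.exr")     # projected texture
cam  = nuke.nodes.Camera2(name="projection_cam")           # matchmoved camera
proj = nuke.nodes.Project3D2()
proj.setInput(0, hdr)                                      # image to project
proj.setInput(1, cam)

geo = nuke.nodes.ReadGeo2(file="lidar/paris_street.obj")   # LIDAR-derived geo
geo.setInput(0, proj)                                      # projection as shader

render_cam = nuke.nodes.Camera2(name="shot_cam")
scene = nuke.nodes.Scene()
scene.setInput(0, geo)

render = nuke.nodes.ScanlineRender()
render.setInput(1, scene)                                  # obj/scene input
render.setInput(2, render_cam)                             # render camera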

So you could usually do the camera changes inside Nuke?

Well, a camera change would still affect the accelerator suit and the main renders, but we could do a lot in Nuke. If there was a camera change, we wouldn't need to run it through the 3D department for the environment builds; we could easily remap the content or change the colors in Nuke. One of the other challenges with the accelerator suits, besides all the environment work that Geoff Baumann and I did, was integrating the suits into the plates. They had to look real, cutting from practical suits to CG suits, so they had to match every time. Hanzhi Tang, our lighting lead, did a lot of work on our lighting. We used a system developed at Digital Domain before, I think by Paul Lambert. While we were shooting, we took a lot of high-dynamic-range photography of the sets, and back here we projected those HDRs onto the geometry we had put together. We would use that to regenerate new HDRs for whatever locations we needed, to create running HDRs, or simply to ray trace against the environment and use that for our lighting. That way we could make sure our lighting integrated nicely into the scene. The accelerator suits are pretty reflective as they run; you see the buildings reflected in the visor and in the suit. You get lighting changes based on the suit's location, and a suit could move several hundred feet in a shot, or even more in some cases. You need those changes in lighting for the suit to sit into the plate.
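The underlying technique, image-based lighting from captured HDRs, can be sketched in a few lines of Python. This is a generic illustration, not DD's in-house system: a world-space direction maps to a pixel in a lat-long HDR, and swapping HDRs per location approximates the "running" lighting he mentions.

import numpy as np

def sample_hdr(hdr, direction):
    """Look up radiance for a unit direction in a lat-long (equirectangular) HDR."""
    d = direction / np.linalg.norm(direction)
    u = 0.5 + np.arctan2(d[0], -d[2]) / (2 * np.pi)   # longitude -> U
    v = 0.5 - np.arcsin(d[1]) / np.pi                 # latitude  -> V
    h, w, _ = hdr.shape
    return hdr[min(int(v * h), h - 1), min(int(u * w), w - 1)]

# As the suit moves down the street, switch between HDRs captured (or
# regenerated from projections) at different locations for changing lighting.
hdr_start = np.ones((512, 1024, 3), dtype=np.float32)         # placeholder data
hdr_end   = np.full((512, 1024, 3), 4.0, dtype=np.float32)    # brighter block
up = np.array([0.0, 1.0, 0.0])
print(sample_hdr(hdr_start, up), sample_hdr(hdr_end, up))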

So for most shots the environment builds were fairly simple because of the LIDAR and the projected HDR images, and the trick is the compositor's ability to work with that light?

Yes, it was nice to use the HDRs to create a floating-point environment, in which case you keep the intensity of all your lights and shadows; you have a lot of dynamic range to work with. Because we were relying so heavily on HDRs, we also built a setup in Nuke where we could modify the HDRs and preview in real time what effect they would have on our suit. It had the suit in a reference position that a compositor could actually sit into the scene. That would get published and become the first pass the lighters would use. Very often you take your HDR and there is a giant cherry picker in the scene, and when you put your suit in, it turns bright red and doesn't fit into the plate at all. Our compositor would have to paint it out or grade it out, and then we could take a look and see what the result would be. It gets to the point where you finally get a match, hand it off to lighting, and lighting takes it from there. We definitely did a lot of work with CG lighting, but those HDRs gave us some really good places to start from.
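The cherry-picker fix he describes amounts to neutralizing a region of the HDR before it drives the lighting. A minimal sketch, assuming a hand-marked rectangle and a plain median fill (DD's interactive version lived inside Nuke):

import numpy as np

def patch_region(hdr, y0, y1, x0, x1, border=8):
    """Replace a rectangular HDR region with the median of a surrounding ring."""
    top    = hdr[max(y0 - border, 0):y0, x0:x1]
    bottom = hdr[y1:y1 + border, x0:x1]
    left   = hdr[y0:y1, max(x0 - border, 0):x0]
    right  = hdr[y0:y1, x1:x1 + border]
    ring = np.concatenate([part.reshape(-1, 3) for part in (top, bottom, left, right)])
    out = hdr.copy()
    out[y0:y1, x0:x1] = np.median(ring, axis=0)
    return out

hdr = np.ones((512, 1024, 3), dtype=np.float32)
hdr[200:260, 300:420] = [40.0, 5.0, 5.0]       # stand-in for a red cherry picker
clean = patch_region(hdr, 200, 260, 300, 420)
print(hdr[230, 350], clean[230, 350])          # hot red pixel vs. neutral fill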

It seems like once you have established that base with the lighters, all they have to do is adjust it where they want?

Pretty much; it gives them a pretty good base to start from. And we had the suits on set for reference, which was fantastic, because you could see exactly what you were matching to. But on a lot of shots the suits weren't there on set, so you find ways to make the suit look better, ways to make those HDRs match the environment. They matched pretty well, but an HDR doesn't have all the niceties you want; it doesn't capture what the DP would have done at the time if the object had been on set that day, so there was definitely quite a lot of work for lighting. At the beginning of the movie we worked with some different scanning techniques, using two different companies, one of them being XYZRGB. XYZRGB scanned the suits and all the body doubles, and they used a portable setup, which was pretty nice because they could take it to the location where we were shooting; the actors didn't have to move across town or go somewhere else to get scanned. Their system used projected light and stereo reconstruction, and they were able to get nice scans of the suits, and of the doubles, which is normally pretty hard to do. They would also remove all the pieces and scan them separately, which made sure our CG suit fit and articulated the same way as it did at the time. The second company was called the Guru, and they basically licensed Light Stage technology: a giant dome of lights, which we used to do all the facial scanning. It produces a very dense facial mesh, as well as a diffuse texture that helps us derive some properties of the skin for our skin shading.

How is it different from facial motion capture? I believe the scanning system is a markerless technology. It sounds like it really helped out a lot?

It gave us a really good starting point: really nice, high-quality facial scans to start from. Normally we would have to do a high-quality facial cast, which is generally more expensive; and when you do a facial cast of an actor, it changes their face shape, so you then find yourself fighting to get back to the original face shape. This allowed us to capture many highly detailed facial poses very quickly. Just having the system portable, and going to where the actors were, made it so much easier to get data. We didn't have to work around the actors' schedules as much; it is often very hard to schedule time with them. While we were there, they could come in on a twenty-minute break, get scanned, and get back to work.

Where do we see the virtual faces in the film?

We didn't see as much of the faces as we originally thought; the characters do a lot behind their visors, though there are a few shots of hero facial work. It's definitely not the focal point of the movie by any means. That was nice, because we were able to spend less time on the faces and focus our attention on the suits. The scans gave us nice surface detail and a pass of diffuse color data; it was a pretty good starting point. We also took our own additional photography: we set up a full polarized-light texture shoot and ran all the actors through it, which gave us high-quality, more diffuse images, though we definitely used the other images as a base as well. What I also want to say is that throughout the chase sequences, the other areas proved quite complex in terms of effects integration. Our effects team, led by Phillip Crawl, did a lot of work integrating the suits into the environment: once a suit matches the lighting and sits in, the debris kicks in, you have dust, we had so many shots of shattering glass and sparks, and refining all those effects layers along with the suit is what made the whole thing feel like it was coming together. A suit crashes through glass, and we also did a ton of effects work on cars being destroyed. Originally we thought we would be shooting most of those, and we did shoot a lot of it on set, but for various reasons and time issues they often weren't able to use the footage; the camera would change, so they would end up using it as reference. It was fantastic reference. But recreating it with a changed camera, destroying the car, ripping off pieces, shattering windows, proved to be a lot of work.

So they destroyed the car in real life, but the camera position changed, so you had to reproduce the whole scene?

We rebuilt the entire plate, recreating in CG a lot of the practical effects they did on set. It's really hard at shoot time to know where the cameras should be and what the timing should be, and once they see the piece in the edit, they may want to make changes. The practical work was some of the nicest we'd seen, and it got used in a lot of shots, but there were just too many shots where we couldn't use it, had to rebuild it, or could only use sections of it. Sometimes we would use elements from it in a CG version. We treated each one case by case.
