Tuesday, May 27th, 2014 Posted by Jim Thacker

Videos: the best of Siggraph 2014’s technical papers

ACM Siggraph has posted its annual preview reel showing the highlights of the technical papers due to be presented at Siggraph 2014, which takes place in Vancouver from 10 to 14 August.

Despite the jokey voice-over, this is research with practical applications. Futuristic they may be, but we can imagine using any one of these technologies in production in the next few years – well, almost.

Below, we’ve rounded up our own highlights, from advances in simulation to… um, pixie dust.

Image manipulation

http://www.youtube.com/watch?v=-2Xdziht4Gs
3D Object Manipulation in a Single Photograph using Stock 3D Models
Natasha Kholgade, Tomas Simon, Alexei Efros and Yaser Sheikh

Carnegie Mellon and UC Berkeley’s research into 3D-augmented image reconstruction plays out like a version of Photoshop’s Content-Aware Fill on steroids.

The technique relies on publicly available stock 3D models to help reconstruct the new surfaces and parts of the background exposed when repositioning an object within an existing photo.

The process leverages the structure and symmetry of the model to factor out the effects of illumination.
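
For a rough sense of how a model's symmetry can stand in for surfaces the camera never saw, here's a minimal Python sketch (a toy of our own, not the authors' algorithm) that simply mirrors colours sampled from the photo across the model's symmetry plane onto its hidden vertices.

```python
import numpy as np

def complete_hidden_appearance(verts, colors, visible, sym_normal=(1.0, 0.0, 0.0)):
    """Toy illustration: copy colours observed in the photo to their
    mirror-image vertices across a symmetry plane through the origin.

    verts   -- (N, 3) model vertices, roughly symmetric about the plane
    colors  -- (N, 3) colours sampled from the photo (garbage where hidden)
    visible -- (N,)   boolean mask of vertices the camera can actually see
    """
    n = np.asarray(sym_normal, dtype=float)
    n /= np.linalg.norm(n)
    mirrored = verts - 2.0 * (verts @ n)[:, None] * n[None, :]   # reflect every vertex

    out = colors.copy()
    for i in np.where(~visible)[0]:
        # nearest visible vertex to this hidden vertex's mirror image
        dists = np.linalg.norm(verts[visible] - mirrored[i], axis=1)
        out[i] = colors[visible][np.argmin(dists)]
    return out

# tiny example: two mirror-image points, one seen by the camera, one not
verts = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
colors = np.array([[0.8, 0.2, 0.2], [0.0, 0.0, 0.0]])
visible = np.array([True, False])
print(complete_hidden_appearance(verts, colors, visible))
```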

The results look good, at least in the test scenes, which range from moving a parked car to teeter on the edge of a cliff, to animating an origami bird to fly away from the hand on which it was originally balanced.

Materials, lighting and rendering


Discrete Stochastic Microfacet Models
Wenzel Jakob, Miloš Hašan, Ling-Qi Yan, Jason Lawrence, Ravi Ramamoorthi and Steve Marschner

Tools for rendering glittery surfaces took a couple of steps forward this year, with two related papers from a team drawn from several US research institutions: mainly Cornell and UC Berkeley.

You can see its work in action on surfaces ranging from metallic paint to snow in the video above.
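
The underlying idea is easy to caricature, even though the paper's real contribution is evaluating it efficiently. The brute-force Python sketch below (our own simplification, not the authors' hierarchical method) scatters discrete facets across a pixel footprint and counts the ones that happen to flash toward the camera.

```python
import numpy as np

rng = np.random.default_rng(0)

def glint_count(num_facets, half_vector, roughness=0.3, tolerance_deg=2.0):
    """Brute-force stand-in for a discrete microfacet query: generate
    `num_facets` random facet normals inside one pixel's footprint and
    count how many lie within `tolerance_deg` of the half vector, i.e.
    how many individual flakes glint toward the camera."""
    # crude rough-surface normals: perturb (0, 0, 1) and renormalise
    normals = np.column_stack([
        rng.normal(0.0, roughness, num_facets),
        rng.normal(0.0, roughness, num_facets),
        np.ones(num_facets),
    ])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)

    h = half_vector / np.linalg.norm(half_vector)
    cos_tol = np.cos(np.radians(tolerance_deg))
    return int(np.sum(normals @ h > cos_tol))

# one footprint with ~100,000 metallic flakes; half vector slightly off-normal
h = np.array([0.05, 0.0, 1.0])
print(glint_count(100_000, h))   # brightness ~ number of flakes that happen to align
```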

The paper’s lead author, Wenzel Jakob, develops the open-source, physically based Mitsuba renderer, raising the hope that an implementation may surface there soon.

Character posing


Tangible and Modular Input Device for Character Articulation
Alec Jacobson, Daniele Panozzo, Oliver Glauser, Cédric Pradalier, Otmar Hilliges and Olga Sorkine-Hornung

Mice and similar input devices aren’t always the most intuitive tools for posing 3D characters. ETH Zurich’s Interactive Geometry Lab took a more tactile approach, eroding the boundaries between 3D and stop-motion.

Assembled from modular, hot-pluggable parts, its build-it-yourself physical controller enables users to create armatures matching their characters’ skeletons and manipulate them to pose the 3D model.

Embedded sensors measure the device’s pose “at rates suitable for real-time editing and animation”. Is it time for live puppeteering to make a return to TV screens? On the evidence above, possibly so.
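
To picture the data flow, here's a minimal Python sketch (our assumption of how such a device might drive a rig, not ETH Zurich's actual software) in which joint angles read from the physical armature are pushed through simple 2D forward kinematics to pose a matching digital bone chain.

```python
import math

def pose_chain(joint_angles_deg, bone_lengths):
    """Toy 2D forward kinematics: turn the joint angles reported by a
    physical armature into world-space joint positions for a digital
    character's bone chain of the same proportions."""
    assert len(joint_angles_deg) == len(bone_lengths)
    x, y, heading = 0.0, 0.0, 0.0
    positions = [(x, y)]
    for angle, length in zip(joint_angles_deg, bone_lengths):
        heading += math.radians(angle)   # each sensor adds a relative rotation
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        positions.append((x, y))
    return positions

# pretend the hardware just reported these three joint rotations
sensor_readings = [30.0, -45.0, 10.0]   # degrees, hypothetical values
bones = [1.0, 0.8, 0.5]                 # matching the character's proportions
print(pose_chain(sensor_readings, bones))
```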

Motion synthesis


Online Motion Synthesis Using Sequential Monte Carlo
Perttu Hämäläinen, Sebastian Eriksson, Esa Tanskanen, Ville Kyrki and Jaakko Lehtinen

The idea of synthesising character movements on the fly, rather than relying on a library of animation clips, has been around in videogames for some time, but has always been costly and complex to implement.

Aalto University and Nvidia’s method simplifies the process, removing the need for a set of reference motions.

Unlike similar research presented at last year’s Siggraph Asia, it doesn’t require locomotion controllers to be precomputed, meaning that characters can adapt to the environment, or get up when they fall.

The system integrates into the Unity game engine – so again, a practical, publicly available implementation might not be that far away.
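
For a flavour of how sampling-based motion synthesis works, here's a toy Python sketch of a shooting-style planner steering a point mass towards a target. It's very much our own simplification rather than the paper's sequential Monte Carlo controller, but it shows the key property: motion emerges online from sampling and scoring, with no animation clips involved.

```python
import numpy as np

rng = np.random.default_rng(1)

def plan_step(state, target, n_samples=256, horizon=10, dt=0.1):
    """One step of a toy sampling-based controller: draw random force
    sequences, simulate each a short way into the future, score how close
    the point mass ends up to the target, and return the first force of
    the best-scoring sequence. Replanning every frame synthesises the
    motion online, with no reference clips."""
    pos, vel = state
    forces = rng.normal(0.0, 5.0, size=(n_samples, horizon))
    scores = np.empty(n_samples)
    for i in range(n_samples):
        p, v = pos, vel
        for f in forces[i]:
            v += f * dt
            p += v * dt
        scores[i] = -abs(p - target) - 0.1 * abs(v)   # end near the target, nearly at rest
    return forces[np.argmax(scores), 0]

pos, vel = 0.0, 0.0
for frame in range(60):
    force = plan_step((pos, vel), target=3.0)
    vel += force * 0.1
    pos += vel * 0.1
print(round(pos, 2))   # should end up somewhere near 3.0
```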


Generalizing Locomotion Style to New Animals with Inverse Optimal Regression
Kevin Wampler, Zoran Popović and Jovan Popović

Adobe Research and the University of Washington’s work on gait synthesis uses the motion of modern animals to predict how the dinosaurs might have moved.

The team took video footage of six birds and 12 quadrupeds, tracked it, reconstructed a simplified skeleton for each, then used the data to synthesize motion for extinct animals of varying skeletal proportions.
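
As a toy illustration of the regression step (ours, not the paper's inverse optimal control formulation), you could imagine fitting a simple model from skeletal proportions to gait parameters measured from the tracked footage, then querying it for an animal nobody has ever filmed:

```python
import numpy as np

# hypothetical training data: (leg length / body length) ratio versus measured
# stride frequency for a handful of tracked modern animals (made-up numbers)
leg_ratio = np.array([0.35, 0.50, 0.65, 0.80, 0.95])
stride_hz = np.array([3.1, 2.4, 1.9, 1.5, 1.2])

# least-squares fit of a simple linear model: stride_hz ~ a * leg_ratio + b
a, b = np.polyfit(leg_ratio, stride_hz, deg=1)

# predict a plausible stride frequency for an extinct animal's proportions
extinct_leg_ratio = 1.3
print(f"predicted stride frequency: {a * extinct_leg_ratio + b:.2f} Hz")
```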

Since no one knows how, say, a Triceratops or a Velociraptor actually walked, it’s difficult to say how accurate the results – seen at the end of the video – are, but they certainly feel believable.


Inverse-Foley Animation: Synchronizing Rigid-Body Motions to Sound
Timothy R. Langlois and Doug L. James

Motion synthesis of a rather more unusual kind is on show in the work coming out of Cornell University on ‘inverse-foley animation’, which synthesizes rigid-body animations to match the sound of objects hitting the floor.

The technique samples a database of precomputed motions to find ones matching the contact events in the audio, using motion blending and retiming to fine-tune the synchronisation.
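
Here's a much-simplified Python sketch of that matching step. It's our own toy version, which assumes contact times have already been extracted from the audio and that every precomputed clip stores its own contact times; clips are scored by how closely their impacts line up after a uniform retime.

```python
import numpy as np

def best_clip(audio_contacts, clips):
    """Pick the precomputed rigid-body clip whose contact times best match
    the contact times detected in the audio, allowing a uniform retiming.

    audio_contacts -- sorted contact times from the soundtrack (seconds)
    clips          -- dict of name -> sorted contact times of a simulated motion
    """
    audio = np.asarray(audio_contacts)
    best, best_err = None, np.inf
    for name, times in clips.items():
        t = np.asarray(times)
        if len(t) != len(audio):
            continue                                      # toy version: require the same impact count
        scale = (audio[-1] - audio[0]) / (t[-1] - t[0])   # uniform retime factor
        retimed = audio[0] + (t - t[0]) * scale
        err = np.mean((retimed - audio) ** 2)
        if err < best_err:
            best, best_err = name, err
    return best, best_err

audio_contacts = [0.00, 0.42, 0.70, 0.85]   # e.g. a dropped object bouncing to rest
clips = {
    "bounce_a": [0.00, 0.50, 0.82, 1.00],
    "bounce_b": [0.00, 0.30, 0.75, 0.90],
}
print(best_clip(audio_contacts, clips))
```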

We aren’t sure how often anyone needs to work from the foley effects to the VFX in production, but the system apparently scales to “millions of contact events”, so the potential is there for, say, a large-scale destruction shot.

Physics simulations


Adaptive Tearing and Cracking of Thin Sheets
Tobias Pfaff, Rahul Narain, Juan Miguel de Joya and James F. O’Brien

Simulating thin sheets of material was a fertile area at Siggraph 2013. One of the key papers came from a team at UC Berkeley, which this year has moved on from crumpling those sheets to tearing them.

The video above shows their method, which dynamically refines the mesh at points where cracks are likely to propagate, in action on a range of material types: from paper to metal plate, via wood, glass and foil.
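
The refinement idea is simple to caricature in code. The Python below (a toy of our own, not the paper's remeshing scheme) just splits any triangle whose stress estimate crosses a threshold, so detail appears only where a crack is likely to start.

```python
def refine_where_stressed(triangles, stress, threshold=1.0):
    """Toy adaptive refinement: split high-stress triangles into three by
    inserting their centroid, leaving calm regions of the sheet coarse.

    triangles -- list of three (x, y, z) vertex tuples per triangle
    stress    -- one scalar stress estimate per triangle
    """
    refined = []
    for tri, s in zip(triangles, stress):
        if s < threshold:
            refined.append(tri)               # coarse is fine where nothing tears
            continue
        a, b, c = tri
        centroid = tuple((a[i] + b[i] + c[i]) / 3.0 for i in range(3))
        refined += [(a, b, centroid), (b, c, centroid), (c, a, centroid)]
    return refined

sheet = [((0, 0, 0), (1, 0, 0), (0, 1, 0)),
         ((1, 0, 0), (1, 1, 0), (0, 1, 0))]
stress = [0.2, 3.5]                            # the second triangle is about to crack
print(len(refine_where_stressed(sheet, stress)))   # 1 coarse + 3 refined = 4
```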

Melting and freezing

Another highlight of last year’s reel came from a team at UCLA and Walt Disney Animation Studios. The video of its research into snow simulation wasn’t online at the time of the show, but it proved to be worth waiting for.

Sadly, this year’s follow-up, extending the material point method to changes of phase, isn’t online yet either – but the clip in the preview reel, which shows melting chocolate bunnies, is tantalising.
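
The phase-change part, at least, is easy to caricature: track a temperature per particle and swap its material response once it crosses the melting point. The Python below is a toy of our own along those lines, not the UCLA/Disney solver.

```python
MELT_POINT = 30.0   # degrees C, hypothetical chocolate-ish value

def update_particle(temp, heat_in, dt, stiffness_solid=1e5, stiffness_fluid=0.0):
    """Toy phase change for one simulation particle: integrate temperature,
    then pick constitutive parameters for the solid or fluid regime. A real
    material point method would feed these into the particle-to-grid
    transfer and stress update; here we just return them."""
    temp += heat_in * dt
    if temp >= MELT_POINT:
        phase, stiffness = "fluid", stiffness_fluid   # melted: flows, no shear strength
    else:
        phase, stiffness = "solid", stiffness_solid   # frozen: resists deformation
    return temp, phase, stiffness

temp = 20.0
for step in range(200):
    temp, phase, stiffness = update_particle(temp, heat_in=0.1, dt=1.0)
print(phase, round(temp, 1))   # the bunny has melted by now
```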

Pixie dust


Pixie Dust: Graphics Generated by Levitated and Animated Objects in [a] Computational Acoustic-Potential Field
Yoichi Ochiai, Takayuki Hoshi and Jun Rekimoto

But it wouldn’t be a Siggraph tech papers preview without at least one bit of blue-sky thinking, preferably one relating to a novel and highly impractical display technology.

Pixie Dust: Graphics Generated by Levitated and Animated Objects in [a] Computational Acoustic-Potential Field checks all of those boxes, with the added bonus that it takes its name from Peter Pan.

It uses the standing waves generated by an array of sound sources to levitate small objects, like dust particles.
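
For the physics-minded, here's a tiny Python sketch of why standing waves can trap particles at all: two facing ultrasonic sources produce a pattern of pressure nodes spaced half a wavelength apart, and small particles settle at those nodes. The numbers below are our own, chosen only for illustration.

```python
import numpy as np

SPEED_OF_SOUND = 343.0     # m/s in air
FREQ = 40_000.0            # Hz, a typical ultrasonic transducer
WAVELENGTH = SPEED_OF_SOUND / FREQ

def standing_wave_amplitude(x, gap=0.1):
    """Pressure amplitude between two facing sources `gap` metres apart,
    driven in phase: the counter-propagating waves sum to a standing wave,
    and light particles collect near its low-pressure nodes."""
    k = 2.0 * np.pi / WAVELENGTH
    return np.abs(np.cos(k * x) + np.cos(k * (gap - x)))

x = np.linspace(0.0, 0.1, 20001)
amp = standing_wave_amplitude(x)
is_node = (amp[1:-1] < amp[:-2]) & (amp[1:-1] < amp[2:])   # local minima = pressure nodes
nodes = x[1:-1][is_node]
print(f"wavelength: {WAVELENGTH * 1000:.2f} mm, node spacing: {(nodes[1] - nodes[0]) * 1000:.2f} mm")
```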

As you might imagine, the fidelity isn’t exactly high, but it manages a recognisable replica of the Siggraph logo – and seeing the thing hanging there in mid-air is pretty neat.

Read the full line-up of technical papers at Siggraph 2014 on the conference website