Wednesday, May 22nd, 2013 Posted by Jim Thacker

Videos: the best of Siggraph 2013’s technical papers

ACM Siggraph has posted its annual preview reel showing the highlights of the technical papers due to be presented at Siggraph 2013, which takes place in Anaheim from 21 to 25 July.

As ever, it’s a fascinating mix of the practical but prosaic, the impractical but cute, and all points in between. And as ever, there are some real gems. Below, we’ve rounded up our own favourites.

Better forms of simulation
Research into new simulation techniques often generates some of Siggraph’s most appealing demo videos, and this year is no exception – which makes it all the more galling that not all of them are directly embeddable.

In the ‘ridiculously specialised’ camp, simulation of paper has been advanced by not one but two new techniques. Ohio State University’s Adaptive Fracture Simulation of Multi-layered Thin Plates (above) covers what happens when you tear it; while UC Berkeley’s Folding and Crumpling Adaptive Sheets handles the rest.

The latter does have a demo movie, but you’ll have to follow the link and download it for yourselves.

Sadly, we couldn’t find a video for UCLA and Disney’s work on A Material-Point Method for Snow Simulation. But you can see clips in the highlights reel at the top of this post – and the results really do look like wet snow.

Less immediately eye-catching – but probably more commercially applicable – simulation techniques on show include Highly Adaptive Liquid Simulations on Tetrahedral Meshes.

Put together by a team that includes Nils Thuerey of Scanline VFX, the paper sets out faster ways to create complex liquid motion by dynamically adjusting the resolution of the simulation.
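If you're curious how that might work in practice, here's a rough sketch of the general idea – spend resolution only near the liquid surface – written as a few lines of Python. The distance-based sizing rule and the numbers are our own illustrative assumptions, not the tetrahedral remeshing scheme the paper actually describes:

```python
# Toy illustration of spatially adaptive simulation resolution (not the
# paper's tetrahedral method): samples near the liquid surface get fine
# cells, samples deep inside or far outside the liquid stay coarse.
import numpy as np

def target_cell_size(signed_distance, fine=0.01, coarse=0.16):
    """Pick a cell size per sample from its distance to the liquid surface."""
    # Blend from 'fine' at the surface to 'coarse' a few coarse cells away.
    t = np.clip(np.abs(signed_distance) / (4 * coarse), 0.0, 1.0)
    return fine + t * (coarse - fine)

# Fake signed-distance samples along a line crossing the surface at x = 0.5.
x = np.linspace(0.0, 1.0, 11)
phi = x - 0.5                      # stand-in for a level-set value
for xi, size in zip(x, target_cell_size(phi)):
    print(f"x = {xi:.2f}  ->  cell size {size:.3f}")
```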

Theodore Kim and John Delaney’s paper on Subspace Fluid Re-simulation explores similar territory, looking at ways of iterating quickly on very high-resolution simulations.

The paper shows “how to analyze the results of an existing high-resolution simulation, discover an efficient reduced approximation, and use it to quickly ‘re-simulate’ novel variations of the original dynamics.”
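In spirit, that is classic reduced-order modelling: pull a compact basis out of snapshots of the full simulation, then time-step in that much smaller subspace. The toy Python sketch below shows that general recipe on a stand-in linear system – it isn't Kim and Delaney's actual method, and everything in it is our own illustrative assumption:

```python
# Generic reduced-order 're-simulation' sketch: build a basis from snapshots
# of a full-resolution run, then time-step in the low-dimensional subspace.
import numpy as np

rng = np.random.default_rng(0)
n, steps, r = 200, 60, 8            # full dimension, snapshots, reduced rank

# Toy stand-in for the full simulator: a stable linear update x <- A @ x.
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
A /= np.linalg.norm(A, 2)           # keep the dynamics stable

# 1. Run the expensive 'high-res' simulation once and store snapshots.
x = rng.standard_normal(n)
snapshots = []
for _ in range(steps):
    x = A @ x
    snapshots.append(x.copy())
S = np.stack(snapshots, axis=1)     # n x steps snapshot matrix

# 2. Discover a reduced basis (leading left singular vectors of the snapshots).
U, _, _ = np.linalg.svd(S, full_matrices=False)
Ur = U[:, :r]

# 3. Project the dynamics and 're-simulate' a new initial condition cheaply.
Ar = Ur.T @ A @ Ur                  # r x r reduced operator
x0 = rng.standard_normal(n)
q = Ur.T @ x0                       # reduced state
for _ in range(steps):
    q = Ar @ q

# Compare against the full-resolution result for the same initial condition.
x_full = np.linalg.matrix_power(A, steps) @ x0
print("relative error of reduced re-simulation:",
      np.linalg.norm(Ur @ q - x_full) / np.linalg.norm(x_full))
```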

Capturing the real world
A recurring theme in this year’s reel is the need for new ways to capture the real world in digital form.

Sadly, we can’t find any information about Microsoft’s Scalable Real-time Volumetric Surface Reconstruction, which uses a Kinect to capture complex environments – think a low-cost alternative to LIDAR.

However, the others are available in full online.

A team from USC that includes the redoubtable Paul Debevec has developed a new method for simultaneously Acquiring Reflectance and Shape From Continuous Spherical Harmonic Illumination.

The device uses a spinning LED arm and five-camera array to capture data, and works on highly reflective objects. It will be interesting to see if it has the same kind of real-world potential as Debevec’s previous research.

But the real highlight is Linjie Luo, Hao Li and Szymon Rusinkiewicz’s work on Structure-Aware Hair Capture, which reconstructs very complex hairstyles from nothing more than still source images: as few as 50 of them.

The technique, which works on photos taken under ordinary lighting conditions, first reconstructs a point cloud, then uncovers a ‘wisp structure’ within it, from which the final hair strands are synthesised.

The results are “robust against occlusion and missing data and plausible for animation and simulation”. Cool stuff.
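To give a feel for the ‘wisp’ idea, here's a deliberately crude Python sketch: cluster a synthetic hair point cloud into groups and fit a rough centreline to each one. It's a stand-in for the concept only – the paper's actual wisp analysis is far more sophisticated, and the data and numbers here are invented:

```python
# Toy 'wisp' grouping: cluster a synthetic hair point cloud and fit a simple
# centreline per cluster. A stand-in for the idea of recovering wisp
# structure from points, not the paper's actual algorithm.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Fake point cloud: three curved strands of points, jittered.
t = np.linspace(0.0, 1.0, 80)
strands = [np.stack([t, np.sin(t * 3 + k), np.full_like(t, 0.1 * k)], axis=1)
           for k in range(3)]
points = np.concatenate(strands) + 0.02 * rng.standard_normal((240, 3))

# Group points into candidate wisps.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(points)

# Fit a straight-line centreline per wisp via PCA (a real system would fit
# curves and enforce strand connectivity).
for k in range(3):
    wisp = points[labels == k]
    centre = wisp.mean(axis=0)
    direction = np.linalg.svd(wisp - centre)[2][0]
    print(f"wisp {k}: {len(wisp)} points, direction {np.round(direction, 2)}")
```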

New ways to solve old problems
Much practical research involves better ways of tackling existing tasks. Into this category fall USC’s Interactive Authoring of Simulation-Ready Plants, which does pretty much what the name suggests; and Global Illumination with Radiance Regression Functions, which explores faster methods of calculating GI.
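The name of the latter gives away the general approach: fit a regression function offline that maps per-point shading inputs to an indirect-lighting value, then evaluate that cheap function at render time instead of recomputing GI. As an illustration of that idea only – the inputs, targets and network below are our own assumptions, not the paper's – here's a toy Python sketch:

```python
# Toy 'radiance regression' sketch: fit a small neural network that maps
# per-point shading inputs to an indirect-lighting value, so the expensive
# GI computation is replaced by a cheap regression at render time.
# The inputs, targets and network size here are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# Fake training data: (surface position, normal, light position) -> indirect
# radiance, with the synthetic 'ground truth' standing in for offline renders.
X = rng.uniform(-1.0, 1.0, size=(5000, 9))
y = np.maximum(0.0, (X[:, :3] * X[:, 3:6]).sum(axis=1)) * np.exp(
    -np.linalg.norm(X[:, :3] - X[:, 6:9], axis=1))

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                     random_state=0).fit(X, y)

# At 'render time', evaluate the regression instead of re-solving GI.
query = rng.uniform(-1.0, 1.0, size=(4, 9))
print("predicted indirect term:", np.round(model.predict(query), 3))
```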

Meanwhile, Stylizing Animation by Example has the double benefit of being tuned to the needs of commercial work and visually appealing to boot.

In the paper, researchers from Pixar and leading North American universities set out a new way to artistically stylise rendered 3D animation, working from a small set of hand-drawn source images.

Weird and wonderful
Finally, there are a couple of papers that probably won’t see much use in high-end graphics production, but which were just too cute to pass up.

In Make It Stand: Balancing Shapes for 3D Fabrication, researchers from ETH Zurich and Inria do just that, creating some of the most balletic 3D prints we’ve ever seen.

For proof, check out the gallery at the end of the video. We particularly liked the little chap standing on his head.
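The principle behind those prints is easy to state: an object stands when its centre of mass sits over the region it rests on. The paper gets there by carving interior voids and subtly deforming the shape; the little Python sketch below only tests the balance condition itself, on a made-up 2D voxel model:

```python
# Toy version of the balance criterion behind 'Make It Stand': an object is
# stable when its centre of mass projects inside the footprint it rests on.
# The real paper carves interior voids and deforms the shape to make that
# true; here we only test the condition on a 2D voxel model.
import numpy as np

def is_balanced(density, cell=1.0):
    """density[row, col]: 2D voxel densities, row 0 = bottom of the model."""
    rows, cols = np.nonzero(density > 0)
    mass = density[rows, cols]
    com_x = (cols * cell * mass).sum() / mass.sum()      # centre-of-mass x
    base_cols = cols[rows == rows.min()] * cell          # cells touching ground
    return base_cols.min() <= com_x <= base_cols.max()

# A leaning shape: narrow base on the left, heavy overhanging arm to the right.
shape = np.zeros((4, 6))
shape[0, 0:2] = 1.0          # base
shape[1:4, 1] = 1.0          # column
shape[3, 1:6] = 1.0          # overhanging arm
print("balanced as modelled?", is_balanced(shape))

# Carving most of the material out of the arm pulls the centre of mass
# back over the base.
shape[3, 2:6] = 0.05
print("balanced after carving?", is_balanced(shape))
```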

But the project we have the most personal need of is Handwriting Beautification Using Token Means.

In it, Microsoft’s Larry Zitnick sets out a method for taking the illegible notes you’ve scrawled on a tablet and automatically tidying them up, based on samples of your best handwriting.

Now if only someone could make our best handwriting legible as well, we’d be laughing. But that, sadly, is probably a task beyond even the most talented of graphics researchers.
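For the curious, the ‘token means’ of the title hints at how it works: similar strokes are treated as repeated tokens, and each new stroke is nudged towards the average of its group. Here's a toy Python sketch of that averaging step alone – a single assumed token and invented strokes, not Zitnick's actual pipeline:

```python
# Toy 'token means' beautification: resample pen strokes to a fixed length,
# then nudge each stroke towards the mean of its group. A sketch of the
# general idea only, not Zitnick's actual pipeline.
import numpy as np

def resample(stroke, n=32):
    """Resample an (m, 2) polyline to n evenly spaced points."""
    stroke = np.asarray(stroke, dtype=float)
    seg = np.linalg.norm(np.diff(stroke, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(seg)])
    target = np.linspace(0.0, dist[-1], n)
    return np.stack([np.interp(target, dist, stroke[:, i]) for i in range(2)],
                    axis=1)

def beautify(strokes, blend=0.6):
    """Pull every stroke towards the mean stroke (one 'token' assumed here)."""
    resampled = np.stack([resample(s) for s in strokes])
    token_mean = resampled.mean(axis=0)
    return [(1 - blend) * s + blend * token_mean for s in resampled]

# Three wobbly attempts at the same pen stroke.
t = np.linspace(0, np.pi, 20)
rng = np.random.default_rng(3)
attempts = [np.stack([t, np.sin(t)], axis=1) + 0.05 * rng.standard_normal((20, 2))
            for _ in range(3)]
cleaned = beautify(attempts)
print("spread before:", np.std([resample(a) for a in attempts], axis=0).mean())
print("spread after: ", np.std(cleaned, axis=0).mean())
```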

Read the full line-up of technical papers at Siggraph 2013 on the conference website