Tuesday, June 16th, 2015 Posted by Jim Thacker

Videos: the best of Siggraph 2015’s technical papers

Last month, ACM Siggraph posted its annual preview reel (above) showing the highlights of the technical papers due to be presented at Siggraph 2015, which takes place in Los Angeles from 9-13 August.

As usual, it provides a very watchable overview of the conference’s key themes, but for the detail, you need to turn to the full videos accompanying the papers – which are steadily appearing online as the show approaches.

Below, we’ve rounded up our own pick of the year’s technical advances in computer graphics: from the recreation of realistic human faces to new techniques for simulating mayonnaise and soap bubbles.

Fluid simulation


A Stream Function Solver for Liquid Simulations
Ryoichi Ando, Nils Thuerey and Chris Wojtan

The papers round-up usually brings a good crop of advances in fluid simulation, and this year is no exception – starting with this one from a team that includes Scanline VFX research lead and Sci-Tech Oscar winner Nils Thuerey.

The technique enables the simulation of systems including two phases – for example, immiscible liquids or liquids and gases – by computing only one of those phases.

The best examples are in the middle of the video and show the ‘glugging’ effect as a liquid flows through a hole in a sealed container under the influence of gravity, forcing bubbles of air back up through the stream.
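The title gives away the trick: rather than solving for velocity and pressure directly, the method works with a vector stream function whose curl gives the velocity. As a reminder of the standard identity involved (this is the textbook vector-calculus fact, not the paper's full discretisation):

```latex
\[
  \mathbf{u} \;=\; \nabla \times \boldsymbol{\psi},
  \qquad
  \nabla \cdot \mathbf{u} \;=\; \nabla \cdot (\nabla \times \boldsymbol{\psi}) \;\equiv\; 0
\]
```

Because any curl field is divergence-free by construction, incompressibility comes for free, and – as we read the abstract – only the liquid phase needs its own degrees of freedom, with the air handled implicitly.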



Co-dimensional Non-Newtonian Fluids
Bo Zhu, Minjae Lee, Ed Quigley and Ron Fedkiw

Double Sci-Tech Oscar winner Ron Fedkiw's team at Stanford was responsible for another eye-catching demo: a new method of simulating non-Newtonian fluids – those whose viscosity changes according to shear rate.

According to the abstract of the paper: “Improved treatment of viscosity on the rims of thin fluid sheets … allows us to capture their elusive, visually appealing twisting motion.”

Or in layman’s terms: better simulations of paint, molten plastic, mayonnaise, and melted cheese.
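For the uninitiated, 'viscosity changes according to shear rate' is often summarised with the textbook power-law (Ostwald–de Waele) model – a handy reference point, though not necessarily the constitutive model used in the paper:

```latex
\[
  \mu(\dot{\gamma}) \;=\; K\,\dot{\gamma}^{\,n-1}
\]
% \dot{\gamma}: shear rate, K: consistency index, n: flow-behaviour index.
% n < 1 gives shear-thinning fluids (paint, ketchup); n > 1 gives
% shear-thickening ones (cornstarch in water); n = 1 recovers a
% Newtonian fluid with constant viscosity.
```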



Restoring Missing Vortices for Advection-Projection Fluid Solvers
Xinxin Zhang, Robert Bridson and Chen Greif

Gaseous fluid simulations also got a look-in, in this paper from a University of British Columbia team that includes another Oscar winner: Naiad co-creator Robert Bridson.

Their new IVOCK (Integrated Vorticity of Convective Kinematics) scheme tackles the vorticity that standard solvers lose when advecting the velocity field – the rotational part of the flow responsible for turbulent detail.

In traditional VFX fluid solvers, using larger timesteps causes a noticeable loss of energy, resulting in simpler, less turbulent fluid flow.

IVOCK, which works with both semi-Lagrangian and FLIP solvers, minimises the problem, resulting in realistically turbulent smoke plumes that don’t come at the cost of greatly increased simulation times.
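In outline, the correction works by tracking how much vorticity goes missing during advection and adding it back via a stream-function solve. This is our paraphrase of the paper – the discretisation, boundary handling and choice of advection operator (written here as a generic \(\mathcal{A}\)) are all glossed over:

```latex
\[
\begin{aligned}
  \boldsymbol{\omega}^{n} &= \nabla \times \mathbf{u}^{n}
    && \text{measure vorticity before advection} \\
  \tilde{\mathbf{u}} = \mathcal{A}(\mathbf{u}^{n}), \quad
  \tilde{\boldsymbol{\omega}} &= \mathcal{A}(\boldsymbol{\omega}^{n})
    && \text{advect velocity and vorticity} \\
  \delta\boldsymbol{\omega} &= \tilde{\boldsymbol{\omega}} - \nabla \times \tilde{\mathbf{u}}
    && \text{vorticity lost by the velocity advection} \\
  \nabla^{2}\boldsymbol{\psi} = -\,\delta\boldsymbol{\omega}, \quad
  \mathbf{u}^{*} &= \tilde{\mathbf{u}} + \nabla \times \boldsymbol{\psi}
    && \text{restore it before the pressure projection}
\end{aligned}
\]
```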



Fluid Volume Modeling from Sparse Multi-view Images by Appearance Transfer
Makoto Okabe, Yoshinori Dobashi, Ken Anjyo and Rikio Onai

This project, from a team of researchers at leading Japanese universities and animation studio OLM Digital, takes an entirely different approach, reconstructing volumetric fluids from real-world reference footage.

The algorithm first roughly reconstructs the overall fluid volume from the source footage, then iteratively applies detail via appearance transfer. Finally, colour and turbulence are applied to the animation.

The technique works even when reference footage has been shot from a single angle, and generates a velocity field that can be used to make the resulting volumetric fluid interact with other 3D objects in a scene.
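The paper's appearance-transfer step is patch-based and considerably more sophisticated, but the coarse-then-refine structure can be illustrated with a toy, tomography-style loop. Everything below – the single orthographic view, the brightness-matching update, the function name – is a simplified stand-in for the real algorithm:

```python
import numpy as np

def reconstruct_volume(reference, depth=64, iterations=20):
    """Toy coarse-to-fine fluid reconstruction from one greyscale view.

    reference : 2D array in [0, 1], a frame of the source footage.
    Returns a 3D density field whose orthographic projection along z
    roughly matches the reference image.
    """
    h, w = reference.shape
    # Step 1: rough initial volume -- smear the image brightness evenly
    # along the unseen depth axis.
    density = np.repeat(reference[:, :, None] / depth, depth, axis=2)

    # Step 2: iteratively nudge the volume so its rendered appearance
    # (here just a plain sum along z) matches the reference.
    for _ in range(iterations):
        rendered = density.sum(axis=2)
        ratio = reference / np.maximum(rendered, 1e-6)
        density *= ratio[:, :, None]          # multiplicative update
        density = np.clip(density, 0.0, 1.0)  # keep densities plausible

    # Step 3 (colour and turbulence) is omitted entirely here.
    return density

# Example: reconstruct a blurry synthetic "plume".
y, x = np.mgrid[0:128, 0:128]
frame = np.exp(-((x - 64) ** 2 + (y - 80) ** 2) / 600.0)
volume = reconstruct_volume(frame)
print(volume.shape, float(volume.sum(axis=2).max()))
```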

Reconstructing skin, hair and facial features


Skin Microstructure Deformation with Displacement Map Convolution
Koki Nagano, Graham Fyffe, Oleg Alexander, Jernej Barbič, Hao Li, Abhijeet Ghosh, Paul Debevec

Another theme of this year’s round-up was the recreation of fine details of the human body, particularly the face.

Sadly, the video accompanying the paper on Detailed Spatio-Temporal Reconstruction of Eyelids (the title may sound like a joke, but the results look really cool) isn’t online yet – but the paper above is some consolation.

The authors used detailed scans of real skin to quantify how it smooths under tension and wrinkles under compression, mimicking the results in 3D by dynamically blurring or sharpening a displacement map.

The result is GPU-accelerated simulation of ultra-fine skin details – less than 0.1mm in size – in response to changes in facial expression.
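The paper derives its filters from measured microgeometry, but the basic move – blur the displacement map where the skin is stretched, sharpen it where it is compressed – is easy to sketch. The parameter values and the per-texel `strain` input below are invented for illustration (greater than 1.0 for tension, less than 1.0 for compression):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def deform_microstructure(displacement, strain, max_sigma=2.0):
    """Blur micro-displacement under tension, sharpen it under compression.

    displacement : 2D height map of skin micro-detail.
    strain       : 2D per-texel stretch factor (1.0 = rest state).
    """
    blurred = gaussian_filter(displacement, sigma=max_sigma)
    sharpened = displacement + (displacement - blurred)   # unsharp mask

    # Map strain to a blend weight: full blur at strong stretch,
    # full sharpen at strong compression, untouched at rest.
    t = np.clip((strain - 1.0) / 0.3, -1.0, 1.0)
    return np.where(t >= 0,
                    displacement * (1 - t) + blurred * t,
                    displacement * (1 + t) - sharpened * t)

# Example on synthetic noise standing in for scanned skin pores.
rng = np.random.default_rng(0)
disp = gaussian_filter(rng.standard_normal((256, 256)), 1.5)
strain = np.full_like(disp, 1.2)            # a uniformly stretched patch
smoothed = deform_microstructure(disp, strain)
print(disp.std(), smoothed.std())           # detail contrast drops under tension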



Single-View Hair Modeling Using a Hairstyle Database
Liwen Hu, Chongyang Ma, Linjie Luo, Hao Li

Hao Li also co-authored this paper on single-view hair modelling, which takes a novel approach to speeding up the creation of detailed CG hairstyles.

Draw a few reference strokes over a source photo to represent the overall boundaries and flow of an actor’s hair, and the algorithm synthesises a CG hairstyle to match, drawing on a database of source 3D hair models.
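The retrieval step can be pictured as a nearest-neighbour search over stroke descriptors. This is a deliberately crude sketch – the paper actually matches, deforms and combines strands from the database – and the 'average direction and length' descriptor, function names and tiny database below are all invented:

```python
import numpy as np

def stroke_descriptor(strokes):
    """Summarise user strokes (lists of 2D points) as one flat vector:
    here just the average direction and length of each stroke."""
    feats = []
    for pts in strokes:
        pts = np.asarray(pts, dtype=float)
        seg = pts[-1] - pts[0]
        length = np.linalg.norm(seg)
        feats.append(np.concatenate([seg / max(length, 1e-6), [length]]))
    return np.mean(feats, axis=0)

def retrieve_hairstyle(user_strokes, database):
    """Return the database hairstyle whose stored descriptor is closest
    to the user's sketch."""
    query = stroke_descriptor(user_strokes)
    names = list(database)
    dists = [np.linalg.norm(query - database[n]) for n in names]
    return names[int(np.argmin(dists))]

# Tiny made-up database of descriptors precomputed from exemplar hairstyles.
db = {
    "long_straight": np.array([0.0, 1.0, 300.0]),
    "short_wavy":    np.array([0.3, 0.6, 120.0]),
}
strokes = [[(10, 0), (12, 310)], [(40, 0), (38, 295)]]
print(retrieve_hairstyle(strokes, db))    # -> "long_straight"
```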

Shape editing and retopology


Semantic Shape Editing Using Deformation Handles
Mehmet Ersin Yumer, Siddhartha Chaudhuri, Jessica K. Hodgins, Levent Burak Kara

Ever wished that you could design new products simply by adjusting abstract qualities like ‘stylishness’, ‘comfort’ or ‘durability’ instead of pulling points around in 3D?

The demo above, from a team at Carnegie Mellon and Cornell Universities, interactively deforms a source model to return a result that matches the real-world properties specified by the user.

The workflow is aimed more at design exploration and user testing than generating a production-ready model, and it requires quite a lot of specialist input, details of which can be found in the paper.

But it’s neat to see a model change in real time in response to sliders with titles like ‘fashionable’ or ‘sporty’.
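Conceptually, the mapping from slider to shape can be as simple as a learned linear model from semantic scores to deformation-handle parameters. The sketch below is a toy stand-in for the paper's crowdsourced, constrained setup; the attribute names and training data are fabricated:

```python
import numpy as np

# Fabricated training data: each row pairs the semantic ratings of an
# existing design ("sporty", "comfortable") with the handle parameters
# that produced it (e.g. seat width, handlebar drop, wheel size).
ratings = np.array([[0.2, 0.9],
                    [0.8, 0.3],
                    [0.5, 0.5],
                    [0.9, 0.2]])
handles = np.array([[30.0,  2.0, 26.0],
                    [18.0, 12.0, 28.0],
                    [24.0,  7.0, 27.0],
                    [16.0, 14.0, 28.5]])

# Fit a linear map (with bias) from ratings to handle parameters.
X = np.hstack([ratings, np.ones((len(ratings), 1))])
W, *_ = np.linalg.lstsq(X, handles, rcond=None)

def shape_for(sporty, comfortable):
    """Handle parameters predicted for a given slider setting."""
    return np.array([sporty, comfortable, 1.0]) @ W

print(shape_for(sporty=1.0, comfortable=0.1))   # aggressive racer
print(shape_for(sporty=0.1, comfortable=1.0))   # cushy cruiser
```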



Data-Driven Interactive Quadrangulation
Giorgio Marcias, Kenshi Takayama, Nico Pietroni, Daniele Panozzo, Olga Sorkine, Enrico Puppo, Paolo Cignoni

Another paper with obvious potential to transform the way we create 3D models – or retopologise them, at least – is this one on interactive quadrangulation, produced by a large international team.

Working from a large set of source models, the researchers extracted a library of readymade quad patches.

When a region on the surface of a new model is selected, the system queries the database for matching patches, enabling a user to retopologise that region in real time by drawing simple guide curves.
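At its simplest, the database query boils down to matching the selected region's boundary – how many sides it has, and how many edges lie along each side – against a library of precomputed quad patches. A toy lookup, with an invented patch library (the real system also ranks candidate patches by how well their interior edge flow fits the guide curves):

```python
# Toy patch library keyed by the boundary subdivision of a region: a tuple
# of edge counts per side, stored in canonical (rotation-minimal) form.
PATCH_LIBRARY = {
    (3, 3, 3, 3): "regular 3x3 grid",
    (2, 4, 2, 4): "4x2 strip",
    (2, 2, 2, 3, 3): "five-sided patch with one interior irregular vertex",
}

def canonical(boundary):
    """Make the key rotation-invariant, since the user may select the
    region starting from any side."""
    boundary = list(boundary)
    rotations = [tuple(boundary[i:] + boundary[:i]) for i in range(len(boundary))]
    return min(rotations)

def query_patches(boundary):
    """Return the library patch matching the selected region's boundary,
    or None if no match is stored."""
    return PATCH_LIBRARY.get(canonical(boundary))

print(query_patches([4, 2, 4, 2]))   # -> "4x2 strip" (rotation doesn't matter)
print(query_patches([5, 5, 5, 5]))   # -> None (no such patch stored)
```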

If you want to try for yourself, an earlier version of the tech, minus the database look-up, can be downloaded from ETH’s Interactive Geometry Lab website. You can find more details in this thread on The Foundry’s forum.

3D printing


Computational Hydrographic Printing
Yizhong Zhang, Chunji Yin, Changxi Zheng and Kun Zhou

As well as new ways to create 3D models, the papers round-up increasingly features new ways to print them.

This video on hydrographic printing – transferring ink from a thin PVA film floating on the surface of a water bath to an object being dipped into it – was the undoubted highlight.

While the technique has been around for a while, problems aligning the film with the object to be printed meant it was effectively confined to simple abstract colour patterns.

Using off-the-shelf hardware, the team built a motor rig driven by a computer vision system to control the transfer more accurately, based on a viscous fluid simulation of how the film will behave in reality.

The result is equal parts garage engineering and graphics research, and it’s kind of magical to watch a plain white model being dipped into the water tank, and a perfectly printed leopard or zebra coming out.
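The key computation is predicting, for every point on the object, where it will first touch the floating film, so the colour map can be printed pre-distorted. The sketch below is drastically simplified: it assumes a straight vertical dip, and the `radial_stretch` correction is an invented placeholder for the viscous thin-film simulation the paper actually uses:

```python
import numpy as np

def film_coordinates(vertices, stretch=None):
    """Predict where each object point will first touch the floating film.

    Simplification: with a straight vertical dip, point (x, y, z) meets the
    film at roughly its own (x, y) position once the object has descended
    far enough. `stretch` stands in for the film-distortion prediction;
    None means no distortion at all.
    """
    vertices = np.asarray(vertices, dtype=float)
    xy = vertices[:, :2].copy()
    if stretch is not None:
        xy = stretch(xy, vertices[:, 2])
    return xy

def radial_stretch(xy, depth, k=0.05):
    """Invented toy distortion: deeper points drag the film further, so
    their film coordinates are pulled inward to compensate."""
    return xy / (1.0 + k * np.abs(depth))[:, None]

# Three vertices of a made-up object (x, y, z), z = depth below the film.
verts = [(0.0, 0.0, 0.0), (2.0, 1.0, 4.0), (-3.0, 0.5, 8.0)]
print(film_coordinates(verts))                           # undistorted mapping
print(film_coordinates(verts, stretch=radial_stretch))   # with toy correction
```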

Just fun to watch


Time-lapse Mining from Internet Photos
Ricardo Martin-Brualla, David Gallup, Steven M. Seitz

Lastly, two videos that have less obvious applications in day-to-day CG work, but which are a lot of fun to watch.

The first, from a team at the University of Washington and Google, mines online photo galleries to crowd-source images of well-known buildings and landmarks.

By sorting the photos by date, warping them onto a common viewpoint, and compensating for changes in lighting, the system can create instant timelapse footage.

The result – skyscrapers rising, glaciers melting and gardens blooming – is curiously soothing. Just pity the poor Swiss Guard who has to stand so still that he becomes a fixture in the timelapse footage of the Vatican.
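The pipeline described above – sort by date, warp to a common viewpoint, compensate lighting, filter over time – maps fairly directly onto a few lines of array code, provided you already have per-photo timestamps and registration transforms from a structure-from-motion step. Everything below is a simplified stand-in for the paper's method (affine warps instead of full homographies, a crude brightness normalisation, a sliding temporal median):

```python
import numpy as np
from scipy.ndimage import affine_transform

def make_timelapse(photos, dates, warps, window=5):
    """Toy time-lapse from registered crowd-sourced photos.

    photos : list of 2D greyscale images (same size).
    dates  : list of timestamps, used only for ordering.
    warps  : list of 2x3 affine matrices taking each photo to the
             common reference viewpoint.
    """
    order = np.argsort(dates)
    frames = []
    for i in order:
        a = np.asarray(warps[i], dtype=float)
        img = affine_transform(photos[i], a[:, :2], offset=a[:, 2])
        img = img - img.mean() + 0.5       # crude exposure compensation
        frames.append(img)
    frames = np.stack(frames)

    # A temporal median over a sliding window suppresses people, cars and
    # one-off lighting, leaving only the slow changes.
    return np.stack([
        np.median(frames[max(0, t - window): t + 1], axis=0)
        for t in range(len(frames))
    ])

# Tiny synthetic example: a square that slowly "rises" across 20 photos.
rng = np.random.default_rng(1)
photos, dates, warps = [], [], []
for t in range(20):
    img = np.zeros((64, 64)) + 0.1 * rng.standard_normal((64, 64))
    img[20:20 + t, 20:40] += 1.0                                 # the building
    photos.append(img)
    dates.append(t)
    warps.append(np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]))   # identity warp
print(make_timelapse(photos, dates, warps).shape)                # (20, 64, 64)
```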



Discrete Circulation-Preserving Vortex Sheets for Soap Films and Foams
Fang Da, Christopher Batty, Chris Wojtan, Eitan Grinspun

And finally, the award for scientific pun of the year goes to this paper on the simulation of soap bubbles. The more descriptive, scientifically accurate subtitle can be seen in the link above.

But the actual title of the paper? ‘Double Bubbles Sans Toil and Trouble’. The video itself, which shows meticulously rendered 3D bubbles forming, fusing and popping, is very pretty, too.

Visit the technical papers section of the Siggraph 2015 website

Read Ke-Sen Huang’s invaluable list of Siggraph 2015 papers available on the web
(More detailed than the Siggraph website, and regularly updated as new material is released publicly)