Monday, June 19th, 2017 Posted by Jim Thacker

Videos: the best of Siggraph 2017’s technical papers


If you follow the Siggraph YouTube channel, you may already have seen the trailer above, which previews some of the papers due to be presented at Siggraph 2017. The conference takes place in LA from 30 July to 3 August 2017.

As usual, it provides a good overview of the conference’s key themes, but for the detail, you need to turn to the full videos accompanying the papers – which are steadily appearing online as the show approaches.

Below, we’ve picked out our own favourites, including several not featured in Siggraph’s own round-up, covering everything from fluid simulation and fur rendering to a new method for modelling tentacles.

The resulting 15 videos showcase some of the most innovative, the most visually compelling – and sometimes, just the plain weirdest – research being done in computer graphics anywhere in the world.

Simulation


A Multi-Scale Model for Simulating Liquid-Hair Interactions
Yun (Raymond) Fei, Henrique Teles Maia, Christopher Batty, Changxi Zheng, Eitan Grinspun

As ever, simulation proved a rich field for new research – and increasingly, not just the simulation of isolated physical phenomena, but of the interactions between them.

This paper, from researchers at Columbia and the University of Waterloo, explores the interaction between fluids and hair, including the way liquid is caught between strands, and the way it flows down them.

In the demo, the strands look rather coarse – almost more like wire than hair – but the fluid flow looks good, as does the way liquid drips from the hair: achieved by converting the flow to APIC particles.

The simulation framework, libWetHair, is open-source, and available for Windows, Linux and OS X, while the demo itself uses Houdini for surface reconstruction and rendering, so you can try it for yourself.
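
If you want a feel for what the APIC transfer involves, the sketch below shows a minimal affine particle-to-grid momentum splat in 2D. It is our own illustration of the general APIC idea rather than libWetHair's actual code, and all function and variable names are ours.

```python
# Illustrative APIC-style particle-to-grid momentum transfer in 2D.
# A hedged sketch of the general APIC idea, not libWetHair code.
# Assumes particles lie strictly inside the grid (no boundary clamping).
import numpy as np

def particle_to_grid(positions, velocities, affines, masses, grid_res, dx):
    """Splat particle momentum, including the per-particle affine term, onto a grid."""
    mom = np.zeros((grid_res, grid_res, 2))   # accumulated grid momentum
    mass = np.zeros((grid_res, grid_res))     # accumulated grid mass

    for x_p, v_p, C_p, m_p in zip(positions, velocities, affines, masses):
        base = np.floor(x_p / dx).astype(int)
        for di in range(2):                   # 2x2 bilinear stencil, for brevity
            for dj in range(2):
                node = base + np.array([di, dj])
                x_i = node * dx
                w = np.prod(1.0 - np.abs(x_i - x_p) / dx)   # bilinear weight
                # APIC: the splatted velocity includes the affine field C_p (x_i - x_p)
                mom[node[0], node[1]] += w * m_p * (v_p + C_p @ (x_i - x_p))
                mass[node[0], node[1]] += w * m_p

    vel = np.divide(mom, mass[..., None],
                    out=np.zeros_like(mom), where=mass[..., None] > 0)
    return vel, mass
```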



Multi-species Simulation of Porous Sand and Water Mixtures
Andre Tampubolon, Theodore Gast, Gergely Klár, Chuyuan Fu, Joseph Teran, Chenfanfu Jiang, Ken Museth

Two of the pioneers of the APIC particle method, Chenfanfu Jiang and Joseph Teran, also contributed to a paper on the interaction between water and another type of material: in this case, porous sand.

Their method, intended to mimic landslides, uses continuum mixture theory with two separate phases, each individually obeying conservation of mass and momentum, coupled via a momentum exchange term.

The sand is modelled as an elastoplastic material whose cohesion varies with water saturation, and the return mapping for sand plasticity avoids the volume-gain artefacts of the traditional Drucker-Prager model.
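
Schematically – simplifying the paper's formulation, and using our own notation – the per-phase momentum balances in a two-phase sand/water mixture look like this, with each phase also obeying its own mass conservation equation:

```latex
% Hedged sketch of a generic two-phase (sand s, water w) mixture momentum balance,
% coupled by a drag-like momentum exchange term; notation is ours, not the paper's.
\begin{aligned}
\rho_s \frac{D \mathbf{v}_s}{D t} &= \nabla \cdot \boldsymbol{\sigma}_s + \rho_s \mathbf{g} + \mathbf{f}_{int}, \\
\rho_w \frac{D \mathbf{v}_w}{D t} &= \nabla \cdot \boldsymbol{\sigma}_w + \rho_w \mathbf{g} - \mathbf{f}_{int},
\qquad
\mathbf{f}_{int} \propto (\mathbf{v}_w - \mathbf{v}_s)
\end{aligned}
```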

The results look good, as do the chances of them being used in entertainment work: Jiang and Teran have formed their own company, Jixie Effects, while co-authors Gergely Klár and Ken Museth are at DreamWorks.



Anisotropic Elastoplasticity for Cloth, Knit and Hair Frictional Contact
Chenfanfu Jiang, Theodore Gast, Joseph Teran

Jiang and Teran also contributed to a second paper on elastoplasticity, this time using a novel version of the Material Point Method to simulate contacts between cloth or hair and other materials.

They use a lot of striking test cases, including a jumper being torn apart and a bag being filled with slime.

However, the most eye-catching demo is probably the one right at the start of the video: 7 million grains of coloured sand flow over a sheet of cloth, then fall to the ground to form the Siggraph logo.



Lighting Grid Hierarchy for Self-illuminating Explosions
Can Yuksel, Cem Yuksel

There is a similar demo from this paper on self-illumination in simulated explosions: in this case, 200,000 multicoloured point lights cascading over a solid version of the Siggraph logo.

The simulation is intended to demonstrate a new, more efficient means of calculating illumination within clouds of smoke: converting the original volumetric lighting data into large numbers of point lights.

The authors use a lighting grid hierarchy to approximate volumetric illumination at different resolutions, focusing on temporal coherency to avoid flicker, with results visually indistinguishable from path tracing.
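
To give a flavour of the idea, the sketch below – our own simplification, not the authors' code – bins the point lights into progressively coarser grids, then shades a point using nearby lights from the finest level and distant lights from a coarser one:

```python
# Simplified illustration of a lighting-grid-hierarchy-style shading loop.
# Our own sketch of the general idea; the paper's blending scheme is more sophisticated.
import numpy as np

def build_hierarchy(light_pos, light_intensity, base_cell, levels):
    """Bin point lights into grids of doubling cell size; each cell stores its
    total intensity and an intensity-weighted average position."""
    hierarchy = []
    for lvl in range(levels):
        cell = base_cell * (2 ** lvl)
        cells = {}
        for p, I in zip(light_pos, light_intensity):
            key = tuple(np.floor(p / cell).astype(int))
            acc = cells.setdefault(key, [np.zeros(3), 0.0])
            acc[0] += I * p
            acc[1] += I
        hierarchy.append([(acc[0] / acc[1], acc[1]) for acc in cells.values()])
    return hierarchy

def shade(x, hierarchy, near_radius):
    """Evaluate nearby lights individually (finest level), distant ones as aggregates."""
    L = 0.0
    finest, coarsest = hierarchy[0], hierarchy[-1]
    for p, I in finest:
        if np.linalg.norm(p - x) < near_radius:
            L += I / max(np.dot(p - x, p - x), 1e-6)   # inverse-square falloff
    for p, I in coarsest:
        if np.linalg.norm(p - x) >= near_radius:
            L += I / max(np.dot(p - x, p - x), 1e-6)
    return L
```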

Again, it’s a technique that could quickly see use in production: Can Yuksel is another veteran of DreamWorks Animation, and is currently senior FX TD at Industrial Light & Magic.



Interpolations of Smoke and Liquid Simulations
Nils Thuerey

Veteran fluid dynamics researcher Nils Thuerey, formerly of Scanline VFX, contributed to several Siggraph papers this year, including one on cutting elastic sheets, and one on data-driven synthesis of smoke.

However, our favourite demo is from the paper above, which describes a new method of interpolating between smoke and liquid simulations.

At 05:00 in the video, you can see the two source sims being in-betweened to create a strange hybrid material that doesn’t just swirl like smoke or drip like water, but does both at the same time.



Inside Fluids: Clebsch Maps for Visualization and Processing
Albert R. Chern, Felix Knöppel, Ulrich Pinkall, Peter Schröder

Nineteenth-century German mathematician Alfred Clebsch makes an unexpected reappearance at Siggraph this year in the form of the eponymous Clebsch maps, used in the paper above to encode velocity fields.

The method is used primarily as a means of visualising fluid flow: at 02:00 in the video above, you can see a very beautiful visualisation of the vortices shed from a hummingbird’s wings.

However, it also has potential applications for simulation work: in the paper, the researchers note that it can be used to enhance sims through the introduction of subgrid vorticity.
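
For context – and this is the classical construction rather than the paper's spherical variant – a Clebsch map represents a velocity field through a pair of scalar functions, with vortex lines falling on the intersections of their level sets:

```latex
% Classical Clebsch representation of a velocity field (not the paper's
% spherical generalisation): a potential phi plus two scalar fields lambda, mu.
\mathbf{u} = \nabla \phi + \lambda \nabla \mu,
\qquad
\boldsymbol{\omega} = \nabla \times \mathbf{u} = \nabla \lambda \times \nabla \mu
```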



Bounce Maps: An Improved Restitution Model for Real-Time Rigid-Body Impact
Jui-Hsien Wang, Rajsekhar Setaluri, Dinesh K. Pai, Doug L. James

If you’re feeling that solids have taken a back seat to fluids so far, this paper from researchers at Stanford and the University of British Columbia should go some way towards redressing the balance.

Whereas standard rigid body sims assume that dynamics are governed by a single, global constant, the coefficient of restitution, their method allows the value to vary across the surface of colliding objects.

The resulting one-body values are then used to approximate the two-body coefficient of restitution.

When used in dynamics sims, such ‘bounce maps’ result in more complex, visually richer behaviour, with an object rebounding in quite different ways according to the exact position of the impact on its surface.
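
In a rigid-body solver, using such a map might look roughly like the hypothetical sketch below, in which each body carries a per-surface restitution lookup that is sampled at the contact point. The two one-body values are combined here with a simple product, whereas the paper derives a more principled approximation:

```python
# Hypothetical sketch of applying per-surface ("bounce map") restitution at a
# contact; the sampling and combination rule are illustrative only.
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class Body:
    velocity: np.ndarray                        # linear velocity (rotation ignored for brevity)
    inv_mass: float
    bounce_map: Callable[[np.ndarray], float]   # surface point -> restitution in [0, 1]

def resolve_contact(a: Body, b: Body, point: np.ndarray, normal: np.ndarray):
    """Apply a normal impulse using a spatially varying coefficient of restitution."""
    e_a = a.bounce_map(point)    # one-body restitution of body A at the contact point
    e_b = b.bounce_map(point)    # one-body restitution of body B at the contact point
    e = e_a * e_b                # simple combination; the paper's formula differs

    v_rel = np.dot(a.velocity - b.velocity, normal)
    if v_rel >= 0.0:             # already separating: nothing to do
        return
    j = -(1.0 + e) * v_rel / (a.inv_mass + b.inv_mass)
    a.velocity += j * a.inv_mass * normal
    b.velocity -= j * b.inv_mass * normal
```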

Modeling


Regularized Kelvinlets: Sculpting Brushes based on Fundamental Solutions of Elasticity
Fernando de Goes, Doug L. James

One of the perks of working at Pixar is that you get to use its assets, so part of the fun of Fernando de Goes and Doug L. James’ demo video is seeing characters from Finding Dory deformed into strange shapes.

Their technique for 3D sculpting and 2D image editing is based on the response of real elastic materials to the forces generated by common modelling operations like grab, scale, twist and pinch.

Being physically plausible, the method avoids the artefacts generated by traditional modelling tools, such as the changes in volume created by grab brushes: you can see a comparison with Maya’s Grab Tool at 00:55.
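
For the curious, the grab brush boils down to a closed-form displacement kernel. The sketch below is our own transcription of the commonly cited regularized Kelvinlet formula – consult the paper for the exact expressions and for the other brush types:

```python
# Sketch of a regularized-Kelvinlet-style grab brush (our transcription of the
# commonly cited closed form; see the paper for the exact expression).
import numpy as np

def kelvinlet_grab(x, x0, f, eps, mu=1.0, nu=0.45):
    """Displacement at point x due to a regularized point force f applied at x0.

    eps -- regularisation radius (brush size); mu -- stiffness; nu -- Poisson ratio.
    """
    a = 1.0 / (4.0 * np.pi * mu)
    b = a / (4.0 * (1.0 - nu))
    r = x - x0
    r_eps = np.sqrt(np.dot(r, r) + eps * eps)
    term1 = ((a - b) / r_eps + a * eps * eps / (2.0 * r_eps ** 3)) * f
    term2 = (b / r_eps ** 3) * np.dot(r, f) * r
    return term1 + term2

# Usage: displace every vertex of a mesh by the brush response, e.g.
# verts += np.array([kelvinlet_grab(v, brush_pos, brush_force, brush_radius) for v in verts])
```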

The result? Hank stays looking like an octopus, no matter into what strange shapes you twist him.



Skippy: Single View 3D Curve Interactive Modeling
Vojtech Krs, Ersin Yumer, Nathan Carr, Bedrich Benes, Radomir Mech

‘Skippy’, a new algorithm from a team at Purdue University and Adobe Research, lets you draw curves around complex 3D objects without ever having to adjust your camera view.

The method divides 2D strokes drawn from a single viewpoint into continuous segments, duplicating those that could fall in front of or behind other objects, then finds an optimally smooth 3D path connecting them.

The result is a 3D stroke that hugs the surface of existing geometry: the demo shows an unfortunate ship being engulfed by the tentacles of a kraken, and snakes wrapping around the head of a medusa.

You can even add temporary geometry to the scene specifically to guide the curves, then delete it once they have been created, leaving the snakes coiling through empty air.

Animation



Implicit Crowds: Optimization Integrator for Robust Crowd Simulation
Ioannis Karamouzas, Nick Sohre, Rahul Narain, Stephen J. Guy

Most of this year’s papers on animation seem to be refinements of existing techniques rather than entirely new ones, but there were still a couple that caught our eye.

One covered a new kind of neural network to control animation blending in games – a paper co-authored by Method Studios’ Jun Saito; the other was this one on the use of implicit integration for crowd simulation.

Unlike traditional methods, the simulation does not break down as the size of the time steps used in the calculation increases, and the results mimic some interesting behaviours of real crowds, like lane formation.
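
The 'optimization integrator' of the title refers to the now-standard trick of recasting an implicit time step as a minimisation problem – roughly, in our own simplified notation, with E(x) encoding the collision-avoidance energy between agents:

```latex
% Generic optimization form of an implicit time step (our simplified notation):
% the new positions minimise an inertia term plus an interaction energy E.
\mathbf{x}^{n+1} = \arg\min_{\mathbf{x}} \;
\frac{1}{2\,\Delta t^{2}}
\left\lVert \mathbf{x} - \left(\mathbf{x}^{n} + \Delta t\,\mathbf{v}^{n}\right) \right\rVert_{M}^{2}
+ E(\mathbf{x})
```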

You can see how it holds up in large sims towards the end of the video, where virtual characters navigate a maze, with crowds from five different starting positions mingling to form an orderly queue at a single exit.

Capture technologies



Light Field Video Capture Using a Learning-Based Hybrid Imaging System
Ting-Chun Wang, Jun-Yan Zhu, Nima Khademi Kalantari, Alexei A. Efros, Ravi Ramamoorthi

By enabling users to change settings after footage has been captured, light field video cameras open up new possibilities for VFX artists, but their cost puts them beyond the reach of all but the largest studios.

This paper from a team at Berkeley and UC San Diego offers an ingenious low-cost solution: one of Lytro’s consumer cameras captures light-field data at 3fps, while a standard DSLR captures 2D video at 30fps.

The two can then be used to reconstruct the missing light-field frames via a learning-based approach, using flow estimation to warp the input images, then appearance estimation to combine the warped images.
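
In outline, the reconstruction amounts to a warp-then-combine step per missing view. The sketch below is a heavily simplified, hypothetical version: the helper functions are trivial stand-ins for the paper's learned flow and appearance networks, and every name is ours:

```python
# Hypothetical outline of the warp-then-combine reconstruction described above.
# The helpers are trivial stand-ins, not the paper's networks.
import numpy as np

def estimate_flow(src, dst):
    # Placeholder: a real implementation would use a learned flow network.
    return np.zeros(src.shape[:2] + (2,))

def warp(img, flow):
    # Placeholder: a real implementation would resample img along the flow field.
    return img

def combine(candidates):
    # Placeholder for the appearance-estimation network: here just an average.
    return np.mean(candidates, axis=0)

def reconstruct_view(prev_lf_view, next_lf_view, dslr_frame):
    """Reconstruct one missing light-field view at one in-between time step."""
    warped = [warp(prev_lf_view, estimate_flow(prev_lf_view, dslr_frame)),
              warp(next_lf_view, estimate_flow(next_lf_view, dslr_frame)),
              dslr_frame]
    return combine(warped)
```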

The result is low-cost, 30fps light-field video, enabling users to change the focal point of a shot in real time, or even move the camera during playback – by a few degrees either way, at least.

Lighting and rendering


A Practical Extension to Microfacet Theory for the Modeling of Varying Iridescence
Laurent Belcour, Pascal Barla

Ever wondered why leather has a certain zing in offline renders that it lacks in games? It may be down to thin-film iridescence: the subtle rainbow hues generated by the film of natural grease on its surface.

Whereas spectral rendering engines like Maxwell can reproduce the look of goniochromatic materials – those whose colour varies with viewing angle – real-time engines limited to RGB colour components can’t.

Or at least, they couldn’t. This paper combats the problem – aliasing in the spectral domain – by antialiasing a thin-film model, incorporating it into microfacet theory, and integrating it into a real-time engine.
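
The underlying physics is classic thin-film interference: light reflected from the top and bottom of a film of thickness d picks up a wavelength-dependent path difference, which is why naive RGB point-sampling aliases. Loosely, in our own notation, the RGB response has to integrate the spectral reflectance against the colour sensitivity curves rather than sample it at three wavelengths:

```latex
% Thin-film optical path difference and the RGB projection (our notation):
% n_2 is the film's refractive index, theta_t the refraction angle inside it,
% and s_c the sensitivity curve for colour channel c.
\Delta = 2\, n_2\, d \cos\theta_t,
\qquad
R_{c} = \int R(\lambda;\, d, \theta)\,\bar{s}_{c}(\lambda)\, d\lambda,
\quad c \in \{r, g, b\}
```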

The result? Realistic leather, car paint and soap bubbles that render at 30fps. Co-author Laurent Belcour works for Unity Technologies, so the chances of actually seeing the system in use in games are good.



An Efficient and Practical Near and Far Field Fur Reflectance Model
Ling-Qi Yan, Henrik Wann Jensen, Ravi Ramamoorthi

Standard DCC software treats hair and fur as if they were identical. But in reality, the fibres that make up animal fur have a distinct medulla – an inner core – that human hairs lack.

This paper, from a team at Berkeley and UC San Diego, builds on the authors’ existing double-cylinder model for fur fibres, refining the way in which light scattering through the medulla is calculated.

The method preserves the standard Marschner R, TT and TRT scattering modes used for rendering hair, making it easy to integrate into existing software, but adds two new extensions for fur: TTs and TRTs.
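
Structurally, that makes the fibre model a sum over lobes, with two extra medulla-scattered terms added to the familiar three. The snippet below is a deliberately simplified illustration – the lobe functions are constant stand-ins, not the published expressions:

```python
# Deliberately simplified sketch: a fur reflectance model as a sum of lobes – the
# classic hair terms (R, TT, TRT) plus the paper's medulla-scattered variants
# (TTs, TRTs). The lobe evaluations below are constant stand-ins.

def make_stub(weight):
    return lambda wi, wo, params: weight      # placeholder lobe evaluation

LOBES = {
    "R":    make_stub(0.2),   # surface reflection off the cuticle
    "TT":   make_stub(0.3),   # transmit - transmit
    "TRT":  make_stub(0.2),   # transmit - internal reflect - transmit
    "TTs":  make_stub(0.2),   # TT scattered by the medulla (fur-specific)
    "TRTs": make_stub(0.1),   # TRT scattered by the medulla (fur-specific)
}

def fur_reflectance(wi, wo, params):
    """Total fibre response is simply the sum over lobes."""
    return sum(lobe(wi, wo, params) for lobe in LOBES.values())
```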

Further practical optimisations enable smooth transitions when switching between rendering near and far objects, ensuring that your CG cats render realistically, no matter how close to the camera they are.



Example-Based Synthesis of Stylized Facial Animations
Jakub Fiser, Ondrej Jamriska, David Simons, Eli Shechtman, Jingwan Lu, Paul Asente, Michal Lukac, Daniel Sykora

We’ve covered style transfer – the transfer of the colour palette and fine geometric forms from one image to another – on CG Channel before. But we’ve never featured a method specifically aimed at facial animation.

The video above shows the visual style of a range of natural media, including oil paintings, watercolours, pencil sketches, and even a bronze statue, transferred to video footage of live actors.

The eyes on the resulting animated statue look a bit creepy, but all of the other results are seamless, opening up new possibilities for visually stylised animation generated automatically from reference footage.

Several of the authors work at Adobe, so cross your fingers for related tools in the firm’s future releases.

Just plain amazing


VoCo: Text-based Insertion and Replacement in Audio Narration
Zeyu Jin, Gautham Mysore, Stephen DiVerdi, Jingwan Lu, Adam Finkelstein

If you followed last year’s Adobe Max conference, you may already be familiar with VoCo, the firm’s text-based system for editing recorded speech, but the demo is too compelling not to include here.

Whereas current tools enable editors to rearrange recorded speech by cutting and pasting existing words inside a text transcript, VoCo lets you type entirely new words, and have the software synthesise them.

The result sounds eerily like the original speaker’s voice: there’s sometimes an odd change of emphasis at the start of a new word, but picking one of the variant intonations that VoCo generates usually fixes it.

Adobe claims that VoCo generates better results in a second than a human audio engineer can manage in 20 minutes of painstaking splicing – and on the evidence of the demo above, we’re inclined to believe them.


More research online
That’s it for this round-up – although it covers only a small sample of the research being presented at Siggraph 2017.

Other papers cover everything from new methods for motion-capturing clothing to ingenious techniques for capturing light field images using water droplets as lenses and 3D printing with magnetic flakes.

As usual, graphics researcher Ke-Sen Huang has compiled a list of the other papers and demo videos currently online. Check it out via the link below, and nominate your own favourites in the comments section.


Visit the technical papers section of the Siggraph 2017 website

Read Ke-Sen Huang’s invaluable list of Siggraph 2017 papers available on the web
(More detailed than the Siggraph website, and regularly updated as new material is released publicly)