Thursday, June 14th, 2018 Posted by Jim Thacker

Videos: the best of Siggraph 2018’s technical papers


If you follow the Siggraph YouTube channel, you may have seen the trailer above previewing some of the papers due to be presented at Siggraph 2018, which takes place in Vancouver from 12-16 August 2018.

As usual, it provides a good overview of the conference’s key themes, but for the detail, you need to turn to the full videos accompanying the papers – which are steadily appearing online as the show approaches.

Below, we’ve picked out our own favourites, including some not featured in Siggraph’s own round-up, covering everything from fluid simulation and material synthesis to modelling soft toys.

The resulting 10 videos showcase some of the most innovative, the most visually compelling – and sometimes, just the plain weirdest – research being done in computer graphics anywhere in the world.

Simulation



Animating Fluid Sediment Mixture in Particle-Laden Flows
Ming Gao, Andre Pradhana, Xuchen Han, Qi Guo, Grant Kot, Eftychios Sifakis, Chenfanfu Jiang

Advances in the simulation of physical phenomena make for compelling demos. As with 2017’s papers, some of this year’s most eye-catching dealt with the interaction between different real-world materials.

This paper, from a team of US-based researchers, reconstructs the behaviour of mixtures of fluids and granular solids, using a mixed explicit and semi-implicit Material Point Method.

Using MPM grids to resolve the two-way exchange of momentum between fluid and sediment, the method mimics everything from debris being pushed down a water pipe to the motion of wind-blown sand dunes.
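For readers unfamiliar with the Material Point Method mentioned above, the snippet below is a minimal, generic sketch of its particle-to-grid momentum transfer – a 1D toy with linear hat-function weights, not the authors' far more involved mixed explicit/semi-implicit fluid-sediment solver.

# Minimal sketch of the particle-to-grid (P2G) transfer at the heart of
# any Material Point Method. Illustrative only: 1D grid, linear weights.
import numpy as np

def p2g(positions, velocities, masses, num_cells, dx):
    """Scatter particle mass and momentum onto grid nodes."""
    grid_mass = np.zeros(num_cells + 1)
    grid_momentum = np.zeros(num_cells + 1)
    for x, v, m in zip(positions, velocities, masses):
        i = int(x / dx)                      # left grid node of the cell
        frac = x / dx - i                    # position within the cell
        for node, w in ((i, 1.0 - frac), (i + 1, frac)):  # linear weights
            grid_mass[node] += w * m
            grid_momentum[node] += w * m * v
    # Grid velocities, on which momentum exchange is resolved before the
    # result is transferred back to the particles (G2P).
    vel = np.divide(grid_momentum, grid_mass,
                    out=np.zeros_like(grid_momentum), where=grid_mass > 0)
    return grid_mass, vel

# Example: three particles on a four-cell grid of spacing 0.25
mass, vel = p2g(np.array([0.1, 0.4, 0.6]),
                np.array([1.0, -0.5, 0.2]),
                np.array([1.0, 1.0, 1.0]), num_cells=4, dx=0.25)
print(mass, vel)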



A Multi-Scale Model for Simulating Liquid-Fabric Interactions
Yun (Raymond) Fei, Christopher Batty, Eitan Grinspun, Changxi Zheng

If sand isn’t your thing, this paper, from researchers at Columbia University and the University of Waterloo, explores what happens when liquids interact with fabrics.

The model takes account of the “porous microstructure” created between threads in the weave – scenarios include yarn-based, mesh-based and ‘fuzzy’ cloth – to reflect the directional properties of real fabrics.

It simulates a wide range of effects, including buoyancy, nonlinear drag and capillary pressure, with the virtual water variously pooling on, soaking into, dripping from, or completely submerging the cloth.

Modeling



Shape from Metric
Albert Chern, Felix Knöppel, Ulrich Pinkall, Peter Schröder

With its elegant graphics and minimal piano score in place of a voiceover, this demo – created by a team from TU Berlin and Caltech with the support of SideFX – is almost a CG short in its own right.

The underlying algorithm reconstructs the 3D shape of arbitrary triangle meshes from their intrinsic metric – their edge lengths – alone, mimicking the deformation of thin materials with high membrane stiffness.

In the video, that means anything from deflating footballs and crushed drink cans to Escher-like abstract forms, culminating in a showstopping demo of the Stanford bunny turning itself inside out.



FoldSketch: Enriching Garments with Physically Reproducible Folds
Minchen Li, Alla Sheffer, Eitan Grinspun, Nicholas Vining

In traditional clothing design software, artists create folds and pleats by meticulously shaping 2D pattern parts, then adjusting the way they drape in 3D. In FoldSketch, you simply draw them on.

Users sketch onto the surface of a 3D model where they want folds to form, with the system automatically generating the new geometry and reshaping the 2D pattern parts via an alternating 2D-3D algorithm.

In the video above, the system can be seen in action on a range of types of clothes, mimicking materials ranging from silk to leather, and the corresponding OBJ files can be downloaded from the authors’ website.

Materials



Gaussian Material Synthesis
Károly Zsolnai-Fehér, Peter Wonka, Michael Wimmer

This paper, from researchers at TU Wien and Saudi Arabia’s KAUST, aims to replace conventional methods for designing and rendering materials with a faster, more intuitive AI-driven workflow.

Rather than creating materials by repetitively tweaking parameters, users simply assign scores to readymade examples from a gallery, with the AI generating new materials matching their preferences.

The results can be adjusted manually – again, without having to edit individual material parameters – with an AI-powered ‘neural renderer’ generating real-time previews mimicking the look of GI renders.

The accompanying source code is available under an open-source MIT licence, so there’s a good chance of seeing tools based on the system appearing in mainstream graphics software.
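As the title suggests, the system learns those preferences with Gaussian Process Regression. The snippet below is a rough, generic sketch of that idea using scikit-learn – not the authors’ released code – and the material parameter vectors, stand-in scores and 0-10 scale are all placeholders.

# Sketch: fit a Gaussian Process to user scores over material parameter
# vectors, then keep sampled candidates the model predicts will be liked.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Each gallery material is a parameter vector (albedo, roughness, etc.)
# that the user has scored from 0 (dislike) to 10 (like).
gallery = rng.uniform(size=(40, 5))
scores = 10.0 * np.exp(-np.sum((gallery - 0.3) ** 2, axis=1))  # stand-in scores

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(gallery, scores)

# Sample new candidate materials and keep those predicted to score highly.
candidates = rng.uniform(size=(1000, 5))
predicted = gpr.predict(candidates)
recommended = candidates[predicted > 8.0]
print(f"{len(recommended)} candidate materials predicted to score above 8")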

Animation



DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills
Xue Bin Peng, Pieter Abbeel, Sergey Levine, Michiel van de Panne

DeepMimic also uses machine learning to speed up CG workflows: in this case, training 3D characters to replicate real-world motion-capture clips or keyframe animations provided as reference.

Once trained, the characters can react intelligently to new environments, or perform user-specified tasks like kicking a target or throwing a ball at a goal.

However, the real fun lies in seeing the hapless AI characters being subjected to ever-more bizarre conditions, vainly attempting to perform spin kicks under moon gravity, or while being pelted with boxes.
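Under the hood, training of this kind combines a motion-imitation reward with a task reward, weighted against each other. Below is a toy sketch of that structure; the specific weights and error helpers are made-up placeholders, not the paper’s values.

# Toy sketch of a DeepMimic-style reward: an imitation term that rewards
# matching the reference motion, plus a task term (here, distance of a
# thrown ball to its target). Weights and helpers are illustrative.
import numpy as np

def imitation_reward(pose_error, velocity_error):
    # Exponentials turn tracking errors into rewards in (0, 1].
    return 0.7 * np.exp(-2.0 * pose_error) + 0.3 * np.exp(-0.1 * velocity_error)

def task_reward(distance_to_target):
    return np.exp(-distance_to_target ** 2)

def reward(pose_error, velocity_error, distance_to_target,
           w_imitate=0.7, w_task=0.3):
    """Weighted sum of motion-imitation and task objectives."""
    return (w_imitate * imitation_reward(pose_error, velocity_error)
            + w_task * task_reward(distance_to_target))

print(reward(pose_error=0.1, velocity_error=1.0, distance_to_target=0.5))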



Mode-Adaptive Neural Networks for Quadruped Motion Control
He Zhang, Sebastian Starke, Taku Komura, Jun Saito

Using machine learning to train a bipedal character is hard. But an extra pair of legs adds a whole new degree of complexity, since quadrupeds don’t just walk or run, but can also pace, trot and canter.

This system, seen here implemented inside Unity, generates a range of realistic quadruped motion from unstructured motion-capture data, without the need to go through and label gait types manually.

Rather than optimising a single fixed set of weights, its novel neural network architecture uses a gating network to blend the weights of several expert networks on the fly, based on the current state of the character.
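The snippet below is a toy numpy sketch of that blending idea: a gating network maps the character state to softmax coefficients that mix the weight matrices of several expert networks into the network used for that frame. The dimensions and single-layer experts are placeholders, not the paper’s architecture.

# Toy sketch of state-dependent expert weight blending.
import numpy as np

rng = np.random.default_rng(1)
STATE_DIM, OUT_DIM, NUM_EXPERTS = 16, 8, 4

gating_W = rng.normal(size=(NUM_EXPERTS, STATE_DIM)) * 0.1
expert_W = rng.normal(size=(NUM_EXPERTS, OUT_DIM, STATE_DIM)) * 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict(state):
    # Blend coefficients depend on the current character state (e.g.
    # velocity, foot contacts), so the effective network changes from
    # frame to frame and gait to gait.
    alpha = softmax(gating_W @ state)                  # (NUM_EXPERTS,)
    blended_W = np.tensordot(alpha, expert_W, axes=1)  # (OUT_DIM, STATE_DIM)
    return blended_W @ state                           # next-pose prediction

print(predict(rng.normal(size=STATE_DIM)).shape)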

Capture technologies



Instant 3D Photography
Peter Hedman, Johannes Kopf

Bringing environment scanning to consumer hardware, Instant 3D Photography reconstructs 3D panoramas from sequences of colour and depth image pairs of the type captured on current dual-lens phone cameras.

The method estimates camera poses to generate adjustment maps that can be used to deform and align the depth maps, helping to correct errors caused by using source images captured from a single vantage point.

The algorithm can process an image per second, making it possible to generate a panorama that can be viewed in a VR headset, complete with proper motion parallax, in around a minute.

Just plain amazing



Toward Wave-based Sound Synthesis for Computer Animation
Jui-Hsien Wang, Ante Qu, Timothy R. Langlois, Doug L. James

Foley effects are fine, but what if your animation software could just generate the sounds for you? This paper, from a team at Stanford University and Adobe Research, may be a step towards that goal.

Supporting a variety of physics simulation models, their integrated approach generates sound effects from 3D animations, including water dripping into a vase, falling Lego bricks and a cymbal being struck.

The team likens work in sound synthesis to the early days of 3D rendering: simply proof that photorealistic – or in this case, audiorealistic – results are possible, rather than production-ready techniques.

The parallels with 3D rendering don’t end there: in the paper, the team proposes the concept of ‘acoustic shaders’ for the different, material-specific types of sound sources implemented in the solver.



Shape Representation by Zippables
Christian Schüller, Roi Poranne, Olga Sorkine-Hornung

I didn’t know that I wanted a cushion shaped like the Stanford bunny until I watched the demo video for Shape Representation by Zippables: one of Siggraph 2018’s most offbeat research projects.

The paper, from a team at ETH Zurich, proposes a novel alternative to 3D printing models, instead generating a single 2D ribbon of material that can be zipped or glued together to create the 3D form.

Understandably, the approach lends itself well to soft objects like pillows and plush toys, but the team says that it can be extended to papercraft – and presumably even to more durable materials like metal sheeting.

More research online
That’s it for this round-up – although it covers only a fraction of the research being presented at Siggraph 2018.

Other papers range from volumetric muscle simulation to animated facial avatars for virtual reality, by way of the brilliantly titled Fast Winding Numbers for Soups and Clouds*.

As usual, graphics researcher Ke-Sen Huang has compiled a list of the other papers currently online. Check it out via the link below, and feel free to nominate your own favourites in the comments.

*It’s actually about geometry processing, not catering. Or meteorology.


Visit the technical papers section of the Siggraph 2018 website

Read Ke-Sen Huang’s invaluable list of Siggraph 2018 papers available on the web
(More detailed than the Siggraph website, and regularly updated as new material is released publicly)