Tuesday, November 20th, 2018 Posted by Jim Thacker

Videos: see Siggraph Asia 2018’s best technical papers


Over the past 10 years, Siggraph Asia has become one of the best places to glimpse the cutting edge of computer graphics, in part thanks to its ever-expanding technical papers program.

This year’s conference, due to be held in Tokyo from 4-7 December 2018, features over 100 papers, chosen from 353 submissions, and spanning both pure research and practical advances in software and hardware.

ACM Siggraph has just released a video previewing some of the program’s highlights (above), but as the show approaches, more and more of the original material is being released online.

Below, you can find our own pick of the 10 best papers from Siggraph Asia 2018, chosen on the not-entirely-scientific grounds of technical innovation, practical applications, and how cool the demo videos look.

The resulting run-down covers everything from advances in virtual reality, material capture and 3D printing to new ways to train – or to melt – your CG dragon.

Simulation


GPU Optimization of Material Point Methods
Ming Gao, Xinlei Wang, Kui Wu, Andre Pradhana, Eftychios Sifakis, Cem Yuksel, Chenfanfu Jiang

The Material Point Method is an effective way of simulating complex materials. It’s also one that should lend itself to the kind of parallel processing possible on a GPU. It just isn’t yet clear what the best way to do that is.

In this paper, researchers from the Universities of Pennsylvania and Utah assess alternative methods for performing MPM calculations on the GPU, squeezing out an order-of-magnitude increase in performance.

Their own explicit and fully implicit solvers come equipped with a Moving Least Squares MPM heat solver, also making it possible to simulate thermomechanical effects on elastoplasticity.

The result? A simulation system that can reduce a 4.2-million-particle elastoplastic representation of the Stanford Dragon to molten goop in just 10.5 seconds per frame when run on an Nvidia Quadro P6000 GPU.
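The central GPU bottleneck the paper tackles is the particle-to-grid transfer, where many particles scatter their mass and momentum onto shared grid nodes. As a hedged illustration only — not the authors’ actual kernels or weighting scheme — here is a minimal 1D transfer in Python; NumPy’s `np.add.at` performs the same unbuffered scatter-add that a GPU MPM kernel must make safe with atomics or per-block binning:

```python
import numpy as np

def particle_to_grid(positions, masses, n_cells, dx):
    """Scatter particle mass to grid nodes with linear weights.

    np.add.at is an unbuffered scatter-add: when several particles
    target the same node, every contribution is accumulated -- the
    pattern a GPU kernel must reproduce with atomics or binning."""
    grid_mass = np.zeros(n_cells + 1)
    base = np.floor(positions / dx).astype(int)   # left-hand grid node
    frac = positions / dx - base                  # offset within the cell
    np.add.at(grid_mass, base,     masses * (1.0 - frac))
    np.add.at(grid_mass, base + 1, masses * frac)
    return grid_mass

# toy particles: the transfer conserves total mass exactly
pos = np.array([0.25, 0.4, 1.3])
m = np.array([1.0, 2.0, 0.5])
gm = particle_to_grid(pos, m, n_cells=4, dx=1.0)
```

Real MPM also transfers momentum and stress, and the paper’s contribution is precisely in how this scatter is scheduled on the GPU, but the write-conflict pattern is the same.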



Hybrid Grains: Adaptive Coupling of Discrete and Continuum Simulations of Granular Media
Yonghao Yue, Breannan Smith, Peter Yichen Chen, Maytee Chantharayukhonthorn, Ken Kamrin, Eitan Grinspun

Discrete element methods reproduce the behaviour of granular materials accurately, but don’t scale well to large simulations. Continuum methods scale well, but don’t always look realistic.

This new hybrid approach attempts to combine the strengths of both techniques: in the authors’ words, partitioning a simulation domain into “continuum regions where safe and discrete regions where necessary”.

The resulting sims calculate much faster than purely discrete methods, but replicate complex real-world effects like clogging and bouncing as simulated sand pours from silos, or tyres churn through CG gravel.

But as always, it’s the weirder demos that capture the imagination – in this case, the bucket of a mechanical excavator scooping up gumballs, and ceramic bunnies drilling their way down into tanks full of beads.
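The “continuum where safe, discrete where necessary” partition can be sketched in a few lines. The velocity-gradient threshold below is an invented stand-in for the paper’s actual criterion for deciding where the continuum model breaks down:

```python
import numpy as np

def partition_domain(velocity, threshold=0.5):
    """Mark cells as discrete (True) where the local velocity
    gradient is large -- a toy stand-in for the paper's test of
    where a continuum description is no longer 'safe'."""
    grad = np.abs(np.gradient(velocity))
    return grad > threshold

# smooth flow stays continuum; the sharp front goes discrete
v = np.array([0.0, 0.1, 0.2, 1.5, 1.6, 1.7])
discrete = partition_domain(v)
```

In the real method the two solvers are then coupled across the boundary each step, and the partition is re-evaluated as the material moves.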

Modelling and UV unfolding


Learning a Shared Shape Space for Multimodal Garment Design
Tuanfeng Y. Wang, Duygu Ceylan, Jovan Popovic, Niloy J. Mitra

Traditional CG clothing design workflows use a trial-and-error approach, where a 3D garment is draped over a human avatar, then the corresponding 2D pattern parts are iteratively adjusted to refine the way it hangs.

This method cuts out the guesswork, allowing users simply to sketch the results they want and have the software generate both the 3D garment and the 2D pattern parts – and even adjust the avatar to match.

The result is CG clothing with fold patterns that match those of real-world reference images, and which retargets easily to characters with different body shapes.

If you’re interested in the subject, also check out our round-up of papers from Siggraph 2018, which includes an alternative method in which users sketch fold patterns directly on to the surface of a 3D model.



OptCuts: Joint Optimization of Surface Cuts and Parameterization
Minchen Li, Danny M. Kaufman, Vladimir G. Kim, Justin Solomon, Alla Sheffer

UV unfolding techniques tend to trade continuity against accuracy, with attempts to minimise the length of seams between UV islands resulting in high continuity, but at the cost of increased texture distortion.

The OptCuts algorithm, developed by a team at UBC, MIT and Adobe Research, seeks to optimise both properties, minimising UV seam lengths while staying beneath a specified maximum level of distortion.

Users can also influence the outcome by painting directly onto the 3D model to include or exclude parts of the surface from seam formation.

The results are demonstrated on an entire menagerie of organic models, including octopi, armadillos and camels, while the paper itself includes comparative analysis of commercial tools like ZBrush and Unwrella.
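The trade-off can be stated as a constrained optimisation: minimise total seam length subject to a distortion bound. The toy sketch below, with invented candidate numbers, simply picks among precomputed parameterisations; the real algorithm alternates cut and parameterisation updates rather than enumerating candidates:

```python
def best_cut(candidates, max_distortion):
    """Shortest total seam length subject to a distortion bound --
    the constrained form of the trade-off OptCuts navigates."""
    feasible = [c for c in candidates if c["distortion"] <= max_distortion]
    return min(feasible, key=lambda c: c["seam_length"]) if feasible else None

# invented candidates: more seams = less distortion, longer seams
cuts = [
    {"name": "one long seam", "seam_length": 3.0, "distortion": 4.30},
    {"name": "two seams",     "seam_length": 4.2, "distortion": 4.10},
    {"name": "many seams",    "seam_length": 7.5, "distortion": 4.02},
]
choice = best_cut(cuts, max_distortion=4.15)  # rules out the long seam
```

Tightening `max_distortion` forces longer cuts, which is exactly the dial the paper exposes to users.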

Materials


An Adaptive Parameterization for Efficient Material Acquisition and Rendering
Jonathan Dupuy, Wenzel Jakob

Bidirectional Reflectance Distribution Functions describe how materials interact with light, replicating the extent to which microscopic irregularities scatter light rays striking their surfaces at different angles.

But despite the ubiquity of BRDFs in physically based rendering, obtaining real-world data is a complex, time-consuming process, involving scanning materials from multiple angles at high resolution.

Dupuy and Jakob’s method takes advantage of the fact that interesting behaviour is usually confined to a small part of the 4D domain scanned to reduce the process to a “brief 1D or 2D measurement of reflectivity”.

The technique can be performed via a standard goniophotometer, and generates high-quality results with a range of materials – as can be seen by the gorgeous image of rendered Fabergé eggs in the video above.
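The key observation — that most of the measured domain is featureless — suggests warping measurement samples toward the region of rapid variation around the specular peak. The Cauchy-style warp below is an invented stand-in for the paper’s data-driven parameterisation, shown only to illustrate the idea of non-uniform sample placement:

```python
import numpy as np

def adaptive_samples(peak, width, n):
    """Place n sample locations along a 1D reflectance parameter by
    inverse-transform sampling of a peaked proxy density, so samples
    cluster where the material varies fastest (a toy warp, not the
    paper's measured parameterisation)."""
    u = (np.arange(n) + 0.5) / n
    # tan() warp: dense near `peak`, increasingly sparse in the tails
    return peak + width * np.tan(np.pi * (u - 0.5))

samples = adaptive_samples(peak=0.0, width=0.1, n=101)
```

A uniform grid would waste most of its budget on the flat tails; the warp spends it where a glossy highlight actually lives.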



Position-Free Monte Carlo Simulation for Arbitrary Layered BSDFs
Yu Guo, Miloš Hašan, Shuang Zhao

Many real-world materials are layered, from car paint to human skin. Variations in the properties of each layer, and of the interfaces between them, give them a range of complex optical properties.

As a result, simulating light-layer interactions is challenging, with existing methods either proving computationally very expensive, or introducing approximations that limit their accuracy.

This paper, from researchers at UC Irvine and Autodesk, introduces an efficient unbiased layered BSDF model based on Monte Carlo simulation, whose only assumption is that the layers actually exist.

The accompanying video demonstrates the results with a luxurious – and suitably computationally testing – range of rendered materials: jade, iridescent glass, gabardine, and cloth of gold.
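The “position-free” idea can be illustrated with a toy 1D random walk: inside laterally homogeneous layers, only depth and travel direction need tracking, never lateral position. The slab parameters below are invented, and this sketch estimates only scalar transmittance rather than the full BSDF lobes the paper handles:

```python
import math
import random

def slab_transmittance(sigma_t, albedo, depth, n_samples=20000, seed=7):
    """Toy position-free Monte Carlo walk through one scattering,
    absorbing slab: track only depth z and an up/down direction.
    Lateral position is irrelevant in a laterally homogeneous
    layer -- the observation behind position-free simulation."""
    rng = random.Random(seed)
    transmitted = 0.0
    for _ in range(n_samples):
        z, direction, weight = 0.0, 1.0, 1.0
        while True:
            # sample an exponential free-flight distance
            z += direction * -math.log(1.0 - rng.random()) / sigma_t
            if z < 0.0:       # escaped back out of the top: reflected
                break
            if z > depth:     # escaped out of the bottom: transmitted
                transmitted += weight
                break
            weight *= albedo                      # absorption at the scatter
            direction = rng.choice((-1.0, 1.0))   # isotropic in 1D
    return transmitted / n_samples

T = slab_transmittance(sigma_t=1.0, albedo=0.8, depth=1.0)
```

The real method replaces the ±1 directions with sampled BSDF lobes at each interface, but keeps exactly this structure, which is what makes the estimator unbiased.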

Animation


Aerobatics Control of Flying Creatures via Self-Regulated Learning
Jungdam Won, Jungnam Park, Jehee Lee

AI techniques like Deep Reinforcement Learning (DRL) generate control systems for animated characters that mimic the look of reference footage, but enable the actor to respond adaptively to new environments.

While AI approaches already generate good results with creatures that walk or run, recreating the movements of those that can fly takes the problem – quite literally – to a new dimension.

This paper, from a team at Seoul National University, introduces a new concept, Self-Regulated Learning, that can be used alongside DRL to allow an AI actor to take control of its own learning.

The demo starts at Flappy Bird level, with an animated dragon flying over or under bars of various heights, but soon gets crazier, with the CG creature eventually completing an entire loop-the-loop assault course.

Virtual reality


A System for Acquiring, Compressing, and Rendering Panoramic Light Field Stills for Virtual Reality
Ryan Styles Overbeck, Daniel Erickson, Daniel Evangelakos, Matt Pharr, Paul Debevec

Light field data – measurements of the light travelling in every direction through every point in a space – makes it possible to display photographic environments realistically in VR, with accurate motion parallax.

This paper from a team at Google covers a range of technological advances, including two new rigs for capturing panoramic light fields: one using 16 GoPro cameras, and one requiring only two standard DSLRs.

The resulting data can be compressed using a modified version of Google’s VP9 video codec, and reconstructed in real time for display on virtual reality hardware.

The results can be seen in a free app available via Steam, in the shape of stills of the Gamble House and the cockpit of the Space Shuttle Discovery that can be navigated in VR using standard consumer headsets.

3D printing


3D Fabrication with Universal Building Blocks and Pyramidal Shells
Xuelin Chen, Honghua Li, Chi-Wing Fu, Hao (Richard) Zhang, Daniel Cohen-Or, Baoquan Chen

3D printing large solid structures can be a tricky process, requiring either a huge quantity of material to fill the internal volume, or a bespoke system of internal supports.

This new approach standardises those internal structures, using a set of snap-together cubic and pyramidal blocks to form a customisable, lightweight core that supports an external 3D printed shell.

The authors formulate an algorithm that jointly minimises the volume of that shell, the amount of wasted material required to print it, and the number of internal pyramidal pieces necessary.

The accompanying video is weirdly compelling, with timelapses showing 3D printed structures ranging from sofas to sphinxes being assembled from what appear to be nothing more than kids’ plastic building bricks.
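A toy version of the core packing problem: fill a voxelised interior with standard cubic blocks, largest first. This greedy sketch ignores the shell and the pyramidal pieces that the paper’s algorithm optimises jointly with the block layout:

```python
import numpy as np

def fill_with_blocks(interior, sizes=(2, 1)):
    """Greedily pack a boolean interior voxel grid with cubic blocks,
    largest size first -- a toy stand-in for building a reusable core
    from standard snap-together pieces."""
    free = interior.copy()
    placed = []
    for s in sizes:
        nx, ny, nz = free.shape
        for x in range(nx - s + 1):
            for y in range(ny - s + 1):
                for z in range(nz - s + 1):
                    if free[x:x+s, y:y+s, z:z+s].all():
                        free[x:x+s, y:y+s, z:z+s] = False
                        placed.append((s, x, y, z))
    return placed, free

# a solid 3x3x3 interior packs as one 2-block plus nineteen unit blocks
interior = np.ones((3, 3, 3), dtype=bool)
blocks, leftover = fill_with_blocks(interior)
```

Greedy packing leaves awkward gaps on real shapes, which is why the paper formulates block count, shell volume and wasted print material as a single joint objective instead.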

Image editing


Two-stage Sketch Colorization
Lvmin Zhang, Chengze Li, Tien-Tsin Wong, Yi Ji, Chunping Liu

Tools for colouring 2D sketches automatically have been in vogue recently, from the new Colorize Mask feature in open-source digital painting software Krita to online services like PaintsChainer.

This paper, from a team at The Chinese University of Hong Kong and Soochow University, uses a two-stage AI-based approach, first roughly ‘splashing’ colours over the line art, then cleaning up the results.

Dividing the task into two creates clearer goals when training the AI, minimising artefacts like simulated paint overrunning the line boundaries, or the muddiness of the colours that some systems generate.

The results – which can be guided by the user by dropping point colour swatches onto the sketch – look great: both crisp and delicate, preserving the subtleties of the original black-and-white sketch.

More research online
That’s it for this round-up – although only a small sample of the research being presented at Siggraph Asia.

Other papers that didn’t quite make it into the article include new ways to 3D scan holographic film, detail procedurally generated CG buildings and even simulate how to put on 3D clothing.

If you want to see more, graphics researcher Ke-Sen Huang has compiled a list of papers currently online. Check it out via the link below, and feel free to nominate your own favourites in the comments section.


Visit the technical papers section of the Siggraph Asia 2018 website

Read Ke-Sen Huang’s invaluable list of Siggraph Asia 2018 papers available on the web
(More detailed than the Siggraph website, and regularly updated as new material is released publicly)