SIGGRAPH Follow-Up: 2015 Edition

SIGGRAPH 2015 wrapped up just a few days ago, and it really was fantastic this year! There was tons of great content, and I got a chance to meet up with some of the best graphics programmers in the industry. I wanted to thank everyone who came to my talk at Advances in Real-Time Rendering, as well as everyone who came to our talk at the Physically Based Shading course. It’s always awesome to see so many people interested in the latest rendering technology, and the other presenters really knocked it out of the park in both courses. I’d also like to thank Natalya Tatarchuk for organizing the Advances course year after year. It’s really amazing when you look back on the 10 years’ worth of high-quality material that she’s assembled, and this year in particular was full of some really inspiring presentations. And of course I should thank Stephen Hill and Stephen McAuley as well, who also do a phenomenal job of cultivating top-notch material from both games and film.

For my Advances talk, you can find the slides on the company website. They should also be up on the Advances course site in the near future. If you haven’t seen it yet, the talk is primarily focused on the antialiasing of The Order, with a section about shadows at the end. There are also some bonus slides about the decal system that I made for The Order, which uses deferred techniques to accumulate a few special-case decal types onto our forward-rendered geometry. For the antialiasing, the techniques presented aren’t really anything I’d consider to be particularly novel or groundbreaking. However, I wanted to give an overview of the problem space as well as our particular approach for handling aliasing, and I hope that came across in the presentation. One thing I really wanted to touch on more is that I firmly believe we need to go much deeper if we really want to fix aliasing in games. Things like temporal AA and SMAA are fantastic in that they really do make things look a whole lot better, but they’re still fundamentally limited in several ways. On the other hand, just brute-forcing the problem by increasing sampling rates isn’t a scalable solution in the long term. In some cases we’re also undersampling so badly that a 2x or 4x increase in our sampling rates isn’t going to come close to fixing the issue. What I’d really like to see is more work on smarter sampling patterns (no more screen-space uniform grids!), and also on how to properly prefilter content so that we’re not always undersampling. This was actually something brought up by Marco Salvi in his excellent AA talk that was part of the Open Problems in Real-Time Rendering course, which I was very happy to see. It was also really inspiring to see Alex Evans describe how he strived for filterable scene representations in his talk from the Advances course.
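As a concrete illustration of moving beyond screen-space uniform grids, here’s a minimal sketch (my own, not from the talk) of generating sub-pixel jitter offsets from the low-discrepancy (2, 3) Halton sequence, a common choice for temporal jittering since consecutive points are well distributed over the pixel:

```python
import math

def halton(index, base):
    """Radical inverse of `index` in the given base (the Halton sequence)."""
    result = 0.0
    f = 1.0 / base
    i = index
    while i > 0:
        result += f * (i % base)
        i //= base
        f /= base
    return result

def jitter_offsets(count):
    """Sub-pixel offsets in [-0.5, 0.5)^2 from the (2, 3) Halton sequence.

    These would typically be applied to the projection matrix each frame
    to vary where within the pixel the scene is sampled.
    """
    return [(halton(i + 1, 2) - 0.5, halton(i + 1, 3) - 0.5)
            for i in range(count)]
```

In a renderer, the returned offsets would be cycled through per frame (scaled by the pixel size) so a temporal filter can integrate a different sample position each frame.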

In case you missed it, I uploaded a full antialiasing code sample to GitHub to accompany the talk. The code uses my usual sample framework and coding style, which means you can grab it and build it with VS 2013 with no external dependencies. There are also pre-compiled binaries in the releases section, in case you would just like to view the app or play around with the shaders. The sample is essentially a successor to the MSAAFilter sample that I put out nearly 3 years ago, which accompanied a blog post where I shared some of my research on using higher-order filtering with MSAA resolves. The AA work in The Order is in many ways the natural conclusion of that work, and the new sample reflects that. If you load up the sample, you’ll notice that the default scene is a really terrible case for geometric and specular aliasing: it’s a rather high-polygon mesh, with lighting from both a directional light as well as from the environment. I like to evaluate flickering reduction by setting “Model Rotation Speed” to 1.0, which causes the scene to automatically rotate around its Y axis. The default settings are also fairly close to what we shipped with in The Order, although not exactly the same due to some PS4-specific tweaks. The demo also defaults to a 2x jitter pattern, which we didn’t use in The Order. One possible avenue that I never really explored was to experiment with more variation in the MSAA subsample patterns. This is something that you can do on PS4 (as demonstrated in Michal Drobot’s talk about HRAA in Far Cry 4), and you can also do it on recent Nvidia hardware using their proprietary NVAPI. A really simple thing to try would be to implement interleaved sampling, although it could potentially make the resolve shader more expensive.
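The core idea behind a higher-order MSAA resolve is simple: instead of a box average over the subsamples, each subsample is weighted by a wider reconstruction filter evaluated at its offset from the pixel center. Here’s a simplified scalar sketch of that weighting (illustrative only; the actual resolve runs in a shader over MSAA subsample positions, and the filter kernel shown here, a Blackman-Harris window, is just one reasonable choice):

```python
import math

def blackman_harris(x, radius):
    """Blackman-Harris window evaluated at distance x from the filter center."""
    if abs(x) >= radius:
        return 0.0
    # Remap x from [-radius, radius] to [0, 1] for the window function.
    t = (x / radius) * 0.5 + 0.5
    a0, a1, a2, a3 = 0.35875, 0.48829, 0.14128, 0.01168
    return (a0
            - a1 * math.cos(2.0 * math.pi * t)
            + a2 * math.cos(4.0 * math.pi * t)
            - a3 * math.cos(6.0 * math.pi * t))

def filtered_resolve(samples, radius=1.0):
    """Weighted resolve of (offset_x, offset_y, value) subsamples.

    Offsets are relative to the pixel center, in pixel units. A radius > 0.5
    means the filter footprint also covers subsamples of neighboring pixels.
    """
    total = 0.0
    weight_sum = 0.0
    for (ox, oy, value) in samples:
        w = blackman_harris(math.hypot(ox, oy), radius)
        total += w * value
        weight_sum += w
    return total / weight_sum if weight_sum > 0.0 else 0.0
```

With a radius larger than half a pixel, the resolve gathers subsamples from neighboring pixels as well, which is what gives these filters their improved stability under motion compared to a plain box resolve.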

As for the talk that Dave and I gave at Physically Based Shading, I hope that the images spoke for themselves in terms of showing how much better things looked once we made the switch from SH/H-basis to Spherical Gaussians. It was a very late and risky change for the project, but fortunately it paid off for us by substantially improving the overall visual quality. The nice thing is that it’s pretty easy to understand why it looks better once we switched. Previously, we partitioned the lighting response into diffuse and specular. We then took advantage of the response characteristics to store the input for both responses in two separate ways: for diffuse, we used high spatial resolution but low angular resolution (SH lightmaps), while for specular we used low spatial resolution but high angular resolution (sparse cubemap probes). By splitting specular into both low-frequency (high roughness) and high-frequency (low roughness) categories, we were able to use spatially dense sample points for a much broader range of surfaces. These surfaces with rougher specular then benefit from improved visibility/occlusion, which is usually the biggest issue with sparse cubemap probes. This obviously isn’t a new idea; in fact, Halo 3 was doing similar things all the way back in 2008! The main difference of course is that we were able to use SG instead of SH, which gave us more flexibility in how we represented per-texel incoming radiance.
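For reference, a Spherical Gaussian is just a Gaussian-like lobe defined on the sphere. Here’s a minimal sketch of the standard parameterization (amplitude μ, sharpness λ, lobe axis p) along with its closed-form integral over the sphere; this is the common textbook form, not necessarily the exact representation we used:

```python
import math

def sg_eval(amplitude, sharpness, axis, direction):
    """Evaluate an SG lobe: mu * exp(lambda * (dot(axis, v) - 1)).

    Both `axis` and `direction` are assumed to be unit-length 3-vectors.
    """
    cos_theta = sum(a * d for a, d in zip(axis, direction))
    return amplitude * math.exp(sharpness * (cos_theta - 1.0))

def sg_integral(amplitude, sharpness):
    """Integral of an SG over the sphere: 2*pi*(mu/lambda) * (1 - e^(-2*lambda))."""
    return 2.0 * math.pi * (amplitude / sharpness) * (1.0 - math.exp(-2.0 * sharpness))
```

The sharpness parameter controls how tight the lobe is: a large λ gives a narrow, specular-like lobe, while a small λ gives a broad lobe suitable for representing low-frequency radiance.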

SGs can be a really useful tool for all kinds of things in graphics, and I think it would be great if we all added them to our toolbox. To aid with that, Dave Neubelt and Brian Karis are planning on putting out a helpful paper that can hopefully be to SGs what Stupid SH Tricks was to spherical harmonics. Dave and I have also been working on a code sample to release, which lets you switch between various methods for pre-computing both diffuse and specular lighting, as well as compare against a path-traced ground-truth render. I’m hoping to finish this soon, since I’m sure it would be very helpful to have working code examples for the various SG operations.
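As a small taste of those operations, one of the properties that makes SGs so convenient is that the product of two SGs is itself an SG, since the exponents simply add. A sketch of the standard formula:

```python
import math

def sg_product(mu1, lam1, p1, mu2, lam2, p2):
    """Product of two SGs, which is exactly another SG.

    Axes p1 and p2 are unit 3-vectors. Degenerate when the combined axis
    lam1*p1 + lam2*p2 is the zero vector (opposing lobes of equal sharpness).
    """
    combined = [lam1 * a + lam2 * b for a, b in zip(p1, p2)]
    lam_m = math.sqrt(sum(c * c for c in combined))
    axis = [c / lam_m for c in combined]
    mu_m = mu1 * mu2 * math.exp(lam_m - (lam1 + lam2))
    return mu_m, lam_m, axis
```

Operations like this (along with the closed-form integral) are what let you approximate things like the product of incoming radiance with a BRDF lobe analytically, rather than by numerical integration.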


8 thoughts on “SIGGRAPH Follow-Up: 2015 Edition”

  1. Nice presentation. I was looking into spherical radial basis functions before. I’m looking forward to your paper.

  2. I think temporal AA methods are the wrong direction for AA to be heading. There are just too many problems that they cause for them to be worthwhile.
    Often they cause temporal artifacts such as:
    Double images, blurring in motion rather than resolving detail, smearing, image wobble, etc. None of those problems can be inherently solved, because of how the technique works, IMO.
    (There are use cases with almost no noticeable drawbacks to the user though: temporal filtering of undersampled or upsampled buffers, e.g. post-processing effects, particle effects, etc.)

    Pandering to weak hardware and aiming to be as fast as possible limits the quality of any given technique. (And it isn’t very forward-thinking for the future, when GPUs will be faster. At least on PC.)

    Some of them seem completely incapable of actually solving many temporal issues, and ironically have fewer issues with no motion than with motion. (UE4’s temporal AA, for example.)

    Solving some aliasing at the source (mip-maps for textures, CLEAN/LEAN/Toksvig for specular, etc.) is extremely helpful, but those can only do so much on their own and still leave aliasing behind. The remaining information still needs to be resolved in a way that is stable, and that ultimately always comes at a high cost.

    Nvidia’s own SGSSAA is the best example of this. When there aren’t other things interfering (considering it is never natively implemented in games, and has to be forced from outside with compatibility bits that can only cover so many general use cases), it is not only temporally stable, it actually resolves detail close to the ground truth, statically and temporally, without drawbacks (and that’s despite being stuck with a fixed box function for reconstruction).
    In the cases where other things in the individual game are causing problems, creating a hybrid with standard OGSSAA (preferably not using a box function) is unparalleled.

    Texture aliasing – Check
    Moire – Check
    Shader aliasing – Check
    Temporal Aliasing (flickering, crawling,etc) – Check
    Specular Aliasing – Check
    Geometry Aliasing – Check

    There are some games out there that are even more problematic and require some noodling about, e.g. Red Faction 3:
    (1×2 OGSSAA + 2xMSAA + 2xSGSSAA + FXAA at base rendering resolution, before a final 2×2 OGSSAA step. This was running at ~20-30 FPS at the time on a GTX 570.)


    When it comes to the consoles, they are just too inherently weak to be able to push high-quality AA while still pushing the graphical boundaries, in addition to vastly varying game design philosophies. That doesn’t mean you can’t get decent or good quality, but it often comes with drawbacks of its own. It just feels like developing any more demanding, vastly higher-quality techniques (that could be utilized today on PC, or tomorrow on future GPUs) doesn’t matter to anyone, because they’re not being pushed out to the largest lowest-common-denominator userbase: the consoles.

    Keep up the good work though, because while I’m not terribly fond of TAA, your hybrid solution in The Order generally looked fantastic, and far better than just about every game out there on consoles. Ever.

  3. “Some of them seem completely incapable of actually solving many temporal issues and ironically have less issues with no motion than with motion.”

    I forgot to mention another thing about this as well: your own example suffers from this problem somewhat.
    Static: fantastically smooth, very little aliasing.
    In motion: the technique starts to unravel, as all edges of the object start to crawl and flicker.
