So pretty much the entire pipeline works in a space where intensity is scaled down by a factor of 2^-10, with all inputs (environment maps, light sources) scaled down appropriately to match that space. This includes the average luminance calculation and the exposure calculations. There's no need to ever "restore" back to the physical intensity range, since you get the correct final results by keeping the scene radiance and exposure values in the same 2^-10 space.
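To illustrate why no restore step is needed, here's a minimal sketch (the function and key value are hypothetical, not from the actual code) showing that the scale factor cancels between the radiance and the exposure derived from average luminance:

```python
SCALE = 2.0 ** -10   # the pipeline's intensity scale factor
KEY = 0.18           # hypothetical auto-exposure "key" value

def exposed(radiance, avg_luminance):
    # Exposure is proportional to 1 / avg_luminance; radiance and
    # avg_luminance only need to share the same space for this to be correct.
    return radiance * (KEY / avg_luminance)

scene = [100.0, 2000.0, 50000.0]          # toy physical-space radiance values
avg = sum(scene) / len(scene)

physical = [exposed(r, avg) for r in scene]
scaled = [exposed(r * SCALE, avg * SCALE) for r in scene]

# The 2^-10 factor cancels between radiance and 1/avgLuminance, so the
# final exposed values match the physical-space results.
```

Since the factor divides out, restoring the luminance before the exposure calculation would actually double-apply the scale, which matches the "everything becomes too dark" symptom.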

Does that make sense? Sorry that the code is a bit confusing when it comes to that scale factor, I probably should have added a few more comments explaining what was going on with that.

-Matt

My question is regarding the average luminance calculation. LuminanceReductionInitialCS takes as input a back buffer with luminance scaled down by 2^-10, and operates on those values without restoring the actual luminance. But restoring gives wrong results (everything becomes too dark). What's the logic behind that? Thanks.

I think that writing a path tracer gives you a better perspective on BRDFs and the rendering equation, since you take a more unified approach to solving it. In real-time we end up having to write many specialized paths for things like point lights, area lights, environment specular, etc., since we have such strict limitations on what we can do in a single frame. Meanwhile, in a path tracer you're free to do arbitrary queries on your scene via ray tracing, which lets you focus on integrating to solve the rendering equation. So instead of using approximation X for one light source and technique Y for handling your environment specular, you just use ray tracing to get radiance and use the same BRDF for everything. Having that flexibility and simplicity also makes it much easier (IMO) to explore things like refraction and volumetric scattering.
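As a rough illustration of that unified approach, here's a minimal sketch (all names hypothetical) of a Lambertian estimator where the same radiance query and BRDF handle any kind of lighting:

```python
import math
import random

def sample_cosine_hemisphere(rng):
    # Cosine-weighted direction about the +Z normal; pdf = cos(theta) / pi.
    u1, u2 = rng.random(), rng.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

def shade(albedo, trace_radiance, rng, num_samples=1024):
    # Same code path whether trace_radiance hits an emitter, the environment,
    # or bounced lighting: query radiance along wi and weight by the BRDF.
    total = 0.0
    for _ in range(num_samples):
        wi = sample_cosine_hemisphere(rng)
        li = trace_radiance(wi)
        pdf = wi[2] / math.pi               # cos(theta) / pi
        total += (albedo / math.pi) * li * wi[2] / pdf
    return total / num_samples

# Toy "scene": a constant environment of radiance 1.0 in every direction.
rng = random.Random(42)
result = shade(albedo=0.5, trace_radiance=lambda wi: 1.0, rng=rng)
# For constant incoming radiance L, outgoing radiance is exactly albedo * L.
```

Swapping `trace_radiance` for a real ray-traced query is all it takes to support arbitrary lights, which is the flexibility described above.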

This is all my own opinion based on my personal experiences, so your mileage may vary. If nothing else, knowing how a path tracer works will give you valuable knowledge for understanding how offline renderers work, or for writing an offline baking tool that you use for a real-time engine. Monte Carlo integration (which is used heavily in path tracing) is also an invaluable tool to have on your belt for almost any rendering scenario, both real-time and offline.
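Since Monte Carlo integration comes up here, a tiny generic example (not from the post) may help: average f(x)/p(x) over samples drawn from a density p, which converges to the integral of f.

```python
import random

def mc_integrate(f, rng, num_samples=200_000):
    # With p uniform on [0, 1), the estimator of the integral of f over [0, 1]
    # reduces to the plain mean of f at uniform random points.
    return sum(f(rng.random()) for _ in range(num_samples)) / num_samples

rng = random.Random(1)
# Estimate the integral of x^2 over [0, 1], whose exact value is 1/3.
estimate = mc_integrate(lambda x: x * x, rng)
```

The same recipe, with importance-sampled densities instead of a uniform one, is what drives the path-tracing estimators discussed above.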

Sorry that I took so long to reply to your question. The answer here is "yes", since samples at the extreme edges of the hemisphere can potentially result in a negative contribution when projected onto the basis vectors. The HL2 basis is exactly like spherical harmonics in this regard, except that the way the basis vectors are oriented results in ideal projections for a hemisphere, since each basis vector has the same projection onto the local Z axis (normal) of the baking point. In fact, HL2 is pretty much exactly the same as L1 SH, except that it has different weighting and the basis vectors are rotated (in SH the linear cosine lobes are aligned with the major X/Y/Z axes). This is actually noted in Habel's paper from 2010 where they introduce the H-basis: https://www.cg.tuwien.ac.at/research/publications/2010/Habel-2010-EIN/Habel-2010-EIN-paper.pdf (see section 2.1)
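A quick numerical check of both points, using the standard HL2 (radiosity normal mapping) basis vectors in tangent space: all three share the same Z component, and a grazing sample direction projects negatively onto some of them.

```python
import math

# The three HL2 basis vectors in tangent space (Z = surface normal).
SQ6, SQ2, SQ3 = math.sqrt(6.0), math.sqrt(2.0), math.sqrt(3.0)
HL2_BASIS = [
    (-1.0 / SQ6, -1.0 / SQ2, 1.0 / SQ3),
    (-1.0 / SQ6,  1.0 / SQ2, 1.0 / SQ3),
    (math.sqrt(2.0 / 3.0), 0.0, 1.0 / SQ3),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# A sample direction at the extreme edge of the hemisphere (grazing, Z = 0):
grazing = (1.0, 0.0, 0.0)
weights = [dot(grazing, b) for b in HL2_BASIS]
# Its projection onto the first two basis vectors is negative, so a sample
# from that direction contributes negatively to those coefficients.
```

The equal Z components (all 1/sqrt(3)) are what make the projections symmetric about the normal, matching the "ideal projections for a hemisphere" point above.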
