I’m currently implementing the SDSM technique from Andrew Lauritzen’s article, with EVSM filtering, using OpenGL 4.5. I’ve read almost everything about ESM, VSM, and the other *SM variants, but I still can’t understand some points. I also looked through your shadow sample and Intel’s (I don’t really know DirectX). So, the questions:

1. When using the positive and negative exponent factors in EVSM, why is the first 42 (this I understand), but the second 5 (this is completely obscure to me)?
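For reference, here is the warp I’m implementing, sketched in C++ rather than GLSL (the [-1, 1] remap and the constants follow the sample; as I understand it, 42 is chosen so that squaring the warp for the second moment, exp(2·42) = exp(84), stays below float32 overflow near exp(88)):

```cpp
#include <cmath>

// Sketch of the EVSM depth warp. depth is assumed to be in [0, 1].
// posExp = 42 keeps exp(2 * posExp) = exp(84) below float32 overflow
// (which occurs near exp(88)) once the warp is squared for the second
// moment; negExp = 5 is the constant I cannot explain.
struct EvsmWarp { float pos; float neg; };

EvsmWarp warpDepth(float depth, float posExp = 42.0f, float negExp = 5.0f)
{
    const float d = 2.0f * depth - 1.0f;    // remap depth to [-1, 1]
    EvsmWarp w;
    w.pos =  std::exp( posExp * d);         // positive warp
    w.neg = -std::exp(-negExp * d);         // negative warp
    return w;
}
```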

2. All the articles about shadow mapping say that you should use front-face culling when rendering to the shadow map (this should reduce incorrect self-shadowing and acne), but both you and Andrew Lauritzen use NoCull in your samples. Why?

3. In Andrew’s sample he “normalizes” the exponent factors with the cascade ranges, to make the transitions between cascades less noticeable. You don’t do that. I have ranges from several meters to tens of kilometers (flight simulator). Is there any point in doing that?

4. When blurring the EVSM, does it make sense to blur with a decreasing kernel size (for example, a 7×7 blur for cascade 0, 5×5 for cascade 1, 3×3 for cascade 2, and no blur for the last)? Is there any “state of the art” technique?
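What I mean concretely is something like this (a sketch; the per-cascade sizes are my own example numbers, not taken from any source):

```cpp
// Sketch: shrink the blur kernel width for more distant cascades.
// A width of 1 means "no blur". These sizes are example values only.
int blurKernelSize(int cascadeIndex)
{
    const int sizes[] = { 7, 5, 3, 1 };
    const int last = sizeof(sizes) / sizeof(sizes[0]) - 1;
    return sizes[cascadeIndex < last ? cascadeIndex : last];
}
```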

5. The same question about the light-space orientation relative to camera space. I only have the sun direction (the z-axis), so I can rotate the light-space basis around it (the x and y axes). Is there a “right way” to do that: stick it to the camera’s “up” vector, or to the world up vector?
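For example, right now I build the basis like this (a C++ sketch; `up` is the reference vector I’m unsure about, either the camera’s up or the world up):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v)
{
    const float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Build an orthonormal light-space basis around the sun direction.
// 'up' is the reference up vector: either the world up or the camera up.
// (Degenerate when 'up' is parallel to 'sunDir' -- not handled here.)
void buildLightBasis(Vec3 sunDir, Vec3 up, Vec3& xAxis, Vec3& yAxis, Vec3& zAxis)
{
    zAxis = normalize(sunDir);
    xAxis = normalize(cross(up, zAxis));
    yAxis = cross(zAxis, xAxis);    // already unit length
}
```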

6. Is there any point in applying a shear transformation to light space? Could it help to better utilize shadow-map resolution? I didn’t see any articles on it. Right now my light space is a box.

7. Everybody uses square shadow maps (1024×1024, for example). Is it possible to find a better aspect ratio knowing the scene parameters?

Thank you for your answer! :-)

I updated the link to Wang et al.’s paper. Thank you for pointing that out!

As for Stephen Hill’s fitted SG irradiance approximation, the link that I currently have in there just points to his home page, which is still the same. He hasn’t formally published his approximation anywhere, so I don’t have anything else to link to at the moment.

1. Can you update the links to the Wang paper, since John Snyder’s web site has been reworked?

2. Can you post a link to SGIrradianceFitted, since selfshadow has also been updated?

Thank you so much for providing this extensive blog and the source code.

I am running into problems bringing an external .fbx file into the project. I’ve tried modifying the model filenames, as well as creating an entirely new FBX entry in BakingLab.cpp, but it gives me errors: “block offset is out of range” or “DirectX Error: The parameter is incorrect”. It would be amazing if you could let me know how I should proceed with bringing in external FBX files.

Thanks again.

I have one question about the relative luminance calculation. The equation used is well described in the BT.709 standard, but I think it works with radiometric units. Since luminance is a photometric unit used for light, shouldn’t relative luminance use a different equation?

For example, given a light with a color temperature and radiant power defined, we can construct spectral data and a luminous efficiency from the color temperature. After weighting by the photometric curve, RGB values in photometric units are used for lighting, and luminance is stored in the backbuffer. If we then apply the equation from BT.709, isn’t the photometric curve applied twice, since the curve was already applied when converting the light units from radiometric to photometric?

I think the correct way is

luminance = r * integral_of_srgb_r_response_curve + g * integral_of_srgb_g_response_curve + b * integral_of_srgb_b_response_curve

Please point it out if I made any mistake.
