*Update: n00body posted this link in the comments, which is way more in-depth than my post. Check it out!*

If you’ve ever implemented a deferred renderer, you know that one of the important points is keeping your G-Buffer small enough to be reasonable in terms of bandwidth and number of render targets. Thanks to that constant struggle between good and evil, people have come up with some reasonably clever approaches to packing the necessary attributes into a G-Buffer. One of the more popular approaches is the whole “store depth and reconstruct position” thing, and another is packing normals so that you only need 2 components instead of 3.

One of the simplest and most common approaches is to store only the X and Y components of your view-space normals, and then assume Z is positive (or negative, depending on whether you’re using right-handed or left-handed coordinates). As far as I know, this was first proposed here by Guerrilla Games. However, there’s a problem with this approach: you can’t always assume the sign of your Z component when you’re using a perspective projection! This might seem weird at first (heck, it took a while for someone to demonstrate to me why this is the case), but I assure you it’s true. Insomniac has some good pictures here demonstrating the errors that occur. So if we want to use this technique and avoid errors, we have to pack the sign of Z somewhere in our two values. This is a little nasty, and takes away a bit of precision from one of the other values.

An alternative approach suggested to me a long time ago is to store the normal as a spherical coordinate. Since a normal is always a unit vector with length 1, you can safely assume that Rho = 1 and just store Theta and Phi. Piece of cake! All you have to do is implement the equations on the wiki page, drop the Rho terms, and you’ve got a two-component normal with excellent precision.

But wait, there’s more! It turns out that with some trig-fu you can further optimize the conversions when Rho is equal to 1. I was never actually good at simplifying equations with trig functions (I can do everything else, promise!), so I defer to the noble Pat Wilson, who gave a quick rundown over in this thread. Make sure you check out his set of screenshots demonstrating the errors that occur with different normal storage options, so you can pick the method that’s right for you.

Also, since this is Scintillating Snippets and it wouldn’t be much fun without a snippet, I’ll post the HLSL functions I use for encoding and decoding my normals. Just remember, all of the credit goes to Mr. Wilson. I just did the pilfering!

// Converts a normalized cartesian direction vector
// to spherical coordinates.
float2 CartesianToSpherical(float3 cartesian)
{
    float2 spherical;

    spherical.x = atan2(cartesian.y, cartesian.x) / 3.14159f;
    spherical.y = cartesian.z;

    return spherical * 0.5f + 0.5f;
}

// Converts a spherical coordinate to a normalized
// cartesian direction vector.
float3 SphericalToCartesian(float2 spherical)
{
    float2 sinCosTheta, sinCosPhi;

    spherical = spherical * 2.0f - 1.0f;
    sincos(spherical.x * 3.14159f, sinCosTheta.x, sinCosTheta.y);
    sinCosPhi = float2(sqrt(1.0f - spherical.y * spherical.y), spherical.y);

    return float3(sinCosTheta.y * sinCosPhi.x, sinCosTheta.x * sinCosPhi.x, sinCosPhi.y);
}

Also keep in mind that these functions normalize the values to the range [0,1], so that you can store them in a regular fixed-point texture. If you’re using a floating-point texture you can remove the division by PI if you wish (and the corresponding multiply by PI in the decode), as well as the “multiply by 0.5, add 0.5” (and the decode’s “multiply by 2, subtract 1”).

There will be a bug when you encode vectors like float3(0.0, 0.0, some_z), since atan2(0, 0) is undefined.

There is a workaround for this, but it also costs shader instructions.

🙂

You should check out the work this guy is doing:

http://aras-p.info/texts/CompactNormalStorage.html

He has come up with a variety of ways to store view-space normals, including a novel use of sphere mapping. For each technique, he has images of their error from the original, plus lists of compiled code and instruction counts.

Definitely check it out. 😉

Oh wow, this technique does something I’ve been spending even more instructions than this costs to produce. Hard edges (90 degrees or more) sort of soften up with this technique when used with linear filtering. Not to the point where it breaks the hard edge, but just enough that it’s easier on the eyes.