Picking up where I left off here…

As I mentioned, you can also reconstruct a world-space position using the frustum ray technique. First, you need your frustum corners rotated so that they match the current orientation of your camera. You can do this by transforming the frustum corners by a “camera world matrix”: a matrix representing the camera’s position and orientation in world space. If you don’t have this available, you can just invert your view matrix. I’ll demonstrate doing it right in the vertex shader for the sake of simplicity, but you’d probably want to do it ahead of time in your application code.

// Vertex shader for rendering a full-screen quad
void QuadVS( in float3 in_vPositionOS : POSITION,
             in float3 in_vTexCoordAndCornerIndex : TEXCOORD0,
             out float4 out_vPositionCS : POSITION,
             out float2 out_vTexCoord : TEXCOORD0,
             out float3 out_vFrustumCornerWS : TEXCOORD1 )
{
    // Offset the position by half a pixel to correctly
    // align texels to pixels. Only necessary for D3D9 or XNA.
    out_vPositionCS.x = in_vPositionOS.x - (1.0f / g_vOcclusionTextureSize.x);
    out_vPositionCS.y = in_vPositionOS.y + (1.0f / g_vOcclusionTextureSize.y);
    out_vPositionCS.z = in_vPositionOS.z;
    out_vPositionCS.w = 1.0f;

    // Pass along the texture coordinate and the position
    // of the frustum corner in world-space. This frustum corner
    // position is interpolated so that the pixel shader always
    // has a ray from the camera to the far clip plane.
    out_vTexCoord = in_vTexCoordAndCornerIndex.xy;
    float3 vFrustumCornerVS = g_vFrustumCornersVS[in_vTexCoordAndCornerIndex.z];

    // Cast to float3x3 so we apply only the rotation, not the translation
    out_vFrustumCornerWS = mul(vFrustumCornerVS, (float3x3)g_matCameraWorld);
}

So what we’ve done here is we’ve *rotated* (not translated, since vFrustumCornerVS is only a float3) the view-space frustum corner so that it now matches the camera’s orientation. However, it’s still centered at <0,0,0> rather than at the camera’s world-space position, so when we reconstruct position we’ll also add the camera’s world-space position:

// Pixel shader function for reconstructing world-space position
float3 WSPositionFromDepth(float2 vTexCoord, float3 vFrustumRayWS)
{
    float fPixelDepth = tex2D(DepthSampler, vTexCoord).r;
    return g_vCameraPosWS + fPixelDepth * vFrustumRayWS;
}

And there it is. Easy peasy, lemon squeezy.
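If you want to convince yourself the math works, here’s a CPU-side sketch of that reconstruction in Python. All of the camera values are made up for the example; the point is that rotating a far-plane frustum corner into world space, scaling it by a normalized linear depth, and adding the camera position lands exactly on the original surface point.

```python
import math

# Hypothetical camera: positioned at (10, 2, 5), yawed 90 degrees.
# Rotates a view-space vector into world space (yaw about +Y).
def rotate_yaw(v, angle):
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = v
    return (c * x + s * z, y, -s * x + c * z)

cam_pos = (10.0, 2.0, 5.0)
yaw = math.pi / 2.0
far_clip = 100.0

# View-space frustum corner on the far plane (right-handed, -Z forward)
corner_vs = (57.7, 43.3, -far_clip)

# A surface point 30% of the way along the ray: its normalized depth
# (view-space distance / far clip) is 0.3
depth = 0.3
point_vs = tuple(depth * c for c in corner_vs)

# Ground truth: transform the view-space point fully into world space
truth_ws = tuple(p + r for p, r in zip(cam_pos, rotate_yaw(point_vs, yaw)))

# Reconstruction: rotate the corner only (the "camera world matrix" step),
# then scale by the sampled depth and add the camera position
ray_ws = rotate_yaw(corner_vs, yaw)
recon_ws = tuple(p + depth * r for p, r in zip(cam_pos, ray_ws))

print(all(abs(a - b) < 1e-9 for a, b in zip(truth_ws, recon_ws)))  # True
```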

The other bit I hinted at was using this same technique with arbitrary geometry, for example the bounding volumes for a local light source. For this we once again need a ray that points from the camera position through the pixel position to the far-clip plane. We can compute this in the pixel shader by using the view-space position of the pixel.

void VSBoundingVolume( in float3 in_vPositionOS : POSITION,
                       out float4 out_vPositionCS : POSITION,
                       out float3 out_vPositionVS : TEXCOORD0 )
{
    out_vPositionCS = mul(float4(in_vPositionOS, 1.0f), g_matWorldViewProj);

    // Pass along the view-space vertex position to the pixel shader
    out_vPositionVS = mul(float4(in_vPositionOS, 1.0f), g_matWorldView).xyz;
}

Then in our pixel shader, we calculate the ray and reconstruct position like this:

float3 VSPositionFromDepth(float2 vTexCoord, float3 vPositionVS)
{
    // Calculate the frustum ray using the view-space position.
    // g_fFarClip is the distance to the camera's far clipping plane.
    // Negating the Z component is only necessary for right-handed coordinates.
    float3 vFrustumRayVS = vPositionVS.xyz * (g_fFarClip / -vPositionVS.z);
    return tex2D(DepthSampler, vTexCoord).x * vFrustumRayVS;
}
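Here’s a quick numeric sketch of that pixel-shader math in Python, with a made-up scene point: extending the interpolated view-space position of the bounding-volume surface out to the far plane, then scaling by the stored normalized depth, recovers the original view-space position of whatever the depth buffer saw along that ray.

```python
# Hypothetical scene point in view space (right-handed, -Z forward)
far_clip = 100.0
scene_vs = (3.0, -1.5, -40.0)

# Stored depth: view-space distance along -Z, normalized to [0, 1]
pixel_depth = -scene_vs[2] / far_clip  # 0.4

# The rasterized bounding-volume vertex lies somewhere on the same
# camera ray, e.g. 60% of the way to the scene point
bv_vs = tuple(0.6 * c for c in scene_vs)

# Pixel-shader math: extend the interpolated position to the far plane...
scale = far_clip / -bv_vs[2]
ray_vs = tuple(scale * c for c in bv_vs)

# ...then scale the ray by the sampled depth
recon_vs = tuple(pixel_depth * c for c in ray_vs)

print(recon_vs)  # matches scene_vs
```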

So there you go, I did your homework for you. Now stop beating me up in the schoolyard!

EDIT: Fixed the code and explanation so that it actually works now! Big thanks to Bill and Josh for pointing out the mistake.

UPDATE: More position from depth goodness here

I don’t think your solution for arbitrary geometry works. You alluded to the problems yourself in your previous post.

Interpolating (xyz / z) per-vertex doesn’t work, as it is not a linear operation. You have to do the division in the pixel shader for this to work.

Phil was correct. I tried doing the calculation in the vertex shader for my code, and it introduced very nasty visual artifacts. When I moved the calculation to the fragment shader, it produced the correct results.

Yup, you guys are right. For a while I was trying to figure out why I wasn’t getting artifacts…and then I realized that in code I was calculating the ray in the pixel shader too. Whoops. :-D

Thanks everyone for pointing it out, much appreciated.
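For anyone curious why the per-vertex version fails, here’s a simplified Python sketch. Plain linear interpolation stands in for the rasterizer here (a simplification, since hardware attribute interpolation is perspective-correct), but it still illustrates the core issue: scaling by far/-z and interpolating don’t commute, so interpolating pre-scaled rays gives a different answer than interpolating positions and dividing per-pixel.

```python
# Two hypothetical view-space vertices on one edge of a triangle
far_clip = 100.0
v0 = (0.0, 0.0, -10.0)
v1 = (8.0, 0.0, -80.0)

def lerp(a, b, t):
    return tuple(x + t * (y - x) for x, y in zip(a, b))

def to_far_ray(v):
    # The (xyz * far/-z) scaling from VSPositionFromDepth
    s = far_clip / -v[2]
    return tuple(s * c for c in v)

t = 0.5

# Wrong order: compute the ray per-vertex, then interpolate it
ray_pervertex = lerp(to_far_ray(v0), to_far_ray(v1), t)

# Right order: interpolate the view-space position, divide per-pixel
ray_perpixel = to_far_ray(lerp(v0, v1, t))

print(ray_pervertex, ray_perpixel)  # the x components disagree
```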

A heads up to those using the first technique: it expects coordinates in the 0 to 1 range, which should have been apparent from the texCoord parameter.

My own world space arbitrary implementation:

float3 GetFrustumRay(in float2 screenPosition)
{
    float2 sp = sign(screenPosition);
    return float3(Camera.FrustumRay.x * sp.x, Camera.FrustumRay.y * sp.y, Camera.MaxDepth);
}

The Camera.FrustumRay is calculated in the application using the following:

Vector2 frustumRay = new Vector2();
frustumRay.Y = (float)Math.Tan(Math.PI / 3.0 / 2.0) * camera.Viewport.MaxDepth;
frustumRay.X = -(frustumRay.Y * camera.Viewport.AspectRatio);

Forgot to mention in that last one: as with the first technique, you multiply by the view-space depth (negated if you’re not using floating-point buffers), then add the camera position.
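As a sanity check on that app-side math (using the commenter’s 60-degree vertical FOV, plus a hypothetical 16:9 aspect ratio and 500-unit far plane), a point at those frustum-ray extents on the far plane should project exactly to the edge of clip space:

```python
import math

# 60-degree vertical FOV (pi / 3), as in the C# snippet above
fov_y = math.pi / 3.0
aspect = 16.0 / 9.0
far_clip = 500.0

# Frustum-corner extents at the far plane, computed the same way
ray_y = math.tan(fov_y / 2.0) * far_clip
ray_x = ray_y * aspect

# Standard right-handed perspective projection scales:
# B = 1 / tan(fovY / 2), A = B / aspect, and Pw = -Vz
B = 1.0 / math.tan(fov_y / 2.0)
A = B / aspect

# Project the far-plane corner; it should land at NDC (1, 1)
ndc_x = (ray_x * A) / far_clip
ndc_y = (ray_y * B) / far_clip

print(ndc_x, ndc_y)  # both 1.0 (up to rounding)
```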

Hi!

I have a problem getting the last technique to work. VSPositionFromDepth gets the position in view space, right? So in order to obtain the reconstructed world-space position, I multiply the resulting value by an inverse view matrix, like this:

//View space position
float3 wsPos = VSPositionFromDepth(tex, input.vsPos);

//Transform to world space
//wsPos = mul(wsPos, InvertView);

Where the texcoords are:

input.ssPos.xy /= input.ssPos.w;

//Transforming from [-1,1]->[1,-1] to [0,1]->[1,0]
float2 tex = (0.5f * (float2(input.ssPos.x, -input.ssPos.y) + 1)) - halfPixel;

Where ssPos equals csPos in the example. Well… the problem is that it doesn’t work; the WS positions are incorrect. Any ideas of what I’m doing wrong?

You need to do this:

wsPos = mul(float4(wsPos, 1.0f), InvertView);

If you don’t convert to a float4 and set w = 1.0, then your view-space position won’t get transformed by the translation part of your inverse view matrix (in other words, it will only get rotated).
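A tiny Python sketch of why the w component matters, using a hypothetical inverse-view matrix in HLSL’s row-vector convention (translation in the last row): with w = 1 the translation row contributes; with w = 0 only the rotation part applies.

```python
# Hypothetical inverse-view matrix: identity rotation, camera at (5, 0, -2).
# Row-vector convention, as in HLSL's mul(v, M): translation in the last row.
inv_view = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [5.0, 0.0, -2.0, 1.0],
]

def transform(v4, m):
    # Row vector times 4x4 matrix
    return tuple(sum(v4[r] * m[r][c] for r in range(4)) for c in range(4))

vs_pos = (1.0, 2.0, 3.0)

with_w1 = transform(vs_pos + (1.0,), inv_view)  # rotated AND translated
with_w0 = transform(vs_pos + (0.0,), inv_view)  # rotated only

print(with_w1[:3], with_w0[:3])  # (6, 2, 1) vs (1, 2, 3)
```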

Hi,

I have a clever solution for you. It allows reconstruction of the view-space position with only two muls, and no computation at all in the application.

First, you need to output your image’s depth in view space. You can choose an R32F, a G16R16F, or anything you want.

Second, when you need to retrieve the pixel position in view space (like in a full-screen post-process), draw a quad with the vertices (-1,-1), (-1,1), (1,1), (1,-1). If you need it on real geometry, just send the xyw of the projected position and divide xy by w in the pixel shader.

Let’s go with some math now:

1. We have the well-known projection matrix, with lots of zeros and a few interesting values:

A 0 0  0
0 B 0  0
0 0 C -1
0 0 D  0

Let’s write out the process of transforming a view-space position to projection space (with Vw == 1):

Px = Vx * A + Vy * 0 + Vz * 0 + 1 * 0
Py = Vx * 0 + Vy * B + Vz * 0 + 1 * 0
Pw = Vx * 0 + Vy * 0 - Vz * 1 + 1 * 0

Now let’s reconstruct Vx. What we know in the fragment program is the interpolated pixel position in projected space (Px/Pw). Let’s call it Ix, for “interpolated x”.

so:

Px/Pw = Ix

Vx * A / -Vz = Ix

Vx = Ix * Vz * ( -1 / A)

If we do the same for Vy, we get:

Vy = Iy * Vz * ( -1 / B )

Let’s write this in HLSL form:

// Vertex part
out.position = in.position;
out.projpos.xy = in.position.xy;
out.magiccoef = -1.0f / float2(gProj[0][0], gProj[1][1]);

// Pixel part
float3 viewposition;
viewposition.z = tex2D( /***/ ).x;
viewposition.xy = viewposition.zz * in.magiccoef.xy * in.projpos.xy;

You now have your view-space position. You can transform it to world space with the inverse view matrix, or keep it as-is and do your math in view space.

Voilà :)
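GALOP1N’s derivation checks out numerically. Here’s a Python sketch with made-up values for A, B, and a view-space point: projecting, dividing by w, and then applying the two “magic” coefficients recovers the original Vx and Vy from the stored Vz alone.

```python
# Hypothetical projection scales from the matrix above (A = xscale, B = yscale)
A, B = 1.2, 2.1

# Hypothetical view-space point in front of the camera (-Z forward)
Vx, Vy, Vz = 4.0, -2.5, -10.0

# Project: clip-space x/y and w (Pw = -Vz), then the interpolated
# screen position Ix, Iy = Px/Pw, Py/Pw
Px, Py, Pw = Vx * A, Vy * B, -Vz
Ix, Iy = Px / Pw, Py / Pw

# Reconstruct from (Ix, Iy) and the stored view-space depth Vz alone,
# using the two magic coefficients -1/A and -1/B
rx = Ix * Vz * (-1.0 / A)
ry = Iy * Vz * (-1.0 / B)

print(rx, ry)  # recovers Vx, Vy
```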

Interesting trick, GALOP1N.

On the whole, this actually seems like it would be cheaper than the prevalent method from Crytek. The “magiccoef” can be calculated outside the shaders, but even in-shader it is still significantly cheaper to compute than the frustum corners. Then the actual stored depth needn’t be normalized and negated, so that is another savings. Of course, anything relying on the depth being in the range [0,1] might be affected, but that will be situation-specific.

I’m gonna try using this from now on, and see how well it holds up. Thanks. :)

Some more questions lol:

- Is g_matCameraWorld the inverse of the view matrix, or the inverse of the world-view matrix?

- Is g_vCameraPosWS the camera position multiplied by the world matrix, or simply the camera position vector?

- How do I calculate vFrustumRayWS?

float3 vFrustumRayWS = vPositionWS.xyz * (g_fFarClip/-vPositionWS.z);

