Motion Blur Sample

Motion blur is a visual effect that is becoming increasingly common in modern video games. The effect simulates the blurring that occurs when a standard camera photographs objects that are moving relative to the camera: since the camera's shutter is open for a short period of time, moving objects smear across the image. The end result is a more “cinematic” and smooth look to the graphics. This sample demonstrates two techniques (plus a variant of the second) that apply the effect as a full-screen post-process, making them simple to integrate into existing rendering implementations.  In particular, they are well-suited to integration with deferred renderers, since they can make use of G-Buffer attributes.

Background

Normally when we render 3D graphics, we render all geometry according to its state at an exact instant in time rather than where it is over an elapsed period of time.  In other words, our rendered output is the result of discretely sampling a function of t, where t is the elapsed time for our simulation.  This discrete sampling causes aliasing, much the same way that rendering using discrete pixels causes aliasing.  When we render with pixels, each pixel has width and height, but in our pixel shader we determine the color by assuming that the pixel is an infinitely small sample along a triangle.  Thus we end up with the jagged staircase look at triangle edges, because adjacent pixel colors will have high contrast.  This same effect occurs in the time domain, where it is referred to as temporal aliasing.

To reduce the negative effects of temporal aliasing, we can apply anti-aliasing techniques. The most effective method of anti-aliasing is super-sampling, which means taking samples at a rate higher than our normal sampling rate (in this case, our sampling rate is our framerate) and then applying a filter.  In other words, we would have to draw 2 or more sub-frames for each frame actually shown on the screen, and then blend those sub-frames together to create the final image.  As you can imagine, this is quite costly: using just 2 sub-frames would effectively cut our framerate in half. For this reason the technique is not popular, just as super-sampling is not popular for reducing pixel aliasing (multisampling is typically used instead).

Another approach is to sample at the normal rate, but filter the resulting frames: render as normal, then blend each frame with N previous frames.  Doing this is analogous to applying a full-screen blur to the frame in order to reduce pixel aliasing: it reduces aliasing since it smooths out high-frequency changes, but it also washes out details. Many older games used this technique selectively during the PS2/Xbox/GameCube generation to provide a trippy “motion trails” effect.  Another, more advanced approach is to “stretch out” the rendered geometry and make it transparent at the edges. This is demonstrated in Masaki Kawase’s rthdribl sample, and also in the MotionBlur10 sample from the DirectX SDK.  These approaches can look very realistic, but they are expensive because they are geometry-based. They also rely on blending (alpha-blending for the former, alpha-to-coverage for the latter), which can be problematic for implementations that use a deferred approach or a render-target format/encoding that doesn’t support blending.

The techniques demonstrated in the sample use a filtering approach, but try to do it “intelligently”. They attempt to determine the screen-space velocity at each pixel, and then blur the frame along that velocity. The result is that we don’t just blur everything and wash out details, but we also don’t render multiple sub-frames.  The results of course don’t look as good as super-sampling, since true anti-aliasing requires additional information that isn’t available during post-processing; however, in many cases the results can look quite good.
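As a rough illustration of the shared blur step (this is a sketch, not the sample’s exact shader code), the post-process can march along the per-pixel velocity and average the samples. The function name and the NumBlurSamples constant here are placeholders; where the velocity comes from is what distinguishes the techniques below.

```hlsl
static const int NumBlurSamples = 8;

// Average several samples taken along the pixel's screen-space velocity.
// 'velocity' is expressed in texture-coordinate units.
float4 BlurAlongVelocity(sampler2D sceneTexture, float2 texCoord, float2 velocity)
{
    float4 sum = 0.0f;
    for (int i = 0; i < NumBlurSamples; i++)
    {
        // Step backwards along the velocity vector, one sample at a time
        float2 offset = velocity * (i / (float)(NumBlurSamples - 1));
        sum += tex2D(sceneTexture, texCoord - offset);
    }
    return sum / NumBlurSamples;
}
```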

Technique #1: Depth Buffer Velocity Calculation

This technique makes use of something that has become very popular for deferred rendering: reconstructing the world-space or view-space position of a pixel using a depth buffer. Since we implicitly know the 2D position of a texel in a depth buffer, we can just sample the Z value from that buffer to get a 3D position.  This sample makes use of the technique demonstrated here, where an interpolated ray to the frustum corner is used to reconstruct view-space position from a linear depth buffer.  The inverse of the view matrix is then used to calculate the world-space position of the pixel.  Of course, you can use another reconstruction method if you prefer; it doesn’t really matter as long as you end up with a position somehow!
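A minimal sketch of this kind of reconstruction is shown below, assuming a linear depth buffer and an interpolated frustum-corner ray passed down from the full-screen quad’s vertex shader. The names (frustumRay, invView, and so on) are illustrative rather than the sample’s actual ones.

```hlsl
// Reconstruct the world-space position of a pixel from a linear depth buffer.
float3 ReconstructWorldPosition(float2 texCoord, float3 frustumRay,
                                sampler2D depthTexture, float4x4 invView)
{
    // Linear depth, stored as view-space z divided by the far-clip distance
    float linearDepth = tex2D(depthTexture, texCoord).r;

    // Scale the frustum-corner ray by the depth to get the view-space position
    float3 positionVS = frustumRay * linearDepth;

    // Transform back to world space with the inverse of the view matrix
    return mul(float4(positionVS, 1.0f), invView).xyz;
}
```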

Once we have a world-space position for the pixel, we apply the view * projection matrix from the previous frame in order to determine where that pixel was in screen space during the last frame. By comparing that position to the pixel’s current screen-space position, we can compute a velocity vector to blur along (a shader sketch of this reprojection follows the list below).  This method of determining velocity has two primary drawbacks:

  1. Non-zero velocity occurs only due to camera movement, not due to independent movement of an object.  For this reason the technique is often referred to as “camera motion blur”.
  2. It assumes that the world-space position of a pixel remains constant, which of course isn’t always true! This creates problems when you have an object that’s moving, but remains in the same position on-screen (imagine a camera following a car in a racing game, for instance). To get around this you have to store a mask somewhere that marks off pixels that shouldn’t be blurred.
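Here is the reprojection sketch referenced above, assuming the linear-depth reconstruction from earlier and a stored view * projection matrix from the previous frame. As with the other snippets, the names are illustrative, not the sample’s exact code.

```hlsl
// Compute a per-pixel screen-space velocity due to camera movement only,
// by reprojecting the reconstructed world-space position with last frame's
// view * projection matrix.
float2 CalcCameraVelocity(float2 texCoord, float3 positionWS,
                          float4x4 prevViewProjection)
{
    // Current position in clip space, derived from the full-screen quad texcoord
    float2 currentPos = texCoord * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f);

    // Reproject the world-space position with last frame's view * projection
    float4 prevPos = mul(float4(positionWS, 1.0f), prevViewProjection);
    prevPos.xy /= prevPos.w;

    // The clip-space delta is the pixel's velocity; scale by (0.5, -0.5) to
    // convert it into texture-coordinate space for the blur pass
    return (currentPos - prevPos.xy) * float2(0.5f, -0.5f);
}
```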

AFAIK, this technique was first published in GPU Gems 3 as “Motion Blur As a Post-Processing Effect”. You can view the article here.  It was also used in games like Halo 3, Gears of War, and Crysis (Crysis actually has a more advanced motion blur technique that also blurs moving objects, but it’s only active if you have shaders set to “Very High”).  As I said previously, it fits extremely well into a deferred rendering setup, since you already need access to a depth buffer for position reconstruction.

Technique #2: Velocity Buffer

This technique also applies motion blur as a post-process, but avoids the major drawback of the previous approach (independently moving objects aren’t blurred) by explicitly rendering velocity information to a render target. We calculate velocity by transforming each vertex (in the vertex shader) by the world * view * projection matrix from the previous frame, passing the resulting screen-space position to the pixel shader, and then comparing that position with the current screen-space position to calculate the pixel velocity.  This velocity buffer is then sampled in our post-processing pass to determine how much we should blur, and in which direction.  Since the velocity we render doesn’t depend at all on lighting, it can be done as part of the G-Buffer pass for a deferred renderer. This makes it somewhat simple and natural to implement for many renderers.
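Below is a rough sketch of what the velocity-buffer pass can look like, not taken verbatim from the sample. The matrix names (WorldViewProjection, PrevWorldViewProjection) are illustrative, and the velocity is assumed to be written to a floating-point render target.

```hlsl
float4x4 WorldViewProjection;
float4x4 PrevWorldViewProjection;

struct VSOutput
{
    float4 Position   : POSITION0;
    float4 CurrentPos : TEXCOORD0;
    float4 PrevPos    : TEXCOORD1;
};

// Transform each vertex by both the current and previous frame's matrices
VSOutput VelocityVS(float4 position : POSITION0)
{
    VSOutput output;
    output.Position   = mul(position, WorldViewProjection);
    output.CurrentPos = output.Position;
    output.PrevPos    = mul(position, PrevWorldViewProjection);
    return output;
}

// Compute the per-pixel velocity from the two interpolated positions
float4 VelocityPS(float4 currentPos : TEXCOORD0,
                  float4 prevPos    : TEXCOORD1) : COLOR0
{
    // Perspective divide to get both positions into clip space
    float2 current = currentPos.xy / currentPos.w;
    float2 prev    = prevPos.xy / prevPos.w;

    // Clip-space delta converted to texture-coordinate space; a floating-point
    // render target preserves negative values
    float2 velocity = (current - prev) * float2(0.5f, -0.5f);
    return float4(velocity, 0.0f, 1.0f);
}
```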

The drawbacks of this approach are as follows:

  1. You have to render velocity for all of your geometry (and the skybox), which increases rendering costs and makes shaders more complex.  The added cost also depends on how much geometry you render, which makes it less predictable than the cost of the first technique.

  2. You will still get artifacts for objects that don’t move relative to the camera when a moving object is behind them.

  3. Proper motion blur results in a moving object being blurred into the area it is moving into, as well as the area it is moving from.  Since we blur in the direction of movement, we get the former effect but not the latter.  Thus the result can look strange at the silhouettes of objects, although it may not be noticeable depending on how fast the geometry is moving.

This technique was first published in the DirectX SDK as the PixelMotionBlur sample.  You can see it in games such as Killzone 2, Uncharted 2, and Lost Planet/Resident Evil 5 (these Capcom games also extend geometry in the direction of motion to improve the effect).

Technique #3: Dual Velocity Buffers

This technique takes the same basic approach as the previous technique, except that it also attempts to address the third drawback by using the velocity buffer from the previous frame. By sampling the previous velocity buffer, we can attempt to determine where a moving object used to be so that we can blur there as well.  This adds more overhead to the post-processing pass due to the need to sample an additional texture.
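One plausible way to combine the two buffers (not necessarily exactly what the sample does) is to sample both at the current pixel and blur along whichever velocity is larger, so that pixels a moving object has just left still get blurred. The sampler names here are illustrative.

```hlsl
sampler2D VelocityTexture;
sampler2D PrevVelocityTexture;

// Choose a blur velocity using both the current and previous frame's buffers
float2 GetBlurVelocity(float2 texCoord)
{
    float2 currentVel = tex2D(VelocityTexture, texCoord).xy;
    float2 prevVel    = tex2D(PrevVelocityTexture, texCoord).xy;

    // Use whichever velocity has the larger magnitude, so that pixels a moving
    // object has just vacated still receive blur from last frame's motion
    return dot(currentVel, currentVel) > dot(prevVel, prevVel) ? currentVel : prevVel;
}
```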

Download the sample here.

A pre-compiled binary version is available here.


9 thoughts on “Motion Blur Sample”

  1. Hi!

    I’m a beginner in this domain… so don’t mind the noob question. ^^

    I can’t open the project in Visual Studio… I googled it and a lot of people have the same problem as me, but no definitive solution…

    Can you make an executable of the demo, please?

  2. As we are storing velocity information in a texture, and a texture can only accept color values in the range [0, 1], how should I determine the direction of velocity?

  3. It’s been a long time since I made that sample, but I’m pretty sure that I used a floating point format for storing velocity. Floating point formats can store values beyond [0, 1], including negative values.

  4. Or you could go for saving the velocity in a simple R16G16 render target using velocity * 0.5 + 0.5. To use that properly you can simply scale and bias with velocity * 2 - 1 to get it back into range 🙂 so you save some performance.

  5. Thanks for the replies. I stored the absolute values in the R & G channels and the direction values in the B & A channels. One thing I’m noticing is that the velocity textures generated for DirectX and OpenGL are a bit different; in general I need higher blur constant values for DirectX than for OpenGL. Is this expected?

  6. Actually, never mind; it looks like I needed XNA version 3 instead of 4. You can delete these two comments if you want. I don’t think I can do it myself or else I would.

  7. When I use the Depth Buffer Velocity Calculation, the blur only appears when the camera is rotating, but on translation nothing happens?!
    What could be the cause of this behavior?
