Recently I’ve been experimenting with AA techniques, and one of the avenues I was pursuing required me to read back subsamples and use them to compute coverage. However, I quickly ran into the problem that I didn’t know the sample position for a given subsample index. With D3D_FEATURE_LEVEL_10_1 and D3D_FEATURE_LEVEL_11_0 there are standard MSAA patterns you can use, but unfortunately I’m still stuck on a 10-level GPU, so that wasn’t an option. After some thinking I realized that it wouldn’t be too hard to write an app that could determine the sample positions to a reasonable degree of accuracy, so that’s what I did! The basic process is like this:
- Create a 1×1 render target with the desired MSAA sample count + quality
- Render a grid of sub-pixel quads onto the render target (I used 25×25). The output of each quad is just the position of the quad in pixel space.
- Render N pixel-sized quads onto an N×1 non-MSAA render target, where N is the number of subsamples. For each quad, a pixel shader samples one of the subsamples from the MSAA render target and outputs the value.
- Copy the N×1 render target to a staging texture, and then map it to retrieve the contents on the CPU. Each texel is the XY position of one subsample.
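The two pixel shaders involved can be sketched in HLSL roughly like this (the shader names, register bindings, and input semantics are my own for illustration, not the exact code from the sample):

```hlsl
// Pass 1: drawn once per tiny quad in the sub-pixel grid (e.g. 25x25 quads
// covering the single pixel of the 1x1 MSAA render target). Each quad's
// pixel shader just writes out that quad's pixel-space position, so whichever
// subsample(s) a quad covers end up storing the quad's position.
float4 GridQuadPS(float4 svPos : SV_Position,
                  float2 quadPosInPixel : QUADPOS) : SV_Target
{
    return float4(quadPosInPixel, 0.0f, 0.0f);
}

// Pass 2: one pixel-sized quad per subsample, drawn into column i of the
// Nx1 non-MSAA target. Texture2DMS.Load takes an explicit sample index,
// which is what lets us pull out each subsample individually.
Texture2DMS<float4> MSAATarget : register(t0);

cbuffer ResolveConstants : register(b0)
{
    uint SubSampleIndex;    // which subsample this quad reads back
};

float4 ResolvePS(float4 svPos : SV_Position) : SV_Target
{
    // The MSAA target is 1x1, so the only pixel to load is (0, 0)
    return MSAATarget.Load(int2(0, 0), SubSampleIndex);
}
```

After that it's just the usual staging-texture dance on the CPU side: CopyResource from the N×1 target into a D3D10_USAGE_STAGING texture, Map it for reading, and interpret each texel as a subsample's XY position.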
It was a piece of cake to implement, and it seems to actually work! So I added a super-sweet visualizer (as you can see in the picture above), and packaged it up so that you guys can enjoy it as well. Download it here: https://mynameismjp.files.wordpress.com/2014/06/samplepattern.zip (includes source and binaries, requires a DX10-capable GPU).
Update 9/13/2015: I updated this project while working on a blog post about programmable sample points, and uploaded it to GitHub. The new version has more accurate sample point detection, and displays the positions using the conventions established by the D3D Sample Coordinate System. It can also optionally link to NVAPI, in which case it can show a demonstration of how to set up programmable sample points that vary across a 2×2 quad of pixels.