Do you remember MLAA (Morphological Anti-Aliasing)? In short, MLAA is a shape-based anti-aliasing method that uses post-processing filters to reduce aliasing. Like SSAO, MLAA can be hand-coded in a 3D app or added in graphics drivers.
At the upcoming I3D 2011, NVIDIA will present Subpixel Reconstruction Anti-Aliasing, or SRAA, a potential competitor to AMD’s MLAA:
Subpixel Reconstruction Antialiasing (SRAA) combines single-pixel (1x) shading with subpixel visibility to create antialiased images without increasing the shading cost. SRAA targets deferred-shading renderers, which cannot use multisample antialiasing. SRAA operates as a post-process on a rendered image with superresolution depth and normal buffers, so it can be incorporated into an existing renderer without modifying the shaders. In this way SRAA resembles Morphological Antialiasing (MLAA), but the new algorithm can better respect geometric boundaries and has fixed runtime independent of scene and image complexity. SRAA benefits shading-bound applications. For example, our implementation evaluates SRAA in 1.8 ms (1280×720) to yield antialiasing quality comparable to 4-16x shading. Thus SRAA would produce a net speedup over supersampling for applications that spend 1 ms or more on shading; for comparison, most modern games spend 5-10 ms shading. We also describe simplifications that increase performance by reducing quality.
Official page @ NVIDIA: Subpixel Reconstruction Antialiasing.
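To make the abstract a bit more concrete, here is a minimal CPU-side sketch in C of the reconstruction idea as I read it: shade once per pixel, keep several depth sub-samples per pixel, and let each sub-sample borrow the color of the most geometrically similar shaded neighbor. All names, the 3x3 neighborhood and the depth-only similarity metric are my own assumptions for illustration; NVIDIA’s actual algorithm also uses normals and more sophisticated reconstruction.

/* Hypothetical sketch of the SRAA idea: 1x shaded color plus 4 depth
 * sub-samples per pixel. Buffer names and the similarity metric are
 * assumptions, not NVIDIA's implementation. Build: cc sraa.c -lm */
#include <math.h>
#include <stdio.h>

#define W 4
#define H 4
#define SUB 4   /* depth sub-samples per pixel */

static float color[H][W];        /* 1x shaded color (grayscale for brevity) */
static float depth[H][W][SUB];   /* superresolution depth buffer */

/* Geometric similarity: here just inverse depth difference.
 * A real implementation would compare normals as well. */
static float similarity(float d0, float d1) {
    return 1.0f / (1e-4f + fabsf(d0 - d1));
}

/* Reconstruct one antialiased pixel: each depth sub-sample picks the color
 * of the 3x3 neighbor whose depth best matches it, then sub-samples average. */
static float sraa_pixel(int x, int y) {
    float sum = 0.0f;
    for (int s = 0; s < SUB; ++s) {
        float ds = depth[y][x][s];
        float best_w = -1.0f, best_c = color[y][x];
        for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
            /* use the neighbor's first sub-sample as a stand-in for the
               depth of its one shaded sample */
            float w = similarity(ds, depth[ny][nx][0]);
            if (w > best_w) { best_w = w; best_c = color[ny][nx]; }
        }
        sum += best_c;
    }
    return sum / SUB;
}

int main(void) {
    /* synthetic scene: near/bright on the left, far/dark on the right;
       the geometric edge cuts through the sub-samples of column x == 1 */
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            int far = x >= 2;
            color[y][x] = far ? 0.1f : 0.9f;
            for (int s = 0; s < SUB; ++s)
                depth[y][x][s] = (x == 1 && s >= 2) ? 0.8f : (far ? 0.8f : 0.2f);
        }
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) printf("%5.2f ", sraa_pixel(x, y));
        printf("\n");
    }
    return 0;
}

The column straddling the edge comes out at the blended value 0.50 instead of a hard 0.9/0.1 step, which is the whole point: antialiased edges at 1x shading cost.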
Interesting… good to hear SRAA will do better on geometric boundaries; that was a major weak point of MLAA.
I like the sound of it having a fixed runtime.
That means it shouldn’t cause (as much) fluctuation in the FPS as the scene changes… and it should be roughly 5x faster as well.
Great news for NVIDIA users!
Thanks.
I wonder if AMD will add this too.
JeGX: Maybe it is time to add some sort of post-process OpenCL AA to your benchmarks?
@Leith: AMD already has its own version in MLAA… NVIDIA can’t use AMD’s IP.
@WacKEDmaN – MLAA is not AMD’s IP; their DirectCompute implementation is. The same applies to SRAA unless NVIDIA patents the whole idea.
MLAA is good with MSAA 2-4x + AAA on. I guess SRAA doesn’t need any other AA method to look good, regardless of game engine and scene. A more mature approach, but… 1.8 ms at 1280×720? That’s a little higher than the MLAA implementation from
http://www.iryoku.com/mlaa/
on a 9800GTX+, where it’s below 1 ms in games, not demos.
Maybe I’m not reading it right, but can SRAA be implemented as a forced driver feature for existing games, or does it require developer support in the games themselves?
From what I can understand, they supersample the depth buffer (sub-samples), so, for instance, 4 depth samples for a single pixel (not multisampled).
Then, during the blit stage, I guess they use these depth sub-samples to detect the edges. I would say they blur the colour buffer according to the depth sub-sample differences (if all 4 depth samples have the same depth => no blur).
This could be implemented at the driver level if that is the case…
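A minimal CPU sketch of the approach guessed above: flag a pixel as an edge when its depth sub-samples disagree, and blur the 1x colour buffer only there. The buffer names, the threshold and the box blur are all assumptions for illustration, not NVIDIA’s algorithm.

/* Depth-sub-sample edge detection + selective blur, as speculated above. */
#include <stdio.h>

#define W 6
#define SUB 4

static float color[W];        /* one scanline of 1x shaded color */
static float depth[W][SUB];   /* 4 depth sub-samples per pixel */

/* Edge test: do the depth sub-samples of this pixel disagree? */
static int is_depth_edge(int x, float threshold) {
    float dmin = depth[x][0], dmax = depth[x][0];
    for (int s = 1; s < SUB; ++s) {
        if (depth[x][s] < dmin) dmin = depth[x][s];
        if (depth[x][s] > dmax) dmax = depth[x][s];
    }
    return (dmax - dmin) > threshold;
}

/* 3-tap box blur along the scanline, clamped at the borders */
static float blur(int x) {
    float sum = 0.0f; int n = 0;
    for (int dx = -1; dx <= 1; ++dx) {
        int nx = x + dx;
        if (nx < 0 || nx >= W) continue;
        sum += color[nx]; ++n;
    }
    return sum / n;
}

int main(void) {
    /* synthetic edge: near/bright on the left, far/dark on the right;
       pixel 2 straddles the edge, so its depth sub-samples disagree */
    for (int x = 0; x < W; ++x) {
        int far = x >= 3;
        color[x] = far ? 0.1f : 0.9f;
        for (int s = 0; s < SUB; ++s)
            depth[x][s] = (x == 2 && s >= 2) ? 0.8f : (far ? 0.8f : 0.2f);
    }
    for (int x = 0; x < W; ++x)
        printf("%5.2f ", is_depth_edge(x, 0.1f) ? blur(x) : color[x]);
    printf("\n");
    return 0;
}

Only the pixel whose sub-samples disagree gets softened; flat regions pass through untouched, which is why the cost stays fixed.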
Actually, it needs the normal buffer as well, so I doubt they can make a general driver-based implementation. But a profile-based solution could be used (for deferred-shading games at least).
I’ll wait for NVIDIA to get around to releasing a driver set that has SRAA support before I pass judgement… but I honestly believe it’ll look about the same as MLAA and possibly require more GPU power.
Also, there are a few reviews out that make this point, but MLAA doesn’t require MSAA in order to look good, as Promilus claims.
@Lavans
“but MLAA doesn’t require MSAA in order to look good, as Promilus claims”
I never claimed IT REQUIRES. MLAA+MSAA (+adaptive AA) is much better in a number of titles where pure MLAA just isn’t as good (either the screen is too blurry or there are visual anomalies).
Some games don’t look any different using MLAA vs MLAA + MSAA. Also, running MLAA with MSAA isn’t going to make the image blur any less than it does already. Though in some games, MLAA is amazingly effective. In fact, some people compare MLAA to SSAA, which I think is silly given how much it blurs distant objects. I just hope AMD takes the time to further improve MLAA in the future.
AMD’s MLAA is just like an edge detection filter + bilateral filtering applied to MRTs. It’s nothing special in terms of quality, and it could probably be implemented by developers instead… as is done in Crysis or SC2.
Old supersampled AA is definitely the way to go.
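For what it’s worth, here is a crude sketch of the “edge detection + blend” characterization above, on the colour buffer alone. Real MLAA classifies edge shapes (L/Z/U patterns) and computes coverage-based blend weights; this just detects luminance discontinuities and averages across them, and the threshold and weights are assumptions for illustration. Build with: cc mlaa_sketch.c -lm

/* Luminance edge detection + naive blend, in the spirit of MLAA. */
#include <math.h>
#include <stdio.h>

#define W 8

static float lum[W];   /* luminance of one scanline of the colour buffer */

int main(void) {
    float out[W];
    const float threshold = 0.25f;  /* edge if neighbors differ by more */

    /* synthetic scanline with a hard step (an aliased edge) */
    for (int x = 0; x < W; ++x) lum[x] = (x < W / 2) ? 1.0f : 0.0f;
    for (int x = 0; x < W; ++x) out[x] = lum[x];

    /* detect a discontinuity with the right neighbor and pull both sides
       toward their midpoint, softening the step */
    for (int x = 0; x + 1 < W; ++x) {
        if (fabsf(lum[x] - lum[x + 1]) > threshold) {
            float mid = 0.5f * (lum[x] + lum[x + 1]);
            out[x]     = 0.5f * (lum[x] + mid);
            out[x + 1] = 0.5f * (lum[x + 1] + mid);
        }
    }
    for (int x = 0; x < W; ++x) printf("%4.2f ", out[x]);
    printf("\n");
    return 0;
}

The hard 1.0/0.0 step becomes 1.00 … 0.75 0.25 … 0.00, which is also why pure post-process AA tends to blur: it has no sub-pixel information to tell a real edge gradient from texture detail.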