AMD TressFX DX11 Code Sample Published

AMD TressFX SDK demo, DX11

AMD has published a Direct3D 11 code sample that shows how to use TressFX, AMD’s new technology for realistic real-time hair rendering.

Overview
This sample implements AMD’s TressFX hair rendering and simulation technology. The TressFX technology uses the GPU to simulate and render hair that looks and acts like real hair, better than anything previously seen in games.

AMD TressFX makes use of the processing power of high-performance GPUs to do realistic rendering and uses DirectCompute to physically simulate each individual strand of hair. Rendering is based on an I3D 2012 paper from AMD, “A Framework for Rendering Complex Scattering Effects on Hair” by Yang et al., with some modifications and optimizations to achieve a level of performance that allows TressFX to be used in games. The physics simulation is described in another AMD paper presented at VRIPHYS 2012: “Real-time Hair Simulation with Efficient Hair Style Preservation” by Han et al.

Rendering
The implementation of the rendering part of TressFX is based on three main features required to render good-looking hair:
– Antialiasing
– Self-Shadowing
– Transparency

When used with a realistic shading model, these features provide the realism needed for natural-looking hair. The sample uses the well-known Kajiya-Kay hair shading model, which relies on anisotropic lighting to produce the kind of highlights characteristic of hair. The hair is rendered as several thousand individual strands stored as thin chains of polygons.
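
For illustration, the Kajiya-Kay diffuse and specular terms can be sketched in C++-style vector math as below. The `float3` helpers, the parameter names, and the half-vector specular variant are assumptions made for this sketch, not code taken from the sample (which implements its shading in HLSL).

```cpp
#include <algorithm>
#include <cmath>

// Minimal 3-component vector for illustration only; the sample itself uses HLSL.
struct float3 { float x, y, z; };
static float  dot(float3 a, float3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float3 scale(float3 v, float s)  { return {v.x*s, v.y*s, v.z*s}; }
static float3 normalize(float3 v)       { return scale(v, 1.0f / std::sqrt(dot(v, v))); }

// Kajiya-Kay style anisotropic hair lighting: both diffuse and specular depend
// on the angle between the hair tangent T and the light / half vectors, rather
// than on a surface normal.
float3 KajiyaKayShade(float3 T, float3 L, float3 V, float3 hairColor,
                      float specExponent /* e.g. around 80, illustrative */)
{
    T = normalize(T); L = normalize(L); V = normalize(V);
    float3 H = normalize({L.x + V.x, L.y + V.y, L.z + V.z});

    float dotTL = dot(T, L);
    float dotTH = dot(T, H);

    // Sine of the angle between the tangent and the light / half vector.
    float sinTL = std::sqrt(std::max(0.0f, 1.0f - dotTL * dotTL));
    float sinTH = std::sqrt(std::max(0.0f, 1.0f - dotTH * dotTH));

    float diffuse  = sinTL;                          // anisotropic diffuse term
    float specular = std::pow(sinTH, specExponent);  // anisotropic highlight

    return {hairColor.x * diffuse + specular,
            hairColor.y * diffuse + specular,
            hairColor.z * diffuse + specular};
}
```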

Since the width of a hair is considerably smaller than one pixel, antialiasing is required to achieve good-looking results. In TressFX this is done by computing, for each pixel that is partially covered by a hair, the percentage of the pixel the hair covers.
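
A minimal sketch of that coverage idea, assuming the distance from the pixel center to the strand’s center line and the strand’s half-width (both in pixel units) are already available; the function name and parameters are illustrative, not the sample’s API.

```cpp
#include <algorithm>

// Approximate coverage of a pixel by a thin hair fiber. 'distToCenter' is the
// distance from the pixel center to the fiber's center line and 'fiberRadius'
// its half-width, both in pixel units. The exact formula used by the sample
// may differ; this only illustrates the principle.
float HairCoverage(float distToCenter, float fiberRadius, float pixelRadius = 0.5f)
{
    // Fully covered when the fiber passes through the pixel center, fading
    // linearly to zero as the fiber edge moves one pixel radius away.
    float coverage = 1.0f - (distToCenter - fiberRadius) / pixelRadius;
    return std::clamp(coverage, 0.0f, 1.0f);
}

// The resulting coverage is typically used as the fragment's alpha, so
// partially covered pixels blend smoothly with whatever lies behind them.
```
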
The second main feature, self-shadowing, is necessary to give the hair a realistic-looking texture. Without it, the hair tends to look artificial, more like plastic on a puppet than real hair. Shadowing is done using a simplified deep shadow map. A deep shadow map typically stores several layers of depth values per pixel; to improve performance and memory usage, the implementation instead interpolates over a range of depth values, avoiding the need to keep a list of depth values for each pixel.
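
A hedged sketch of that approximation: estimate how many fibers lie between the shaded point and the light from the depth difference against the hair shadow map, then attenuate the light accordingly. The names and the exact formula below are illustrative, not taken from the sample.

```cpp
#include <algorithm>
#include <cmath>

// Simplified deep-shadow-style attenuation for hair self-shadowing.
// 'fragmentDepth' is the light-space depth of the point being shaded and
// 'shadowMapDepth' the depth of the closest hair stored in the shadow map.
// 'fiberSpacing', 'fiberRadius', and 'fiberAlpha' are artist-tunable values
// and purely illustrative here.
float HairShadowAttenuation(float fragmentDepth, float shadowMapDepth,
                            float fiberSpacing, float fiberRadius,
                            float fiberAlpha /* opacity of a single fiber */)
{
    // Interpolate over the depth range instead of storing per-pixel depth
    // lists: the deeper we are behind the first hair, the more fibers we
    // assume are occluding the light.
    float depthRange = std::max(0.0f, fragmentDepth - shadowMapDepth);
    float numFibers  = depthRange / (fiberSpacing * fiberRadius);

    // Each occluding fiber lets (1 - alpha) of the light through.
    return std::pow(1.0f - fiberAlpha, numFibers);
}
```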

Transparency gives the hair a softer look, closer to real hair. Without it, the strands would look too coarse, especially at the edges. In addition, real hair is actually translucent, so rendering with transparency is consistent with simulating the lighting properties of real hair. Unfortunately, transparent hair is difficult to render because thousands of hair strands need to be sorted. To handle this, TressFX uses order-independent transparency (OIT). The k-buffer required for OIT is implemented as a per-pixel linked list (PPLL), so that the front-most k hair pixels can be blended in the correct order. The PPLL is generated by writing into a DX11 UAV from the hair pixel shader. Once the PPLL is filled, the hair is rendered to the back buffer by drawing a full-screen quad.
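
The list bookkeeping itself is HLSL in the sample; as a CPU-side sketch of the same algorithm, the node layout and the per-pixel resolve step (sort the front-most k fragments, blend back to front) might look roughly like this. The structure fields, the fixed `K`, and the names are assumptions for illustration.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Minimal color type for illustration.
struct Color { float r, g, b, a; };

// One entry of the per-pixel linked list (PPLL). On the GPU these nodes live
// in a buffer written through a UAV from the hair pixel shader; 'next' points
// to the previously written fragment for the same pixel.
struct PPLLNode {
    Color    color;   // shaded hair color, alpha = pixel coverage
    float    depth;   // fragment depth
    uint32_t next;    // index of the next node, 0xFFFFFFFF = end of list
};

constexpr uint32_t kEndOfList = 0xFFFFFFFFu;
constexpr size_t   K          = 8;   // illustrative; the sample uses its own small k

// CPU-side illustration of the resolve pass that the full-screen quad performs
// per pixel: walk this pixel's list, sort the K nearest fragments, and blend
// back to front over the existing back-buffer color.
Color ResolvePixel(const std::vector<PPLLNode>& nodes, uint32_t head, Color dst)
{
    std::vector<PPLLNode> frags;
    for (uint32_t i = head; i != kEndOfList; i = nodes[i].next)
        frags.push_back(nodes[i]);

    // Only the K front-most fragments need to be in exact depth order; the
    // remaining, more distant fragments contribute less and stay approximate.
    std::partial_sort(frags.begin(),
                      frags.begin() + std::min(K, frags.size()),
                      frags.end(),
                      [](const PPLLNode& a, const PPLLNode& b) { return a.depth < b.depth; });

    for (auto it = frags.rbegin(); it != frags.rend(); ++it) {   // back to front
        const Color& src = it->color;
        dst.r = src.r * src.a + dst.r * (1.0f - src.a);          // standard "over" blend
        dst.g = src.g * src.a + dst.g * (1.0f - src.a);
        dst.b = src.b * src.a + dst.b * (1.0f - src.a);
        dst.a = src.a + dst.a * (1.0f - src.a);
    }
    return dst;
}
```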

Simulation
Physically accurate simulation of the hair is done entirely on the GPU using DirectCompute. The hair responds to movement, wind, and gravity using a Verlet integration method. To maintain shape and natural-looking behavior, the simulation uses both local and global shape constraints. Local shape constraints keep individual strands consistent under bending and twisting forces for various types of hair, such as straight or curly. Global shape constraints keep the hair in the hairstyle it was designed for, even after it has been temporarily disturbed. Additionally, the simulation uses length constraints to keep the hair from stretching under forces. A capsule method is used for collision between the hair and the head.
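
A rough CPU-side sketch of the Verlet step and a length constraint, assuming one particle per hair vertex; the actual sample runs the equivalent logic in DirectCompute shaders, and the shape constraints and capsule collision follow the same pattern of nudging vertex positions toward valid configurations. All names below are illustrative.

```cpp
#include <cmath>

// Illustrative state for one hair vertex; the sample keeps this data in GPU
// buffers and updates it in compute shaders.
struct Particle {
    float x, y, z;      // current position
    float px, py, pz;   // previous position (velocity is implicit for Verlet)
    float invMass;      // 0 for the pinned root vertex of a strand
};

// One Verlet integration step: the velocity is (position - previousPosition),
// optionally damped, and external acceleration (gravity, wind) is added.
void VerletStep(Particle& p, float ax, float ay, float az, float dt, float damping)
{
    if (p.invMass == 0.0f) return;                 // root vertex stays attached
    float vx = (p.x - p.px) * damping;
    float vy = (p.y - p.py) * damping;
    float vz = (p.z - p.pz) * damping;
    p.px = p.x; p.py = p.y; p.pz = p.z;
    p.x += vx + ax * dt * dt;
    p.y += vy + ay * dt * dt;
    p.z += vz + az * dt * dt;
}

// Length constraint between two neighboring vertices of a strand: move them
// toward or away from each other so the segment keeps its rest length and the
// hair does not stretch under forces.
void ApplyLengthConstraint(Particle& a, Particle& b, float restLength)
{
    float dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
    float dist = std::sqrt(dx*dx + dy*dy + dz*dz);
    float w = a.invMass + b.invMass;
    if (dist < 1e-6f || w == 0.0f) return;
    float corr = (dist - restLength) / (dist * w);
    a.x += dx * corr * a.invMass;  a.y += dy * corr * a.invMass;  a.z += dz * corr * a.invMass;
    b.x -= dx * corr * b.invMass;  b.y -= dy * corr * b.invMass;  b.z -= dz * corr * b.invMass;
}
```
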
The TressFX hair simulation has many parameters that allow the programmer and artist to modify the look and behavior of the hair. For example, the stiffness of the hair can be modified on the fly to make it look wet. The hair can also be authored with varying levels of stiffness and damping, likewise adjustable at runtime.

One known problem with the current simulation approach is that if the local and global shape constraints are tuned to look good with gravity in mind, the hair can behave strangely when the direction of gravity is reversed (for example, if a character using TressFX is hanging upside down). Should this case become necessary, the easiest solution is to create a special hair mesh with simulation parameters authored to look good while upside down.
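
As a purely illustrative summary of the kind of tunables described above (the sample defines its own parameter set; these names are hypothetical):

```cpp
// Hypothetical bundle of per-hair-section simulation parameters of the kind
// described in the text; the names are illustrative and do not match the
// sample's own structures.
struct HairSimulationParams {
    float localStiffness;    // strength of local shape constraints (bend/twist)
    float globalStiffness;   // pull back toward the authored hairstyle
    float damping;           // velocity damping per simulation step
    float wetness;           // e.g. raise stiffness/damping at runtime for wet hair
    float gravityScale;      // scale or redirect gravity for special cases
};
```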

You can download the TressFX v1.0 code sample HERE.




6 thoughts on “AMD TressFX DX11 Code Sample Published”

  1. mareknr

    It works on NV GPUs too. TressFX is implemented using the DirectCompute API. The question is how different SDKs can influence performance for NV cards when an AMD SDK is used for the implementation. Can somebody explain that?

  2. Stefan

    If you have compatibility issues, update your GeForce driver to a version whose changelog mentions a “Tomb Raider fix”.

    Don’t know about the SDKs, but performance depends on the number of strands.
    Don’t lower it too much though…
    http://goo.gl/QWyZO

  3. Squall Leonhart

    TressFX sucks ass, the demo published by nvidia years ago (which influenced the hair rendering in Alice:MR) owns the hell out of it.

Comments are closed.