NVIDIA has released its realistic human face animation demo shown at GDC 2013. FaceWorks is a tech-demo for the GeForce GTX Titan. I successfully ran it on a GTX 660 with framerates around 20 FPS at Full HD resolution (1920×1080).
Meet “Digital Ira”. Ira represents a big leap forward in capturing and rendering human facial expression in real time, and gives us a glimpse of the realism we can look forward to in our favorite game characters.
This demonstration highlights the state of the art in performance capture. All Ira’s motions were acted out in a “light stage” at the Institute for Creative Technologies at USC. The team there, headed by Dr. Paul Debevec, is able to photographically capture the facial geometry, surface detail, and lighting information of an actor without any of the traditional tricks of face markers or special makeup.
This light stage data is pulled into NVIDIA’s demo engine, and using the FaceWorks rendering technology we witness a realism of human facial rendering never before seen in real time. FaceWorks shading gives Ira lifelike skin, eyes, lips, and teeth. Adaptive tessellation keeps the curves of his face perfectly smooth.
Play with this interactive demo to see Ira immersed in three different lighting environments. Adjust his skin rendering to see the effect of subsurface light transmission through his skin. And see the realism of his facial motion as he stares you down with a myriad of lifelike expressions.
You can download the demo from THIS PAGE.
I posted some hi-res pictures HERE.
From the readme.txt:
ABOUT THE DATA CAPTURE
Dr. Paul Debevec is Associate Director of Graphics Research at the Institute for Creative Technologies at the University of Southern California. His work on high dynamic range lighting has had an enormous impact on the way we describe light in both offline and realtime rendering (e.g. “Rendering with Natural Light”).
He and his team at the ICT have been building and improving on “Light Stage” systems capable of capturing an actor in a controlled lighting environment. These have been used by companies like Sony Pictures, Weta Digital, and Digital Domain in their productions.
“Digital Ira” is their latest “Comprehensive Performance Capture Technology”, and represents their effort to make performance capture more practical and more elegant:
1. ICT captures up to 30 high-resolution “emotional” faces from their actor (“Ira”), including happy, sad, angry, blink, etc.
Misc Fact 1: The “Light Stage” cycles through “Structured Light” patterns, and pictures are taken by commercially available cameras to generate the mesh, bump, and color data.
Misc Fact 2: Each “face” includes the underlying geometry, the diffuse color, the specular color, a diffuse bump map describing the way large wrinkles roll across the skin, a specular bump map for the fine scales and creases of the skin, and a displacement map that can be used to add detail to the character’s surface.
Misc Fact 3: This data is captured at 0.1 mm accuracy.
2. The actor’s performances are captured and recorded to video. No makeup and no markers are necessary.
Misc Fact 4: The video is recorded by commercially available cameras in common 1080p capture mode.
3. ICT runs the performance video through a compression engine that can describe any face the actor makes as a blended mosaic of those “emotional” faces.
Misc Fact 5: They figure out how the head bone and jaw bone move for gross movement. The “emotional” faces are only necessary to drive facial expressions and not those skeletal movements.
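The “blended mosaic” in step 3 amounts to a linear combination of the captured expression targets. Here is a minimal sketch of that idea; the function name, the toy 1-D vertex data, and the per-frame weights (which the compression engine would produce) are all illustrative assumptions, not ICT’s actual code:

```python
# Sketch of blend-shape mixing: each captured "emotional" face is a list of
# vertex positions, and each video frame is described by one weight per target.
# All names and data below are hypothetical.

def blend_expression(neutral, targets, weights):
    """Blend a face as neutral + sum_i w_i * (target_i - neutral)."""
    blended = list(neutral)
    for target, w in zip(targets, weights):
        for i, (v, n) in enumerate(zip(target, neutral)):
            blended[i] += w * (v - n)
    return blended

# Toy data: 1-D "vertex positions" stand in for full 3-D meshes.
neutral = [0.0, 0.0, 0.0]
happy   = [1.0, 0.0, 0.0]   # hypothetical "happy" target
angry   = [0.0, 2.0, 0.0]   # hypothetical "angry" target

frame = blend_expression(neutral, [happy, angry], [0.5, 0.25])
print(frame)  # [0.5, 0.5, 0.0]
```

With all weights at zero the neutral face is recovered, and any intermediate expression is a weighted offset from it — which is why ~30 targets suffice to describe an entire performance.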
ABOUT THE REALTIME DEMO
1. This demo is brought to life by a collection of methodologies we collectively call “FaceWorks”.
2. Mesh deformation is accomplished with skinning and by blending between facial targets given to us from ICT.
3. Diffuse color, specular color, diffuse bump, specular bump, and displacement are all 4096×4096 textures, for each of the 30 facial targets.
4. Originally the texture data was several gigabytes, but it is reduced to 300 MB through texture compression and tile-based texture optimizations whereby redundant tiles are discarded.
5. Character is dynamically tessellated with smoothing and displacement driven by HDR displacement maps.
6. Skin includes HDR subsurface scattering and transmission. It is computed in texture space to eliminate issues at silhouettes, with a backface optimization to eliminate unnecessary work. Transmission is most visible in lighting environment 3, where light can be seen through the ear lobe but is blocked by the veins in the ear.
7. Eyes include subsurface scattering and a raytraced iris/pupil shader.
8. Full-scene HDR depth-of-field with bokeh and dynamic tone-mapping.
9. The average pixel on the screen is the result of about 8000 instructions
a. This equates to approximately 40,000 FLOPs per pixel
b. At full HD resolution that is 82 billion FLOPs per frame
c. At 60 fps that is 4.9 trillion FLOPs per second
d. Those are shader instructions and do not include the 161 filtered texture fetches / pixel
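The arithmetic in point 9 is easy to check. A quick sketch, assuming the ~5 FLOPs per shader instruction implied by the readme’s own ratio and a 1920×1080 frame:

```python
# Reproduce the readme's back-of-the-envelope FLOP estimates.
flops_per_pixel = 40_000           # ~8000 instructions * ~5 FLOPs each (implied ratio)
pixels_per_frame = 1920 * 1080     # full HD
flops_per_frame = flops_per_pixel * pixels_per_frame
flops_per_second = flops_per_frame * 60  # at 60 fps

print(f"{flops_per_frame:,}")    # 82,944,000,000     -> ~82 billion per frame
print(f"{flops_per_second:,}")   # 4,976,640,000,000  -> ~4.9 trillion per second
```

The numbers line up with the readme’s 82 billion FLOPs per frame and ~4.9 trillion FLOPs per second.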
With Radeon cards, you get this error message:
This demo is NVIDIA-only.
Source: Geeks3D forum
5 thoughts on “NVIDIA FaceWorks Realtime DX11 Tech-Demo Available”
Do I need an NVIDIA graphics card to run it, or can I use a Radeon too?
I updated the post with the error message shown with Radeon cards. So yes, you need an NVIDIA card to run the demo.
I hate NVIDIA for this. They always include CUDA in DX demos and NVIDIA-specific GL extensions in GL demos. And they didn’t even use it! Or if they really did, why not go DirectCompute?
It’s their hardware, their technology, their demo. There is nothing wrong with them wanting to differentiate their product. Why the hate?
Runs smooth with a GTX 560.
Comments are closed.