
Horizon-Based Image-Space Ambient Occlusion

NVIDIA has published a new paper from SIGGRAPH 2008 about HBAO (Horizon-Based Ambient Occlusion), an improved form of screen-space ambient occlusion (SSAO). According to the paper, HBAO is easy to integrate into an existing game engine:
– Input data: eye-space depths and normals
– Rendered in a post-processing pass

The paper is available here: Horizon-Based Image-Space Ambient Occlusion (SIGGRAPH 2008).
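The core idea can be sketched on the CPU: for each pixel, march along a few image-space directions in the depth buffer and find the horizon angle above which incoming light is blocked. Here is a minimal toy sketch in Python, assuming a view-facing normal and a single per-direction horizon term; the actual paper adds tangent angles, distance attenuation, and jittered sampling directions, and of course runs as a fragment shader:

```python
import math

def hbao(depth, x, y, radius=4):
    """Toy horizon-based AO for one pixel of a 2D eye-space depth buffer.

    Assumes the surface normal faces the viewer (tangent angle = 0) and
    treats pixel steps and depth units as comparable, which a real
    implementation avoids by reconstructing eye-space positions.
    Returns 1.0 for fully unoccluded, lower values for occluded pixels.
    """
    h, w = len(depth), len(depth[0])
    z0 = depth[y][x]
    dirs = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # image-space march directions
    occlusion = 0.0
    for dx, dy in dirs:
        horizon = 0.0  # sin of the highest elevation angle seen along this ray
        for step in range(1, radius + 1):
            sx, sy = x + dx * step, y + dy * step
            if not (0 <= sx < w and 0 <= sy < h):
                break
            dz = z0 - depth[sy][sx]  # positive if the sample is closer (occluding)
            if dz > 0:
                # sin(elevation) = dz / sqrt(step^2 + dz^2)
                horizon = max(horizon, dz / math.hypot(step, dz))
        occlusion += horizon
    return 1.0 - occlusion / len(dirs)
```

On a flat depth buffer this returns 1.0 (no occlusion); for a pixel lying deeper than its neighbors, the raised horizons darken it.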

OpenGL Benchmarking On Linux Reaches New Heights

The Phoronix Test Suite is the most comprehensive testing and benchmarking platform available for Linux. It is always looking for new and more demanding OpenGL benchmarks, and two have just been added to the Test Suite: Lightsmark (an OpenGL lighting benchmark built around the Lightsprint SDK) and Unigine (a real-time engine focused on photorealistic 3D capabilities for both gaming and virtual reality systems).

Read the complete news here: OpenGL Benchmarking On Linux.

This news gives me the opportunity to start a new category at Geeks3D: Linux.

OpenGL and Mobile Devices: Round 2

Richard S. Wright Jr., lead author of The OpenGL SuperBible, has written about the intersection of OpenGL and mobile devices.

Read his complete article HERE.

The graphics hardware behind the iPhone and iPod Touch is a PowerVR MBX Lite, which uses Tile-Based Deferred Rendering.

There are a few limitations you should know from the start:
* There is no stencil or accumulation buffer.
* There are only two texture units.
* The maximum texture size is 1024×1024 (power-of-two dimensions only).
* The maximum space for textures and surfaces is 24MB.
* Only 2D textures are supported.
* There is no software rendering fallback.

The PowerVR chip uses a full floating-point pipeline throughout. The OpenGL lighting model is fully hardware accelerated, so there is no need to use fixed-point values for lighting and material values or for vertex data. For best performance, use directional lights instead of point lights when possible, and try to always use indexed strips for geometry submission. To minimize bandwidth, you can use unsigned byte values for colors, and either unsigned bytes or shorts instead of floats for texture coordinates.
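The bandwidth advice in that last paragraph is easy to quantify. A quick sketch, assuming a typical interleaved vertex with a position, a color, and one set of texture coordinates (the layouts and struct format strings are illustrative, not actual GL ES declarations):

```python
import struct

# All-float layout: 3 floats position, 4 floats color, 2 floats texcoords.
FLOAT_VERTEX = struct.calcsize("<3f4f2f")

# Packed layout: 3 floats position, 4 unsigned bytes color (GL_UNSIGNED_BYTE),
# 2 signed shorts texcoords (GL_SHORT). '<' disables struct padding.
PACKED_VERTEX = struct.calcsize("<3f4B2h")

def mesh_bytes(vertex_size, vertex_count):
    """Total vertex-buffer size for an interleaved mesh."""
    return vertex_size * vertex_count
```

For a 10,000-vertex mesh that is 360,000 bytes all-float versus 200,000 bytes packed, roughly a 44% bandwidth saving, and the packed 20-byte stride conveniently stays 4-byte aligned.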

DirectX 9 to DirectX 11, where did 10 go?

Here is an analysis, by game developer Susheel, of the new features that DirectX 11 will bring.

Read the complete analysis HERE.

What is really interesting to see is the emergence of what Microsoft terms the Compute Shader, no doubt marketing speak for GPGPU. Microsoft claims it will allow the GPU, with its awesome power, to be used for more than just graphics, which smells like CUDA (Compute Unified Device Architecture) to me.

Issues like multi-threaded rendering and resource handling have been a long time coming, and yes, it's a good thing we will finally see them in the newer version; it makes my job as a game developer a whole lot easier. Most details on Shader Model 5.0 are still pretty sketchy, so I won't go into things like shader length and function recursion, but I hope such issues are addressed satisfactorily in the new shader model.

Microsoft is still fixated on releasing version 11 only for Vista, so don't expect your XP machine to ever run DirectX 11, even if you buy brand-new hardware.

DirectX 11 Details Emerge

Similar to DirectX 10, DirectX 11 will be available only on Windows Vista and future versions of Microsoft’s operating system. DirectX 11 will add new compute shader technology that Microsoft says will allow GPUs to be used “for more than just 3D graphics,” allowing developers to utilize video cards as parallel processors.

DirectX 11 will support tessellation, a feature which can potentially assist developers in making models appear smoother when seen up close. Multi-threaded resource handling is also incorporated, making it easier for games to utilize multi-core processors in a user’s machine.
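Tessellation amplifies a coarse mesh into a finer one on the GPU (in D3D11, via new hull and domain shader stages). As a rough illustration of why this helps smoothness, here is a toy CPU sketch of the simplest amplification step, midpoint subdivision of a triangle into four; the names are illustrative and real hardware tessellation is far more configurable:

```python
def subdivide(tri):
    """Split one triangle into four using edge midpoints."""
    a, b, c = tri
    mid = lambda p, q: tuple((pi + qi) / 2 for pi, qi in zip(p, q))
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def tessellate(tris, levels):
    """Apply midpoint subdivision repeatedly: triangle count grows 4x per level."""
    for _ in range(levels):
        tris = [t for tri in tris for t in subdivide(tri)]
    return tris
```

In a real tessellator the new vertices would also be displaced (e.g. toward a curved surface or by a displacement map), which is what actually makes close-up silhouettes look smoother.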


Related links:
DirectX 11 preliminary details emerge at Gamefest
Microsoft: DirectX 11 To Use GPU For Parallel Processing