NVIDIA APEX SDK 1.0 Beta Available

NVIDIA APEX destruction demo
Destruction demo tested on a Radeon HD 5770 ;)



NVIDIA has released the first public version of its APEX SDK. APEX is a new SDK built on top of the regular PhysX SDK 2.8.4.

APEX is a C++ SDK:

The APEX SDK comprises a set of C++ header files that define the SDK public API, a set of binary libraries that implement the functionality, as well as documentation of the API and of the functionality provided. Source code to the libraries may be provided to certain developers, according to business considerations. The APEX SDK also includes a set of tools (binary executables and DCC plugins) for content creation.

APEX is made up of 3 modules:

  • APEX clothing
  • APEX destruction (GPU accelerated with GRB)
  • APEX particles



APEX uses CUDA to accelerate physics computations (on GeForce cards), but when CUDA is not supported, APEX falls back to CPU routines for physics (that's why all APEX demos ran on my Radeon HD 5770). When CUDA is used (it's actually up to the developer to enable CUDA or not, by allocating a CUDA context), the developer can take advantage of D3D/OpenGL CUDA interop to update D3D/OpenGL buffers directly with CUDA data, without a round-trip through the host application.
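The backend selection described above can be sketched as a small dispatch decision. This is a minimal illustration, not the real APEX API: all the names below (`SdkDesc`, `selectBackend`, and so on) are hypothetical. The point is simply that the application decides whether to hand the SDK a CUDA context, and the SDK falls back to CPU routines when no usable CUDA device or context is available.

```cpp
#include <cassert>

// Hypothetical sketch of APEX-style backend selection (illustrative
// names only, not the actual APEX API).
enum class PhysicsBackend { Cuda, Cpu };

struct SdkDesc {
    bool hasCudaDevice;    // a CUDA-capable GeForce is present
    bool appProvidesCuda;  // the application allocated a CUDA context
};

// CUDA is used only when the hardware supports it AND the developer
// opted in by providing a context; otherwise physics runs on the CPU
// (e.g. on a Radeon HD 5770, as in the demos above).
PhysicsBackend selectBackend(const SdkDesc& d) {
    if (d.hasCudaDevice && d.appProvidesCuda)
        return PhysicsBackend::Cuda;
    return PhysicsBackend::Cpu;
}
```

The key design point is that the fallback is transparent to the content: the same assets simulate on either path, only the performance differs.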

You can download the APEX SDK HERE (an account is required). The SDK includes detailed documentation as well as several code samples (sources + exe).



NVIDIA APEX clothing demo
Clothing demo tested on a Radeon HD 5770

NVIDIA APEX particles demo
Particles demo tested on a Radeon HD 5770

The NVIDIA APEX SDK is designed to address three important issues facing game physics:

  • Significant programmer involvement is required.
  • Game physics content typically gets designed to the game’s “min-spec” system.
  • Game engine performance limitations.

The first problem arises because the traditional interface to middleware physics is an API designed for programmers. These APIs need to be general-purpose, so they expose a very low-level interface: rigid bodies, shapes, joints, particles, impulses, and collisions. They are basically a "toolbox" of all the primitive components of a physics simulation. This gives the game programmer a lot of control, but in return it requires a lot of non-trivial work. That work requires broad physics programming experience and is often not budgeted for by game developers. It is similar, in many ways, to the specialized work of writing an efficient rendering engine on top of D3D. With graphics, artists don't directly create vertex buffer objects in the editor, but for physics that is exactly what engine integration requires. Even when authoring tools (like plug-ins for Max/Maya) are made available to reduce the programmer involvement required, it's the same low-level toolbox that is exposed to the artists. If the artists want to create content at a higher level of abstraction, not using these primitive building blocks, then a lot of programmer involvement is required again.
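The "toolbox" point above can be made concrete with a sketch. The following is not PhysX or APEX code; every type and function name is hypothetical. It contrasts what a programmer must hand-assemble from low-level primitives (bodies plus breakable joints for a destructible wall) with the single high-level call an artist-facing layer would expose instead:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical low-level primitives, standing in for the rigid bodies
// and joints a general-purpose physics API exposes.
struct RigidBody { float x, y, z; };
struct Joint     { std::size_t bodyA, bodyB; float breakForce; };

struct LowLevelScene {
    std::vector<RigidBody> bodies;
    std::vector<Joint>     joints;
};

// Programmer-level work: every brick and every bond created by hand.
void buildWallLowLevel(LowLevelScene& s, int bricksX, int bricksY) {
    for (int y = 0; y < bricksY; ++y)
        for (int x = 0; x < bricksX; ++x)
            s.bodies.push_back({float(x), float(y), 0.0f});
    // Bond each brick to its right-hand neighbour (simplified: a real
    // wall would also bond vertical neighbours).
    for (int y = 0; y < bricksY; ++y)
        for (int x = 0; x + 1 < bricksX; ++x) {
            std::size_t a = std::size_t(y) * bricksX + x;
            s.joints.push_back({a, a + 1, 500.0f});
        }
}

// Artist-level abstraction: one call, one asset; the primitive
// building blocks stay hidden inside.
LowLevelScene createDestructibleWall(int bricksX, int bricksY) {
    LowLevelScene s;
    buildWallLowLevel(s, bricksX, bricksY);
    return s;
}
```

The article's claim is that APEX moves the boundary so that content creators work at the second level, not the first.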

The second problem arises because there are huge performance differences between consoles, and between different generations of PC CPUs and GPUs. So, if game developers want their game to take advantage of higher-end hardware, when available, to improve the quality or scale of their game physics, custom authoring is required for each platform or hardware configuration. The result is that, in practice, only "lowest common denominator" content gets created, and users don't benefit from better hardware.

The third problem arises because many game engines assume that the world is largely static: there are many static objects but few dynamic ones. They therefore tend to use very heavyweight data structures for their moving objects, for example an "Actor" class for each and every crate, barrel, or piece of debris flying around the level. So even though the physics system might be able to handle very complex simulations, the overhead of the game engine (the scene graph, the AI, the rendering) makes it impossible, in practice, to use more than a few dozen dynamic objects in a level.
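The overhead argument above is a data-structure argument, and can be sketched in a few lines. This is an illustration, not engine code; both types below are hypothetical. A per-object "actor" carries scene-graph, AI, and rendering bookkeeping with every piece of debris, while a pooled layout stores only the simulation state, contiguously, and presents the whole debris field as a single game object:

```cpp
#include <cstddef>
#include <vector>

// Heavyweight per-object wrapper, as many engines use: most of this
// state is engine bookkeeping, not simulation data.
struct HeavyActor {
    float pos[3], vel[3];
    void* sceneGraphNode;  // scene-graph linkage
    void* aiState;         // AI bookkeeping
    void* renderHandle;    // renderer linkage
    char  name[64];        // per-object identity
};

// Lightweight pool: thousands of debris pieces behave as ONE game
// object; only simulation-relevant data is stored, in flat arrays.
struct DebrisPool {
    std::vector<float> px, py, pz;  // positions
    std::vector<float> vx, vy, vz;  // velocities

    std::size_t spawn(float x, float y, float z) {
        px.push_back(x); py.push_back(y); pz.push_back(z);
        vx.push_back(0); vy.push_back(0); vz.push_back(0);
        return px.size() - 1;
    }
    std::size_t count() const { return px.size(); }
};
```

With the pooled layout, the engine-side cost stays constant as the debris count grows, which is the property the article says APEX exploits.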

APEX addresses each of the above problems as follows:

  • 1 – Significant programmer involvement is required to take a relatively abstract PhysX-SDK and create a lot of meaningful content.

    APEX provides a high-level interface to artists and content developers. This reduces the need for programmer time, adds automatic physics behavior to familiar objects, and leverages multiple low-level PhysX-SDK features with a single easy-to-use authoring interface.

  • 2 – Game physics content typically gets designed to the game’s “min-spec” system.

    APEX requires each functional module to provide one or more ways to “scale the content” when running on better-than-min-spec systems, and to do this without requiring a lot of extra work from the game developer (artist or programmer, but especially programmer).

  • 3 – Game engine performance limitations.

    APEX avoids many of the game engine bottlenecks by allowing the designer to identify the physics that is important to the game logic, and what can be sent directly to the renderer, bypassing the fully generic path through the game engine. It also allows the game engine to treat an APEX asset as a single game object, even though it may actually comprise many hundreds or even thousands of low-level physics components.
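The "scale the content" requirement from point 2 can be illustrated with a small sketch. Nothing here is the real APEX API; the names and the benchmark-score idea are assumptions for the example. The asset is authored once with a min-spec and a high-end quality range, and the runtime picks a budget from a hardware score instead of requiring per-platform re-authoring:

```cpp
#include <algorithm>

// Hypothetical scalable asset parameter (illustrative names only):
// authored once, scaled at runtime.
struct ScalableEmitter {
    int minParticles;  // guaranteed budget on a min-spec machine
    int maxParticles;  // allowed budget on high-end (e.g. CUDA) hardware

    // score01: 0.0 = min-spec system, 1.0 = high-end system, as
    // measured by some runtime benchmark or device query.
    int particleBudget(float score01) const {
        float t = std::clamp(score01, 0.0f, 1.0f);
        return minParticles +
               static_cast<int>(t * float(maxParticles - minParticles));
    }
};
```

The same pattern would apply to debris chunk counts in the destruction module or cloth resolution in the clothing module: one authored asset, many effective quality levels.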

[via]




