Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - JeGX

The new GeForce Game Ready driver, release 344.65 WHQL, includes improvements that allow GeForce owners to continue to have the ultimate gaming platform. In addition, this Game Ready WHQL driver ensures you'll have the best possible gaming experience for Assassin's Creed: Unity.

- Assassin's Creed Unity GeForce Game Ready Driver Available Now

- R344.65 Win7/Win8 64-bit
- R344.65 Win7/Win8 32-bit


Big Pictures / MSI Radeon R9 290X Gaming 4G (Ultra Quick Review)
« on: November 10, 2014, 03:06:06 PM »
Better late than never, I received this nice card: MSI's Radeon R9 290X Gaming 4G. This Radeon is powered by the Hawaii GPU (XT version) and comes with 4GB of GDDR5 and the Twin Frozr IV VGA cooler (I must say that the Twin Frozr IV is absolutely quiet at idle).


GPU Data:

AMD Catalyst 14.9 WHQL:

GPU Caps Viewer:


3D-Tech News Around The Web / State of Linux Gaming
« on: November 10, 2014, 02:43:13 PM »
Sadly, it's pretty clear that if you run these games on Linux your experience isn't going to be as good, and you'll be getting less "gaming value" vs. Windows. We're not talking about a bunch of little indie titles, these are big releases: Borderlands: The Pre-Sequel, Borderlands 2, Tropico 5, XCOM: Enemy Unknown, Sid Meier's Civilization V. My take is that the devs doing these ports just aren't doing their best to optimize these releases for Linux and/or OpenGL.

A nice little tidbit from this report: "Unfortunately, Aspyr are currently still unable to provide support for non-Nvidia graphics cards, as with Borderlands 2. This doesn't mean the game won't work if you have an AMD or Intel GPU, but just that you're not guaranteed to receive help from the developer - the current driver situation for non-Nvidia cards may lead to degraded performance." Huh? This is not a good situation.


3D-Tech News Around The Web / Transparency (or Translucency) Rendering
« on: November 07, 2014, 01:25:54 PM »
Like many other visual effects, games attempt to mimic transparent (or translucent as it’s often synonymously referred to in the games industry) objects as closely as possible. Real world transparent objects are often modelled in games using a simple set of equations and rules; simplifications are made, and laws of physics are bent, in an attempt to reduce the cost of simulating such a complex phenomenon. For the most part we can get plausible results when rendering semi-transparent objects by ignoring any refraction or light scattering in participating media. In this article we’re going to focus on a few key methods for transparency rendering, discuss the basics and propose some alternatives/optimizations which should be of use to anyone who hasn’t heard them before.

Full article:

Over the course of the past few months, we have been re-evaluating our entire approach to image quality in Frostbite. Our overall goal has been to achieve a coherent and cinematic look while simplifying high-quality content creation for our game teams and artists. Moving to physically based rendering (PBR) was the natural way for us to achieve this.

This talk & detailed course notes cover what we've learnt during this R&D process and transition that we've gone through together with multiple game teams within Electronic Arts – all the different concepts & steps needed to transition a production game engine to PBR, including details that are often bypassed in the literature.

All slides:

English forum / Funny Webcam
« on: November 04, 2014, 12:21:25 PM »
A quick test that mixes geometry instancing and a webcam. Each cube is stretched along the z-axis according to the value of the texture (the webcam output). I will put the demo in the code sample pack later.
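
For reference, here is a minimal GLSL vertex shader sketch of the idea (the attribute and uniform names are placeholders, not the ones used by the actual demo): each instance fetches one texel of the webcam texture and uses it to scale the cube along the z-axis.

Code: [Select]
#version 150
uniform mat4 mvp;              // model-view-projection matrix
uniform sampler2D webcam_tex;  // webcam output
uniform ivec2 grid_size;       // number of cubes along x and y
in vec4 position;              // cube vertex position

void main()
{
  // One cube per texel: derive the texel coordinate from the instance ID.
  ivec2 texel = ivec2(gl_InstanceID % grid_size.x, gl_InstanceID / grid_size.x);
  float h = texelFetch(webcam_tex, texel, 0).r;  // value in [0, 1]

  // Stretch the cube along the z-axis according to the texel value.
  vec4 p = position;
  p.z *= 1.0 + h * 10.0;

  // Lay the instances out on an x/y grid.
  p.xy += vec2(texel) * 2.0;

  gl_Position = mvp * p;
}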

English forum / GLSL Hacker released
« on: November 03, 2014, 08:19:15 PM »
A new version of GLSL Hacker is available for all platforms (Windows, OS X and Linux).

Complete story is available here:


Code sample pack:

Version - 2014.11.03
+ added new plugin based on FreeType-GL to easily render TrueType fonts
 (all gh_utils.ftgl_xxxxxxx() functions).
+ added support for omni-light shadow mapping with cube shadow maps
  (gh_camera.set_orientation_cubemap(), gh_render_target.create_cubemap(), ...).
+ added support for user clipping planes (gh_renderer.enable_state()/disable_state()).
+ added get_orientation_euler_angles(), get_absolute_orientation_euler_angles(),
  get_orientation_vectors(), get_orientation_vector_z(), get_absolute_orientation_vectors(),
  get_absolute_orientation_vector_z() to the gh_object lib.
! updated camera orientation. Now a camera's children are correctly oriented.
+ added copy_transform() to gh_object lib.
+ added detection of OS X 10.10 Yosemite.

Acer has launched a new gaming laptop with a 4K display.

Main features:
- Intel Core i7-4710HQ
- 16 GB of DDR3 memory
- NVIDIA GeForce GTX 860M
- 256GB SSD + 1TB HDD
- 4K display (3840 x 2160 pixels)


English forum / User clipping planes
« on: November 03, 2014, 04:52:51 PM »
Support for user clipping planes has been added in GLSL Hacker 0.8.0+. User clipping planes are very easy to use: you need a GLSL program that writes to gl_ClipDistance, and you need to enable one of the available user clipping planes with gh_renderer.enable_state("GL_CLIP_DISTANCE0").
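
For reference, here is a minimal GLSL vertex shader sketch of the shader side (uniform and attribute names are placeholders): the vertex shader writes the signed distance of the vertex to the clip plane into gl_ClipDistance[0], and fragments with a negative distance are discarded once GL_CLIP_DISTANCE0 is enabled as shown above.

Code: [Select]
#version 150
uniform mat4 mvp;          // model-view-projection matrix
uniform mat4 model;        // model (world) matrix
uniform vec4 clip_plane;   // plane equation (a, b, c, d) in world space
in vec4 position;

void main()
{
  vec4 world_pos = model * position;

  // Signed distance of the vertex to the plane: fragments with a negative
  // value are clipped when GL_CLIP_DISTANCE0 is enabled on the host side.
  gl_ClipDistance[0] = dot(clip_plane, world_pos);

  gl_Position = mvp * position;
}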

A demo is available in the host_api/Clip_Planes/ folder of the code sample pack v2.25.0.

This demo is based on this article:

English forum / FreeType-GL plugin (TTF and OTF)
« on: November 03, 2014, 04:17:00 PM »
One of the coolest features of GLSL Hacker 0.8.0 is the support of FreeType-GL via a dedicated plugin. Thanks to this new plugin, GLSL Hacker can now load any TTF (TrueType font) or OTF (OpenType font) file. I also added a small Lua lib (gx_font.lua in the libs/lua/ folder of GLSL Hacker) to make things easier.

INIT script:
Code: [Select]
-- Load an otf file:
font = ftgl_load_font(demo_dir .. "data/BebasNeue.otf", 30)

FRAME script:
Code: [Select]
ftgl_print(font, 10, 60, 1.0, 1.0, 0, 1.0, "GLSL Hacker rocks!")

Demos are available in the host_api/freetype-gl/ folder of the code sample pack v2.25.0.

The plugin is available for all platforms (Windows, OSX and Linux).

English forum / Omnidirectional Shadow Mapping
« on: November 03, 2014, 04:00:04 PM »
GLSL Hacker supports shadow mapping with omni-directional lights.  Omni-directional shadow mapping is cool because a light can cast shadows in all directions.  Omni-directional shadow mapping (or cubic shadow mapping) relies on a cubemap and requires 6 passes to render the scene.
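
To give an idea of the lookup at shading time, here is a minimal GLSL fragment shader sketch (the names and the depth encoding are assumptions, not necessarily what the demo uses): the direction from the light to the fragment is used to sample the cube shadow map, and the stored light-to-occluder distance is compared with the current light-to-fragment distance.

Code: [Select]
#version 150
uniform samplerCube shadow_cubemap;  // stores normalized light-to-occluder distances
uniform vec3 light_pos;              // light position in world space
uniform float far_plane;             // distance used to normalize the stored depth
in vec3 world_pos;                   // fragment position in world space
out vec4 frag_color;

float shadow_factor(vec3 p)
{
  vec3 light_to_frag = p - light_pos;
  float current_dist = length(light_to_frag);

  // The cubemap is sampled with the light-to-fragment direction.
  float stored_dist = texture(shadow_cubemap, light_to_frag).r * far_plane;

  float bias = 0.05;  // avoids shadow acne
  return (current_dist - bias > stored_dist) ? 0.3 : 1.0;  // 0.3 = in shadow
}

void main()
{
  frag_color = vec4(vec3(shadow_factor(world_pos)), 1.0);
}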

This demo is available in the host_api/Shadow_Mapping/Omnidirectional_Shadows/ folder of the code sample pack v2.25.0.

One thing we wanted to improve upon with GRID Autosport was our trackside environments, in particular the grass. The old system had served us well but it was time for an improvement, so step up Rich Kettlewell, one of our programming wizards, who set about the task of doing just that.

Below you’ll find a brief presentation which goes into how we achieved this and the effects it has on the game itself. The presentation was given at the Develop conference and while it was originally meant for game developers we hope you enjoy taking a look at what goes on behind the scenes.



October 20th, 2014 – The Khronos™ Group today announced the ratification and public release of the finalized OpenVX™ 1.0 specification, an open, royalty-free standard for cross-platform acceleration of computer vision applications. OpenVX enables performance and power-optimized computer vision processing, especially important in embedded and real-time use cases such as face, body and gesture tracking, smart video surveillance, advanced driver assistance systems (ADAS), object and scene reconstruction, augmented reality, visual inspection, robotics and more. In addition to the OpenVX specification, Khronos has developed a full set of conformance tests and an Adopters Program that enables implementers to test their implementations and use the OpenVX trademark if conformant. Khronos plans to ship an open source, fully-conformant CPU-based implementation of OpenVX 1.0 before the end of 2014. The full OpenVX 1.0 specification and details about the OpenVX Adopters Program are available at

OpenVX defines a higher level of abstraction for execution and memory models than compute frameworks such as OpenCL™, enabling significant implementation innovation and efficient execution on a wide range of architectures while maintaining a consistent vision acceleration API for application portability. An OpenVX developer expresses a connected graph of vision nodes that an implementer can execute and optimize through a wide variety of techniques such as: acceleration on CPUs, GPUs, DSPs or dedicated hardware, compiler optimizations, node coalescing, and tiled execution to keep sections of processed images in local memories. This architectural agility enables OpenVX applications on a diversity of systems optimized for different levels of power and performance, including very battery-sensitive, vision-enabled, wearable displays.

“Increasingly powerful and efficient processors and image sensors are enabling engineers to incorporate visual intelligence into a wide range of systems and applications,” said Jeff Bier, founder of the Embedded Vision Alliance. “A key challenge for engineers is efficiently mapping complex algorithms onto the processor best suited to the application. OpenVX is an important step towards easing this challenge.”

The precisely defined specification and conformance tests for OpenVX make it ideal for deployment in production systems, where cross-vendor consistency and reliability are essential. OpenVX is complementary to the popular OpenCV open source vision library, which is also used for application prototyping but is not as tightly defined and lacks OpenVX graph optimizations. Khronos has defined the VXU™ utility library to enable developers to call individual OpenVX nodes as standalone functions for efficient code migration from traditional vision libraries such as OpenCV. Finally, as with any Khronos specification, OpenVX is extensible to enable nodes to be defined and deployed to meet customer needs, ahead of being integrated into the core specification.

Press release:

English forum / Very Simple OpenGL Extensions Viewer
« on: October 19, 2014, 05:58:29 PM »
Here is a small demo (in Lua) that shows how to list the OpenGL extensions exposed by the driver.

You can browse the extensions list using the following keys:
- PAGE_DOWN / PAGE_UP: move down and up in the list
- HOME: jump to the start of the list
- END: jump to the end of the list

The demo is available in the code sample pack, in the host_api/OpenGL_Extensions/ folder.

I haven't tested the demo on OS X and Linux but it should work. If it doesn't, let me know in this thread.

English forum / GPU PhysX on Linux
« on: October 18, 2014, 04:07:32 PM »
The new PhysX SDK version 3.3.2 adds GPU PhysX acceleration on Linux (see THIS NEWS). I updated GLSL Hacker with this new SDK (GLSL Hacker v0.7.2.0) and I added a new particle demo in the code sample pack:


This is a simple particle/fluids demo that fills a pool with particles:

On Linux, you can start the demo with the command line:
Code: [Select]
$ ./GLSLHacker /demofile=\"path_to_code_sample_pack/host_api/PhysX/Pool/demo_gl2_v1.xml\"

So far, I haven't managed to get GPU acceleration working on Linux (Mint 17 64-bit). I tested with the latest R331.104 and with R340.xx. There is a cuInit failure (I installed the latest CUDA toolkit v6.5.14). The cuInit failure is also present with the PhysX SDK samples.

Here are some benchmark numbers with GPU PhysX (currently only under Windows) and CPU PhysX (Windows and Linux).

To force CPU PhysX, just edit the demo file (demo_gl2_v1.xml) and update line 182:
Code: [Select]
gpu_physx = 0

Benchmark settings: 6000 particles, 1280x720 windowed.

On Windows with a GeForce GTX 660 (R337.50) + Intel Core i5 2320 @ 3GHz:
- GPU PhysX: around 420 FPS
- CPU PhysX: around 150 FPS

On Linux Mint 17 64-bit, with a GeForce GTX 680 (R331.104) + AMD FX 6100 @ 3.3GHz:
- GPU PhysX: not available (cuInit failed)
- CPU PhysX: around 60 FPS (this CPU sucks!)

As soon as GPU PhysX is enabled, we should see a jump in FPS on Linux (> 100 FPS on my Linux box).

qu3e is a compact, light-weight and fast 3D physics engine in C++. It has been specifically created to be used in games. It is portable, with no external dependencies other than various standard C header files (such as cassert and cmath). qu3e is designed to have an extremely simple interface for creating and manipulating rigid bodies.

qu3e is of particular interest to those in need of a fast and simple 3D physics engine, without spending too much time learning about how the whole engine works. In order to keep things very simple and friendly for new users, only box collision is supported. No other shapes are supported (capsules and spheres may be added in the future if requested).

Since qu3e is written in C++, it is intended for users familiar with C++. The inner code of qu3e has quite a few comments and is a great place for users to learn the workings of a 3D physics engine.

qu3e stands for "cube", since the 3 looks slightly like the letter b and boxes (or cubes!) are the primary type of collision object.


English forum / OpenGL 4 Shader Subroutines - NVIDIA/OSX bug?
« on: October 17, 2014, 06:32:29 PM »
I struggled for many hours recently with subroutines and I didn't manage to make them work correctly under OS X with a GeForce GPU.
I added a new simple demo that shows a pqtorus rendered three times, each time with a different subroutine (texture, phong, phong+texture).
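
For context, here is a minimal GLSL sketch of the subroutine mechanism the demo relies on (function names are placeholders, not the ones in the demo): three functions implement the same subroutine type, and the host application selects one of them per draw call with glUniformSubroutinesuiv().

Code: [Select]
#version 400 core
uniform sampler2D color_tex;
in vec3 normal;
in vec2 uv;
out vec4 frag_color;

// Subroutine type shared by the three shading paths.
subroutine vec4 ShadeModel(vec3 n, vec2 tc);
subroutine uniform ShadeModel shade_model;  // selected by the host application

subroutine(ShadeModel) vec4 shade_texture(vec3 n, vec2 tc)
{
  return texture(color_tex, tc);
}

subroutine(ShadeModel) vec4 shade_phong(vec3 n, vec2 tc)
{
  float ndl = max(dot(normalize(n), vec3(0.0, 0.0, 1.0)), 0.0);
  return vec4(vec3(ndl), 1.0);
}

subroutine(ShadeModel) vec4 shade_phong_texture(vec3 n, vec2 tc)
{
  return shade_phong(n, tc) * shade_texture(n, tc);
}

void main()
{
  frag_color = shade_model(normal, uv);
}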

The demo is available in the code sample pack:

The latest GLSL Hacker is recommended.

The demo works fine on Windows. On OS X, the demo works fine with Intel and AMD GPUs but not with NVIDIA GPUs. With the GT650M / OS X, the same subroutine is used to shade all meshes and that's the bug. I suspect a bug on OS X with GeForce GPUs because the demo works fine everywhere else. But since it's an NVIDIA-related issue, my code could also be guilty...

OK - NVIDIA GeForce GTX 660 - R337.50 - Win7 64-bit

OK - Intel HD 4000 - OS X 10.9

OK - AMD Radeon HD 6870 - OS X 10.9 (Hackintosh)

ERROR - NVIDIA GeForce GT 650M (Macbook Retina Mid-2012) - OS X 10.9

English forum / GLSL Hacker released
« on: October 17, 2014, 03:16:11 PM »
A new version of GLSL Hacker is ready, this time for all OSes (Windows, OSX and Linux) at the same time!

The PhysX plugin has been updated with the latest PhysX SDK v3.3.2.


The code sample pack has been updated too:

Version - 2014.10.17
! improved the FBX plugin (Windows and OS X) for loading Autodesk FBX files.
! updated the PhysX 3 plugin (Windows, Linux and OS X) with
  latest PhysX SDK v3.3.2.
+ added a new OBJ loader (for testing purposes) and a way to change
  the current OBJ loader with gh_model.set_current_3d_obj_loader(name):
  "ObjLoaderV1" or "ObjLoaderV2". Default is "ObjLoaderV1".

Full changelog is available here:
