Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - JeGX

Pages: 1 ... 18 19 [20] 21 22 ... 29

When it comes to gaming, frame rates can make or break the quality of the experience. Although the pair of XFX GTX 260 Black Editions did score higher on most of the benchmarks, I think the HIS Radeon HD 5870 may be the better choice if you are torn between the two. As demonstrated in Colin McRae: DiRT 2, SLI doesn't always improve performance; in fact, with SLI enabled we actually see a performance hit in DiRT 2, and that is just one of many games that may behave this way. Add to that the fact that you cut the overall power consumption of your computer by nearly 200 watts, and in my opinion it is a clear choice. Also consider that the ATI Radeon 5000 series is currently the only DirectX 11 platform if you are looking for the best visual experience in the latest DirectX 11 games...

3D-Tech News Around The Web / OpenGL: Uniform Buffers vs Texture Buffers
« on: January 21, 2010, 03:01:55 PM »

Uniform Buffers
- Maximum size: 64KByte (or more)
- Memory storage: usually local memory
- Use case examples: geometry instancing, skeletal animation, etc.

Uniform buffers were introduced in OpenGL 3.1, but they are also available on driver implementations that don't conform to version 3.1 of the standard via the GL_ARB_uniform_buffer_object extension. As the specification says, uniform buffers provide a way to group GLSL uniforms into so-called "uniform blocks" and source their data from buffer objects, giving the application more streamlined ways to update them.
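As a hypothetical GLSL sketch of the instancing use case mentioned above (block and variable names are made up for illustration), a uniform block lets a whole array of per-instance transforms live in one buffer object:

```glsl
#version 140
#extension GL_ARB_uniform_buffer_object : enable // for pre-3.1 drivers

in vec4 position;

// Hypothetical uniform block for geometry instancing.
// 256 mat4s = 256 * 64 bytes = 16 KB, well within the 64 KB minimum.
layout(std140) uniform InstanceData {
    mat4 modelMatrix[256];
};

void main()
{
    // gl_InstanceID picks this instance's transform out of the block
    gl_Position = modelMatrix[gl_InstanceID] * position;
}
```

On the application side, the buffer backing the block would be filled once with glBufferData and bound with glBindBufferBase, instead of issuing one glUniform* call per instance.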

Texture Buffers
- Maximum size: 128MByte (or more)
- Memory storage: global texture memory
- Use case examples: skinned instancing, geometry tessellation, etc.

Texture buffers also became core OpenGL in version 3.1 of the specification, but they are available via the GL_ARB_texture_buffer_object extension as well (or via the GL_EXT_texture_buffer_object extension on earlier implementations). Buffer textures are one-dimensional arrays of texels whose storage comes from an attached buffer object. They provide the largest capacity for raw data access, much larger than equivalent 1D textures. However, they don't provide texture filtering and the other facilities that are usually available for other texture types; they represent formatted 1D data arrays rather than texture images. From one perspective, however, they are still textures residing in global memory, so the access method is completely different from that of uniform buffers.
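A hypothetical GLSL sketch of the skinned-instancing use case: because a buffer texture is just a formatted 1D array of texels, a skinning matrix can be assembled from four consecutive RGBA32F texels with texelFetch (the sampler and function names here are made up for illustration):

```glsl
#version 140
#extension GL_ARB_texture_buffer_object : enable // for pre-3.1 drivers

// Hypothetical buffer texture holding bone matrices as RGBA32F texels;
// each mat4 occupies 4 consecutive texels (one per column).
uniform samplerBuffer boneMatrices;

mat4 fetchBone(int bone)
{
    int base = bone * 4;
    // texelFetch addresses the buffer by integer texel index,
    // with no filtering or wrapping involved
    return mat4(texelFetch(boneMatrices, base + 0),
                texelFetch(boneMatrices, base + 1),
                texelFetch(boneMatrices, base + 2),
                texelFetch(boneMatrices, base + 3));
}
```

The 128 MB-class capacity means thousands of skinned instances can share one bound buffer, where a 64 KB uniform block would overflow.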

3D-Tech News Around The Web / MSI HD 5870 Lightning disassembled
« on: January 21, 2010, 02:58:48 PM »

The card will be designed with overclockers in mind and comes with two 8-pin PCI-Express power connectors for maximum power delivery. It also has easily accessible measuring points for the GPU voltages - a voltmodder's dream.

Without a doubt one of the most highly anticipated releases of 2010 will be the NVIDIA GF100 Fermi graphics card. For nearly a year NVIDIA has told the media and their fans that GF100 is coming and that it will be the best performing graphics card the world has ever seen. Read on to see what we think after spending some time with GF100!



For 2010 we have some exciting plans towards Bullet 3.x with support for OpenCL acceleration, binary serialization in a new .bullet file format with improved authoring tools and improved compatibility with Sony PlayStation 3 and other platforms. We plan a new Bullet 2.76 release for January 2010.

3D-Tech News Around The Web / Re: ATI Catalyst 10.1 BETA (8.70) available
« on: January 10, 2010, 04:05:23 PM »
Thanks for the news!!

Downloading.... but the server is overloaded... still one hour to complete the 320MB rar file  ;D

3D-Tech News Around The Web / NVIDIA Optimus Technology
« on: January 07, 2010, 04:42:01 PM »

NVIDIA Optimus technology works on notebook platforms with NVIDIA GPUs. It is unique to NVIDIA. It is seamless and transparent to the user. Its purpose is to optimize the mobile experience by letting the user get the performance of discrete graphics from a notebook while still delivering great battery life. Look for more details next month.

3D-Tech News Around The Web / Qt Graphics and Performance - OpenGL
« on: January 07, 2010, 04:35:50 PM »

Here’s the next instalment of the graphics performance blog series. We’ll begin by looking at some background about how OpenGL and QPainter work. We’ll then dive into how the two are married together in OpenGL 2 Paint Engine and finish off with some advice about how to get the best out of the engine. Enjoy!

3D-Tech News Around The Web / Re: Windows 7 God Mode
« on: January 05, 2010, 09:01:02 PM »
Detailed Howto:

Sadly, this is nothing more than a stupid geek trick using a technique that isn’t widely known—Windows uses GUIDs (Globally Unique Identifiers) behind the scenes for every single object, component, etc. And when you create a new folder with an extension that is a GUID recognized by Windows, it’s going to launch whatever is listed in the registry for that GUID.


Intel has just announced their latest lineup of Nehalem technology based processors. Clarkdale, as the new processor is called by engineers, is the first commercially available 32 nm based processor. It is also the first processor that features a graphics processing core located inside the processor's package - something that was first heard about when AMD talked about their Fusion project. It should be noted, however, that Intel did not put both CPU and GPU onto the same die of silicon.

Instead they took the 32 nm processor core and the 45 nm GPU core, and crammed them into a single processor package, as pictured above. This approach is called Multi-Chip Module (MCM).

Intel's graphics core is based on 45 nm technology and features 177 million transistors on a die size of 114 mm². You could imagine it as an improved G45 chipset (including the memory controller) with some magic sauce to make everything work in the CPU package. The GPU is clocked at 533, 733 or 900 MHz depending on the processor model. Our tested i5 661 features the highest GPU clock speed available without overclocking: 900 MHz. Intel also increased the number of execution units (shaders) from 10 to 12, and the HD video decoder is now able to decode two streams at the same time for picture-in-picture, as found on some Blu-rays to show the director's commentary. HD audio via HDMI is supported as well, which will make setting up a media PC easier, putting this solution on the same level as the latest offerings from AMD and NVIDIA. While the mobile GPU version features advanced power-saving techniques like dynamic clock scaling (think: EIST for GPUs) and overheat downclocking, these features are not available on the desktop part.

3D-Tech News Around The Web / Windows 7 God Mode
« on: January 05, 2010, 08:22:57 PM »

Want a good way to access all the control panel options in Windows 7 in one easy location? Simply make a folder on your desktop and rename it GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}  and you are all set.

General Discussion / Re: video card
« on: January 05, 2010, 08:19:22 PM »
This is just an idea, but maybe you might use a fan like a noctua (nf-p12 or something like this) and try to hook it to the heatsink...


Ocelot is a dynamic compilation framework for heterogeneous systems, which it supports by providing various backend targets for CUDA programs. Ocelot currently allows CUDA programs to be executed on NVIDIA GPUs and x86 CPUs at full speed without recompilation.

3D-Tech News Around The Web / NVIDIA delays Fermi to March 2010
« on: December 28, 2009, 05:34:10 PM »

Nvidia originally planned to launch Fermi in November 2009, but the launch was delayed until CES in January 2010 due to defects, according to market rumors. However, the company recently notified graphics card makers that the official launch will now be in March 2010, the sources noted.

3D-Tech News Around The Web / Nvidia Fermi graphics architecture explained
« on: December 21, 2009, 04:48:16 PM »

There is an exception to this - high-power graphics cards; we love these. They make games sexy and that makes us sexy. At the heart of these is the GPU, and when Nvidia announces it has a new and wonderful one, it is time to take notice. It's codenamed Fermi, after the renowned nuclear physicist Enrico Fermi.


The silicon has been designed from the ground up to match the latest concepts in parallel computing. The basic features list reads thus: 512 CUDA Cores, Parallel DataCache, Nvidia GigaThread and ECC Support.

Clear? There are three billion transistors for starters, compared to 1.4 billion in a GT200 and a mere 681 million on a G80. There's shared, configurable L1 and L2 cache and support for up to 6GB of GDDR5 memory.

The block diagram of Fermi looks like the floor plan of a dystopian holiday camp. Sixteen rectangles, each with 32 smaller ones inside, all nice and regimented in neat rows. That's your 16 SM (Streaming Multiprocessor) blocks with 512 little execution units inside, called CUDA cores.

Each SM has local memory, register files, load/store units and a thread scheduler to run its 32 associated cores. Each of these can run a floating point or an integer instruction every clock. It can also run double precision floating point operations at half that rate, which will please the maths department.


In all honesty, we can completely understand the rampant issue of piracy, and there is one worrisome number worth repeating. Crytek is a good example of a company that wanted to stay PC-only but simply could not build a business model, due to the multi-million dollar damage caused by prospective buyers opting for a pirated copy of the game. When Epic Games released their Unreal Tournament III game, they recorded 40 million different installations trying to access online servers for multiplayer action.

In short, if those 40 million people had purchased the game instead of downloading it from The Pirate Bay and similar sites, Epic Games would have earned approximately two billion dollars [40M times the $49.95 recommended price, with some copies at $39.95 in the US and some at EUR 49.99/$62.44 at the time in EU countries]. Now, imagine what Epic would be able to do with an influx in excess of a billion dollars. Would Unreal Engine 4 need 4-6 years to develop on a limited budget, or could Tim hire as many people as he needs and deliver an engine perfectly optimized for a whole spectrum of PC hardware?
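The back-of-the-envelope arithmetic above checks out; this little sketch only reproduces the article's quoted estimate (40M installations at the $49.95 US recommended price), not actual sales data:

```python
installations = 40_000_000   # UT3 installs seen hitting online servers
price_usd = 49.95            # US recommended retail price quoted above

revenue = installations * price_usd
print(f"${revenue:,.0f}")    # prints "$1,998,000,000", i.e. roughly two billion
```

The regional price differences mentioned in brackets would nudge the total a bit either way, which is why the article rounds to "approximately two billion dollars".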

3D-Tech News Around The Web / OpenCL path tracer / ray tracing demo
« on: December 21, 2009, 12:50:12 PM »

SmallptGPU is a small and simple demo written in OpenCL in order to test the performance of this new standard. It is based on Kevin Beason's Smallpt available at
SmallptGPU has been written using the ATI OpenCL SDK beta4 on Linux but it should work on any platform/implementation (e.g. NVIDIA). Some discussion about this little toy can be found on Luxrender's forum.


General Discussion / Re: Accessing the depth buffer in GLSL
« on: December 21, 2009, 09:39:28 AM »
Oh no, your English is very fine, better than mine.
I googled your nickname and found some messages in French that explain my question  ;)


So welcome to the little GeeXLab community. There is also a blog in French about GeeXLab and similar tools:
It is updated less often than geeks3d, but it's early days (things should get better starting in January), so come by and have a look from time to time. And if you'd like to be a contributor on the HackLAB, that's something that could be arranged...


Thank you for your nice feedback about GeeXLab

I'm preparing a big update of GeeXLab with new features, I hope to release it in early January...
