Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - JeGX

Pages: 1 ... 10 11 [12] 13 14 ... 19
221
Link: http://www.legitreviews.com/article/1176/1/

When it comes to gaming, frame rates can make or break the quality of the experience. Although the pair of XFX GTX 260 Black Editions did score higher on most of the benchmarks, I think the HIS Radeon HD 5870 may be the better choice if you are torn between the two. As demonstrated in Colin McRae: DiRT 2, SLI doesn't always improve performance; in fact, with SLI enabled we actually see a performance hit in DiRT 2, and that is just one of many games that may behave this way. Add to that the fact that you cut the overall power consumption of your computer by nearly 200 Watts, and in my opinion it is a clear choice. Don't forget to consider that the ATI Radeon 5000 series is currently the only DirectX 11 platform if you are looking to get the best visual experience in the latest DirectX 11 games...

222
3D-Tech News Around The Web / OpenGL: Uniform Buffers vs Texture Buffers
« on: January 21, 2010, 03:01:55 PM »
Link: http://rastergrid.com/blog/2010/01/uniform-buffers-vs-texture-buffers/

Uniform Buffers
- Maximum size: 64KByte (or more)
- Memory storage: usually local memory
- Use case examples: geometry instancing, skeletal animation, etc.

Uniform buffers were introduced in OpenGL 3.1, but are also available on driver implementations that don't conform to version 3.1 of the standard via the GL_ARB_uniform_buffer_object extension. As the specification says, uniform buffers provide a way to group GLSL uniforms into so-called "uniform blocks" and source their data from buffer objects, giving the application more streamlined access to uniform data.


Texture Buffers
- Maximum size: 128MByte (or more)
- Memory storage: global texture memory
- Use case examples: skinned instancing, geometry tessellation, etc.

Texture buffers also became core OpenGL in version 3.1 of the specification, but are also available via the GL_ARB_texture_buffer_object extension (or via the GL_EXT_texture_buffer_object extension on earlier implementations). Buffer textures are one-dimensional arrays of texels whose storage comes from an attached buffer object. They provide by far the largest capacity for raw data access, much higher than equivalent 1D textures. However, they don't provide texture filtering and the other facilities that are usually available for other texture types; they represent formatted 1D data arrays rather than texture images. From some perspective, however, they are still textures that reside in global memory, so the access method is entirely different from that of uniform buffers.


223
3D-Tech News Around The Web / MSI HD 5870 Lightning disassembled
« on: January 21, 2010, 02:58:48 PM »
Link: http://www.techpowerup.com/113418/MSI_HD_5870_Lightning_disassembled.html

The card will be designed with overclockers in mind and comes with two 8-pin PCI-Express power connectors for maximum power delivery. It also has easily accessible measuring points for the GPU voltages - a voltmodder's dream.

224
Without a doubt, one of the most highly anticipated releases for 2010 will be the NVIDIA GF100 Fermi graphics card. For nearly a year NVIDIA has told the media and their fans that GF100 is coming and that it will be the best performing graphics card the world has ever seen. Read on to see what we think after spending some time with GF100!

Link: http://www.legitreviews.com/article/1193/1/

225
Link: http://bulletphysics.org/Bullet/phpBB3/viewtopic.php?f=4&t=4495

Quote
For 2010 we have some exciting plans towards Bullet 3.x with support for OpenCL acceleration, binary serialization in a new .bullet file format with improved authoring tools and improved compatibility with Sony PlayStation 3 and other platforms. We plan a new Bullet 2.76 release for January 2010.

226
3D-Tech News Around The Web / NVIDIA Optimus Technology
« on: January 07, 2010, 04:42:01 PM »
Link: http://blogs.nvidia.com/ntersect/2010/01/new-nvidia-optimus-primer.html

Quote
NVIDIA Optimus technology works on notebook platforms with NVIDIA GPUs. It is unique to NVIDIA. It is seamless and transparent to the user. Its purpose is to optimize the mobile experience by letting the user get the performance of discrete graphics from a notebook while still delivering great battery life. Look for more details next month.

227
3D-Tech News Around The Web / Qt Graphics and Performance - OpenGL
« on: January 07, 2010, 04:35:50 PM »
Link: http://labs.trolltech.com/blogs/2010/01/06/qt-graphics-and-performance-opengl/

Quote
Here’s the next instalment of the graphics performance blog series. We’ll begin by looking at some background about how OpenGL and QPainter work. We’ll then dive into how the two are married together in OpenGL 2 Paint Engine and finish off with some advice about how to get the best out of the engine. Enjoy!

228
Link: http://www.techpowerup.com/reviews/Intel/Core_i5_661_GPU/1.html

Quote
Intel has just announced their latest lineup of Nehalem technology based processors. Clarkdale, as the new processor is called by engineers, is the first commercially available 32 nm based processor. It is also the first processor that features a graphics processing core located inside the processor's package - something that was first heard about when AMD talked about their Fusion project. It should be noted, however, that Intel did not put both CPU and GPU onto the same die of silicon.

Instead they took the 32 nm processor core and the 45 nm GPU core, and crammed them into a single processor package, as pictured above. This approach is called Multi-Chip module (MCM).

Intel's graphics core is based on 45 nm technology and features 177 million transistors on a die size of 114 mm². You could imagine it as an improved G45 chipset (including the memory controller) with some magic sauce to make everything work in the CPU package. The GPU is clocked at 533, 733 or 900 MHz depending on the processor model. Our tested i5 661 features the highest GPU clock speed available, without overclocking, of 900 MHz. Intel also increased the number of execution units (shaders) from 10 to 12, and the HD video decoder is now able to decode two streams at the same time for picture-in-picture like you find on some Blu-Rays to show the director's commentary. HD audio via HDMI is supported as well, which will make setting up a media PC easier, putting this solution on the same level as the latest offerings from AMD and NVIDIA. Whereas the mobile GPU version features advanced power saving techniques like dynamic clock scaling (think: EIST for GPUs) and overheat downclocking, these features are not available on the desktop part.

229
3D-Tech News Around The Web / Windows 7 God Mode
« on: January 05, 2010, 08:22:57 PM »
Link: http://hardocp.com/news/2010/01/04/windows_7_tip_day_god_mode

Quote
Want a good way to access all the control panel options in Windows 7 in one easy location? Simply make a folder on your desktop and rename it GodMode.{ED7BA470-8E54-465E-825C-99712043E01C} and you are all set.
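The "God Mode" folder is just an ordinary directory whose name ends in the CLSID shown above; it can also be created from a script. A minimal sketch in Python (creating it in the current directory rather than on the desktop; on non-Windows systems it is simply a normally named folder):

```python
import os

# The CLSID suffix is what makes Windows Explorer show the folder
# as the all-tasks control panel view; the folder itself is ordinary.
GODMODE = "GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}"
os.makedirs(GODMODE, exist_ok=True)
print(os.path.isdir(GODMODE))  # → True
```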

231
Link: http://code.google.com/p/gpuocelot/

Ocelot is a dynamic compilation framework for heterogeneous systems; it provides various backend targets for CUDA programs. Ocelot currently allows CUDA programs to be executed on NVIDIA GPUs and x86 CPUs at full speed without recompilation.

232
3D-Tech News Around The Web / NVIDIA delays Fermi to March 2010
« on: December 28, 2009, 05:34:10 PM »
Link: http://www.digitimes.com/news/a20091228PD207.html

Quote
Nvidia originally planned to launch Fermi in November 2009, but the launch was delayed until CES in January 2010 due to defects, according to market rumors. However, the company recently notified graphics card makers that the official launch will now be in March 2010, the sources noted.

233
3D-Tech News Around The Web / Nvidia Fermi graphics architecture explained
« on: December 21, 2009, 04:48:16 PM »
Link: http://www.techradar.com/news/computing-components/graphics-cards/nvidia-s-fermi-graphics-architecture-explained-657489

Quote
There is an exception to this – high-power graphics cards; we love these. They make games sexy and that makes us sexy. At the heart of these is the GPU, and when Nvidia announces it has a new and wonderful one, it is time to take notice. It's codenamed Fermi, after the renowned nuclear physicist Enrico Fermi.

...
...

The silicon has been designed from the ground up to match the latest concepts in parallel computing. The basic features list reads thus: 512 CUDA Cores, Parallel DataCache, Nvidia GigaThread and ECC support.

Clear? There are three billion transistors for starters, compared to 1.4 billion in a GT200 and a mere 681 million on a G80. There's shared, configurable L1 and L2 cache and support for up to 6GB of GDDR5 memory.

The block diagram of Fermi looks like the floor plan of a dystopian holiday camp. Sixteen rectangles, each with 32 smaller ones inside, all nice and regimented in neat rows. That's your 16 SM (Streaming Multiprocessor) blocks with 512 little execution units inside, called CUDA cores.

Each SM has local memory, register files, load/store units and a thread scheduler to run its 32 associated cores. Each of these can run a floating point or an integer instruction every clock. It can also run double precision floating point operations at half that rate, which will please the maths department.

234
Link: http://www.brightsideofnews.com/news/2009/12/21/crytek-developers-moved-away-from-pc-due-to-piracy.aspx

Quote
In all honesty, we can completely understand the rampant issue of piracy, and there is a worrisome number that I won't stop repeating. Crytek is a good example of a company that wanted to stay PC-only but simply could not build a business model due to multi-million dollar damages caused by prospective buyers opting for a pirated copy of the game. When Epic Games released their Unreal Tournament III game, they recorded 40 million different installations trying to access online servers for multiplayer action.

In short, if those 40 million people had purchased the game instead of downloading it from The Pirate Bay and similar sites, Epic Games would have earned approximately two billion dollars [40M times the $49.95 recommended price, minus some at $39.95 in the US and plus some at Euro 49.99/$62.44 at the time in EU lands]. Now, imagine what Epic would be able to do with an influx in excess of a billion dollars. Would Unreal Engine 4 need 4-6 years to develop on a limited budget, or could Tim hire as many people as he needs and deliver an engine perfectly optimized for the whole spectrum of PC hardware?
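The article's back-of-the-envelope revenue figure can be checked directly. A quick sketch using the numbers quoted above (the 40M installation count and $49.95 US price are the article's; regional price differences are ignored):

```python
installs = 40_000_000          # UT3 installations seen online, per the article
us_price = 49.95               # recommended US price in dollars
revenue = installs * us_price  # ignoring regional pricing differences
print(f"${revenue / 1e9:.2f} billion")  # → $2.00 billion
```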

235
3D-Tech News Around The Web / OpenCL path tracer / ray tracing demo
« on: December 21, 2009, 12:50:12 PM »
Link: http://davibu.interfree.it/opencl/smallptgpu/smallptGPU.html

SmallptGPU is a small and simple demo written in OpenCL in order to test the performance of this new standard. It is based on Kevin Beason's Smallpt, available at http://www.kevinbeason.com/smallpt/
SmallptGPU has been written using the ATI OpenCL SDK beta4 on Linux, but it should work on any platform/implementation (e.g. NVIDIA). Some discussion about this little toy can be found at Luxrender's forum.

via: http://fireuser.com/blog/opencl_path_tracer_ray_tracing_demo_using_the_amd_opencl_beta_sdk/

236
Link: http://nopper.tv/opengl_3_2.html

Quote
OpenGL 3.2 and GLSL 1.5 are available, but there is a lack of both simple and complex example programs. On this webpage, I want to fill this gap by providing example programs using OpenGL 3.2 and GLSL 1.5 with GLEW. Please note that none of the example programs use deprecated OpenGL functions.

237
3D-Tech News Around The Web / Texture Tools 2.07 released
« on: December 17, 2009, 12:47:36 PM »
Link: http://news.developer.nvidia.com/2009/12/texture-tools-207-released.html

Texture Tools homepage and downloads: http://code.google.com/p/nvidia-texture-tools/

Quote
The NVIDIA Texture Tools is a collection of image processing and texture manipulation tools, designed to be integrated in game tools and asset conditioning pipelines.

The primary features of the library are mipmap and normal map generation, format conversion and DXT compression.

DXT compression is based on Simon Brown's squish library. The library also contains an alternative GPU-accelerated compressor that uses CUDA and is one order of magnitude faster.

238
3D-Tech News Around The Web / Qt Graphics and Performance - An Overview
« on: December 16, 2009, 04:26:42 PM »
Link: http://labs.trolltech.com/blogs/2009/12/16/qt-graphics-and-performance-an-overview/

Quote
We have two OpenGL based graphics systems in Qt. One for OpenGL 1.x, which is primarily implemented using the fixed functionality pipeline in combination with a few ARB fragment programs. It was written for desktops back in the Qt 4.0 days (2004-2005) and has grown quite a bit since. You can enable it by writing -graphicssystem opengl1 on the command line. It is currently in life-support mode, which means that we will fix critical things like crashes, but otherwise leave it be. It is not a focus for performance from our side, though it does perform quite nicely for many scenarios.

Our primary focus is the OpenGL/ES 2.0 graphics system, which is written to run on modern graphics hardware. It does not use a fixed functionality pipeline, only vertex shaders and fragment shaders. Since Qt 4.6, this is the default paint engine used for QGLWidget. Only when the required feature set is not available will we fall back to using the 1.x engine instead. When we refer to our OpenGL paint engine, it's the 2.0 engine we're talking about.

239
Link: http://users.softlab.ece.ntua.gr/~ttsiod/mandelSSE.html

Last weekend, I got to play with an NVIDIA GT240 (around $100). Having read a lot of blogs about GPU programming, I downloaded the CUDA SDK and started reading some samples.

Quote
In less than one hour, I went from my rather complex SSE inline assembly to a simple, clear Mandelbrot implementation... that ran... 15 times faster!

Let me say this again: 1500% faster. Jaw dropping. Or put a different way: I went from 147fps at 320x240... to 210fps... at 1024x768!

I only have one comment for my fellow developers: it is clear that I was lucky - the algorithm in question was a perfect fit for a CUDA implementation. You won't always get this kind of speedup (while at the same time doing it with clearer and significantly less code).

But what I am saying, is that you must start looking into these things: CUDA, OpenCL, etc.
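The "15 times faster" claim follows from comparing pixel throughput rather than raw frame rate, since the resolution changed too. A quick check using the numbers quoted above:

```python
# Pixels rendered per second before (SSE, 320x240 @ 147 fps)
# and after (CUDA, 1024x768 @ 210 fps).
sse_throughput = 147 * 320 * 240    # ≈ 11.3 Mpixels/s
cuda_throughput = 210 * 1024 * 768  # ≈ 165.2 Mpixels/s
speedup = cuda_throughput / sse_throughput
print(f"{speedup:.1f}x")  # → 14.6x
```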

Code:
__global__ void CoreLoop( int *p,
  float xld, float yld, /* Left-Down coordinates */
  float xru, float yru, /* Right-Up coordinates */
  int MAXX, int MAXY)   /* Window size */
{
    /* ITERA (iteration cap) and lookup[] (palette table) are
       defined elsewhere in the original source. */
    float re, im, rez, imz;
    float t1, t2, o1, o2;
    int k;
    unsigned result = 0;
    unsigned idx = blockIdx.x*blockDim.x + threadIdx.x;
    int y = idx / MAXX;
    int x = idx % MAXX;

    re = (float) xld + (xru-xld)*x/MAXX;
    im = (float) yld + (yru-yld)*y/MAXY;

    rez = 0.0f;
    imz = 0.0f;
    k = 0;
    while (k < ITERA)
    {
        o1 = rez * rez;
        o2 = imz * imz;
        t2 = 2 * rez * imz;
        t1 = o1 - o2;
        rez = t1 + re;
        imz = t2 + im;
        if (o1 + o2 > 4)
        {
            result = k;
            break;
        }
        k++;
    }
    p[y*MAXX + x] = lookup[result]; // Palettized lookup
}
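For reference, the kernel's per-pixel escape-time loop can be mirrored on the CPU. A sketch in Python (the iteration cap of 256 is an assumption, since the original defines ITERA elsewhere):

```python
def escape_iteration(re, im, itera=256):
    """CPU mirror of the CUDA kernel's inner loop for one pixel.

    itera stands in for the original's ITERA constant (assumed 256 here).
    Returns the iteration at which |z|^2 exceeded 4, or 0 if it never did,
    matching the kernel's 'result' variable.
    """
    rez = imz = 0.0
    result = 0
    for k in range(itera):
        o1 = rez * rez
        o2 = imz * imz
        t2 = 2 * rez * imz
        rez = o1 - o2 + re   # z = z^2 + c, real part
        imz = t2 + im        # z = z^2 + c, imaginary part
        if o1 + o2 > 4:      # tests the previous iterate, as in the kernel
            result = k
            break
    return result

print(escape_iteration(0.0, 0.0))  # → 0 (interior point, never escapes)
print(escape_iteration(2.0, 2.0))  # → 1 (escapes almost immediately)
```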

240
3D-Tech News Around The Web / How the 3D engine is changing the world
« on: December 16, 2009, 02:00:15 PM »
Link: http://www.guardian.co.uk/technology/2009/dec/11/3d-engine-videogame-technology

Quote
The Unreal Engine, created by Epic Games, contains a breathtaking 2.5m lines of code – as Tim Sweeney, technical director, puts it: "That's roughly comparable to the complexity of a whole operating system a decade ago."

"Game development is at the cutting edge in many disciplines," says Sweeney. "The physics in modern games includes rigid body dynamics and fluid simulation algorithms that are more advanced than the approaches described in research papers."
