Radeon HD 4670/4650 Specifications

The new Radeon HD 4600 series is expected to launch on September 10, 2008. Here are the specifications:

Radeon HD 4670:
– GPU: RV730 / 55nm
– 320 stream processors
– 16 texture units
– core clock: 750MHz
– memory clock: 1000MHz
– memory: 512MB GDDR3 128-bit
– DirectX 10.1 and OpenGL 2.1

Radeon HD 4650:
– GPU: RV730 / 55nm
– 320 stream processors
– 16 texture units
– core clock: 600MHz
– memory clock: 667MHz
– memory: 512MB GDDR2/GDDR3 128-bit
– DirectX 10.1 and OpenGL 2.1


More news about Radeon HD 4600 Series: Radeon HD 4670 @ Geeks3D

PhysX Performance: GPU vs PPU vs CPU

FiringSquad has published an article that compares PhysX performance on CPU versus PPU versus GPU. The software used for this test includes Unreal Tournament 3, Nurien and Warmonger.

Conclusion: for graphics cards more than two years old, the Ageia PhysX PPU is still useful, but it can’t match the performance of today’s GeForce cards.

Read the complete article here: PhysX Performance Update: GPU vs. PPU vs. CPU

In PhysX FluidMark news, I’ve posted a graph that shows a CPU/PPU/GPU comparison. The results show a larger difference between PPU and GPU, but this is due to the kind of test: fluid simulation.

Sapphire Radeon HD 4870 X2 Review

Technology doesn’t play favorites, and when it reaches an apex there is an indifference towards who develops it. ATI has learned this lesson the hard way, having watched from the sidelines for so long that most have forgotten that AMD could be a leader in technology. The Radeon HD 4870 X2 is the latest evolution in graphics cards, and ATI has delivered something bigger than we’ve ever seen before. Benchmark Reviews is fortunate to test the Sapphire Radeon HD 4870 X2 Dual-RV770 GPU video card 100251SR against a collection of today’s hottest video cards available.

Read the complete review here: Sapphire Radeon HD 4870 X2 Video Card 100251SR @ Benchmark Reviews

Related links:
Sapphire Radeon HD4870x2 (2Gb GDDR5) @ CPU3D
ATI RADEON HD 4870 X2 2x1024MB

CUDA 2.0 Available

CUDA 2.0 is available here: CUDA Zone. To take advantage of CUDA, you need ForceWare 177.84 or newer (such as 177.89). The samples work with GeForce 8 and 9 series cards.

NVIDIA CUDA technology is the world’s only C language environment that enables programmers and developers to write software to solve complex computational problems in a fraction of the time by tapping into the many-core parallel processing power of GPUs.
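To give an idea of what that C language environment looks like, here is a minimal sketch of a CUDA C program: a vector-add kernel launched on the GPU. It assumes the CUDA 2.0 toolkit and a CUDA-capable GeForce 8/9 card; all names (kernel, variables) are illustrative, not from any official sample.

```c
#include <stdio.h>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements.
__global__ void add_vectors(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 256;
    size_t bytes = n * sizeof(float);
    float ha[256], hb[256], hc[256];
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    // Allocate device memory and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc((void**)&da, bytes);
    cudaMalloc((void**)&db, bytes);
    cudaMalloc((void**)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch with enough 64-thread blocks to cover n elements.
    add_vectors<<<(n + 63) / 64, 64>>>(da, db, dc, n);

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[10] = %f\n", hc[10]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```

Compiled with nvcc, the kernel launch syntax (`<<<blocks, threads>>>`) is the main departure from standard C; everything else is ordinary host code.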

NVIDIA ForceWare 177.89: OpenGL Extensions and OpenGL 3.0 Code Sample

Here is the list of OpenGL extensions supported by the ForceWare 177.89 Windows XP 32-bit drivers for a GeForce GTX 280.

The ForceWare 177.89 drivers are the first to offer OpenGL 3.0 support. For the moment, this support is not official: it must be enabled in software with NVIDIA’s NVemulate utility.

Enabling OpenGL 3.0 gives access to a new extension, WGL_ARB_create_context, which makes it possible to create an OpenGL 3.0 rendering context. If you wish to explore the new world of OpenGL 3.0, here is the necessary code:

// wglCreateContextAttribsARB prototype.
typedef HGLRC (WINAPI *PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC, HGLRC, const int*);

// Get a pointer on the create context function.
PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
    (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");

// Create an OpenGL 3.0 context (dc is the window's device context).
HGLRC gl3Ctx = wglCreateContextAttribsARB(dc, 0, NULL);

// Do something with this OpenGL 3.0 rendering context.

// Delete the context.
wglMakeCurrent(NULL, NULL);
wglDeleteContext(gl3Ctx);
gl3Ctx = NULL;
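Passing NULL for the attribute list asks for the default context. The WGL_ARB_create_context extension also lets you request an explicit version through the attribute list; here is a sketch, reusing the same dc and function pointer as above (the two token values come from the extension spec):

```c
// Request an explicit OpenGL 3.0 context through the attribute list.
// Tokens defined by the WGL_ARB_create_context extension:
#define WGL_CONTEXT_MAJOR_VERSION_ARB 0x2091
#define WGL_CONTEXT_MINOR_VERSION_ARB 0x2092

const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 0,
    0  // end of list
};
HGLRC gl3Ctx = wglCreateContextAttribsARB(dc, 0, attribs);
```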

That said, here is the list of the new extensions exposed by an OpenGL 2.1 rendering context with OpenGL 3.0 support enabled:

Graphics card used: EVGA GeForce GTX 280 / 1GB

– Drivers Version: ForceWare 177.89
– OpenGL Version: 2.1.2
– GLSL (OpenGL Shading Language) Version: 1.20 NVIDIA via Cg compiler
– OpenGL Renderer: GeForce GTX 280/PCI/SSE2
– Drivers Renderer: NVIDIA GeForce GTX 280
– ARB Texture Units: 16
– Vertex Shader Texture Units: 32
– Pixel Shader Texture Units: 32
– Geometry Shader Texture Units: 32
– Max Texture Size: 8192×8192
– Max Anisotropic Filtering Value: X16.0
– Max Point Sprite Size: 63.4
– Max Dynamic Lights: 8
– Max Viewport Size: 8192×8192
– Max Vertex Uniform Components: 4096
– Max Fragment Uniform Components: 2048
– Max Varying Float: 60
– Max Vertex Bindable Uniforms: 12
– Max Fragment Bindable Uniforms: 12
– Max Geometry Bindable Uniforms: 12
– MSAA modes: 2X, 4X, 8X, 16X, 32X

OpenGL Extensions: 168 extensions

The extensions exposed by older ForceWare drivers can be found HERE.
You can use GPU Caps Viewer to retrieve the list of extensions supported by your graphics card.


Real Time Realistic Rendering of Nature Scenes

A thesis about real-time realistic rendering of nature scenes with dynamic lighting has been published. This thesis includes all details about real-time grass rendering and about real-time tree rendering with indirect lighting.

You can grab the thesis here: PhD thesis

It’s a pity there is no real-time 3D demo. Screenshots are cool but a real demo is better!

NVIDIA GeForce To Quadro Soft-Mod Guide v3.1

TechARP has updated the GeForce To Quadro Soft-Mod guide. Quadro and GeForce graphics cards are virtually identical in hardware. This guide shows you how to do a software modification on the GeForce to transform it into a Quadro.

Jump to the guide here: NVIDIA GeForce To Quadro Soft-Mod Guide Rev. 3.1.


  • Added a new page with performance results from the soft-modding of an NVIDIA GeForce 7950 GT to the Quadro FX 3500.

Microsoft Windows 7 Development Blog

Microsoft has launched a new blog dedicated to the engineering of Microsoft Windows 7.

The blog is available here: Engineering Windows 7.

The audience of enthusiasts, bloggers, and those that are the most passionate about Windows represent the folks we are dedicating this blog to. With this blog we’re opening up a two-way discussion about how we are making Windows 7. Windows has all the challenges of every large scale software project—picking features, designing them, developing them, and delivering them with high quality. Windows has an added challenge of doing so for an extraordinarily diverse set of customers. As a team and as individuals on the team we continue to be humbled by this responsibility.

We strongly believe that success for Windows 7 includes an open and honest, and two-way, discussion about how we balance all of these interests and deliver software on the scale of Windows. We promise and will deliver such a dialog with this blog.


How To Use Multi-GPU with NVIDIA PhysX

In this article, Guru3D explains how to use multi-GPU setups with NVIDIA PhysX under Windows XP and Vista. There are three ways to use a GeForce GPU with PhysX:

  • Standard – one GPU renders both graphics + PhysX (not ideal as you’ll need a lot of GPU horsepower).
  • SLI mode – have two GPUs render both graphics + PhysX (SLI motherboard required)
  • Multi-GPU mode – GPU1 renders graphics and GPU2 renders PhysX (SLI motherboard not required)

Now the trick to use Multi-GPU mode with Vista:

There is one thing you need to be aware of in Multi-GPU mode. It’s actually a Vista limitation: a second monitor must be attached to enable PhysX on the second GeForce GPU, and you must extend your Windows Vista desktop onto that monitor.

To bypass that issue: most monitors have both a standard VGA and a DVI connector, so just use both. This limitation is related to the Windows Vista display driver model (WDDM) and does not exist in Windows XP. NVIDIA’s upcoming drivers will offer a workaround to improve the experience for Windows Vista users.

With a single card or two cards in SLI mode you will not have this problem.


BFG GeForce 9600 GT OCX and 8800 GT OCX Review

Guru3D has published a review of BFG’s GeForce 8800 GT and 9600 GT in their OCX versions. OCX stands for OverClocking eXtreme. Both cards use the ZeroTHERM cooling solution.

BFG GeForce 9600 GT OCX:
– GPU: G94
– shader processors: 64
– core clock: 725MHz (ref=650MHz)
– shader clock: 1850MHz (ref=1625MHz)
– memory clock: 972MHz (ref=900MHz)
– memory: 512MB GDDR3 / 256-bit
– DirectX 10 / OpenGL 2.1

BFG GeForce 8800 GT OCX:
– GPU: G92
– shader processors: 112
– core clock: 700MHz (ref=600MHz)
– shader clock: 1728MHz (ref=1512MHz)
– memory clock: 1000MHz (ref=900MHz)
– memory: 512MB / 256-bit
– DirectX 10 / OpenGL 2.1

Read the complete review here: BFG GeForce 9600 GT OCX and 8800 GT OCX review.

The verdict:
The OCX cards tested today both qualify among the best cards we have ever tested. Though it would have been nicer to see a black PCB or something special, everything else is just top notch: excellent thermal design, and low temperatures while retaining a low noise level.

Foxconn GeForce 9500 GT Review

Not everyone can afford the bleeding edge of graphics hardware, and not everyone needs it. NVIDIA has already made headlines with their high-end GeForce GTX 280 video card, which offers high performance at a premium price. Most enthusiasts have seen so much coverage for the high-level graphic products that it might feel like nothing exists for the lower consumer segment. Benchmark Reviews hasn’t forgotten about the entry-level enthusiast, which is why we test the Foxconn GeForce 9500 GT 256MB PCI Express video card 9500GT-256FR3 against the bigger names in this review.

Read the complete review here: Foxconn GeForce 9500 GT G96 Video Card Review
