Now that Core i7 reviews have hit the streets, it’s time for Geeks3D to offer a quick overview of this new technology.
» Read more
Need a very fast image viewer? Just try FastPictureViewer. FastPictureViewer helps you work faster by taking advantage, when available, of the power of multicore processors and the speed of DirectX (Direct3D) graphics accelerators, all working in concert to speed up the viewing experience to unprecedented levels.
FiringSquad has published an article that compares PhysX performance on the CPU, the PPU and the GPU. The software used for this test consists of Unreal Tournament 3, Nurien and Warmonger.
Conclusion: the more than two-year-old Ageia PhysX PPU is still useful, but it can't match the performance of today's GeForce cards.
Read the complete article here: PhysX Performance Update: GPU vs. PPU vs. CPU
In PhysX FluidMark news, I have added a graph showing a CPU/PPU/GPU comparison. The results show a larger gap between PPU and GPU, but that is due to the kind of test: fluid simulation.
FiringSquad takes a look at PhysX performance on GeForce 8/9/GTX200 based graphics cards by testing several games that support PhysX (Unreal Tournament, Warmonger, NKZ, Nurien). The first conclusion is that PhysX really is accelerated on GeForce, and the difference between CPU PhysX and GPU PhysX is notable:
For the tests, FiringSquad used a modified version of NVIDIA Forceware 177.79, but no details are given about the modifications to that driver.
They also used the NVIDIA PhysX driver 8.07.18.
SLI PhysX performance is also analyzed, but SLI brings so much power that the tests become CPU-bound. Anyway, SLI PhysX rocks!
Read the complete article here: PhysX Performance with GeForce.
More news about PhysX: PhysX News at Geeks3D.
Thermaltake V1 CPU Cooler is a Masterpiece
The aim of this program is to explore how modern 32-bit CPUs can speed up the traditional Mandelbrot algorithm (without any loss of precision or inexact calculation), including full support for multiple cores. The Mandelbrot algorithm is implemented with double-precision floating-point numbers. You will find 3 different versions in the archive file:
- KMB_V0.53H-32b-MT_FPU…..: only standard FPU code is used for calculation
- KMB_V0.53H-32b-MT_SSE2….: SSE2 tuned version almost best for all CPU’s
- KMB_V0.53H-32b-MT_SSE2_PM.: SSE2 tuned version especially for Intel Pentium M and Intel Core1 CPUs (it’s in fact KMB_V0.53G-32b-MT_SSE2 as Version H was slower)
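For reference, the escape-time loop that all three versions optimize can be sketched in a few lines. This is an illustrative sketch in plain Python, not code from the benchmark; the function name is my own. The expanded real arithmetic in the inner loop is exactly the kind of work the SSE2 builds vectorize:

```python
# Minimal escape-time Mandelbrot kernel in double precision.
# Illustrative sketch only; not taken from the KMB benchmark itself.

def mandel_iterations(cr: float, ci: float, max_iter: int = 256) -> int:
    """Return the iteration count before z = z^2 + c escapes |z| > 2."""
    zr = zi = 0.0
    for n in range(max_iter):
        # z = z*z + c, expanded into real arithmetic (what SSE2 vectorizes)
        zr, zi = zr * zr - zi * zi + cr, 2.0 * zr * zi + ci
        if zr * zr + zi * zi > 4.0:  # |z|^2 > 4  =>  escaped
            return n
    return max_iter

# A point inside the set never escapes; a point far outside escapes at once.
print(mandel_iterations(0.0, 0.0))   # inside: hits max_iter (256)
print(mandel_iterations(2.0, 2.0))   # outside: escapes immediately (0)
```

Since every pixel's iteration count is independent of its neighbors, the image splits trivially across cores, which is why the multi-threaded builds scale well.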
Download Kümmel Mandelbrot Benchmark HERE.
Here are my scores on an old Core 2 Duo 6600 at stock clocks:
Do you know what CUDA and OpenCL stand for and how they could make your computer 50 times faster? If so, you can safely jump to the “Ending the mess” section below. Otherwise read on for a gentle introduction.
A computer has two important processing units: the CPU and the GPU. Think of them as the two brothers in Rain Man. The GPU is the ultimate autistic savant. He's really, really good at counting stuff and doing a lot of complex math at the same time.
The CPU is your regular guy. He can do all kinds of stuff that the savant can't. He gets along well with everybody, as long as they speak English. If he learns to take advantage of the savant, the two of them can do amazing things, like counting cards at blackjack.
In other words, the GPU is a natural at operations that involve repetitive calculations, like those necessary for drawing 3D graphics and doing basic image manipulation.
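The "repetitive calculations" point can be made concrete with a toy example. The sketch below (plain Python, my own illustration) brightens an image row by applying the identical small operation to every pixel; since no pixel depends on another, a GPU could run one thread per pixel and do them all at once:

```python
# Why GPUs excel at image work: the same small, independent operation
# is applied to every pixel, so thousands can run in parallel.
# Pure-Python illustration; a GPU would run one "thread" per pixel.

def brighten(pixels, amount):
    """Apply the identical per-pixel operation to every element,
    clamping to the 8-bit maximum of 255."""
    return [min(255, p + amount) for p in pixels]

row = [10, 100, 200, 250]
print(brighten(row, 20))  # -> [30, 120, 220, 255]
```

This is exactly the data-parallel pattern that CUDA and OpenCL expose: you write the per-element operation once, and the hardware maps it across the whole data set.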
Read the rest of this article HERE.
Here is a small benchmark that tries to compare several optimized Intel OpenCV library functions with their GPU analogs, written using OpenGL and the GLSL shading language.
More information HERE.
Because I can’t resist, here is my score (Core 2 Duo 6600 at default clocks, Radeon HD 3870, Catalyst 8.5, WinXP 32-bit) with Resolution multiplier set to 4:
             CPU   |   GPU
step1:       75.3  |  21.5
step2:       35.8  |  22.5
step3:       05.7  |  00.7
Total Time: 116.9  | 345.3
AMD will add the Havok Physics engine to both its multi-core CPUs and GPUs, but Havok's managing director noted that the focus is on CPUs, given feedback from game developers who like the idea of offloading physics computation to CPU cores.
Read whole article HERE
AMD is hoping to accelerate Havok Physics on both its multi-core CPUs and GPUs, and claims that it's striving to deliver the best of both worlds. However, the main focus at the moment appears to be AMD's CPUs. AMD and Havok say that they're planning to optimise 'the full range of Havok technologies on AMD x86 superscalar processors', and AMD claims that Havok Physics scales extremely well across the entire family of AMD processors.
Havok's managing director, David O'Meara, explained the priority given to CPUs, saying that 'the feedback that we consistently receive from leading game developers is that core game play simulation should be performed on CPU cores'. However, he added that GPU physics acceleration could become a feature in the future, saying that 'the capabilities of massively parallel products offer technical possibilities for computing certain types of simulation'.
– AMD’s physics secret revealed: It’s Havok @ TG Daily
One day after its official release on the Net, Geeks3D offers you a round-up of NVIDIA's new product, the Tegra chip.
NVIDIA Tegra is a system-on-a-chip (SoC) or computer-on-a-chip (CoC). Tegra consists of an ARM11 CPU core (800MHz), a GeForce GPU (rebranded GeForce ULP, for Ultra Low Power) supporting OpenGL ES 2.0, an image processor (digital camera support), an HD video processor (PureVideo for handhelds), memory (NAND Flash, Mobile DDR), a northbridge (memory controller, display output, HDMI+HDCP, security engine) and a southbridge (USB OTG, UART, external memory card, SPI, SDIO, etc.). In short, NVIDIA Tegra includes the whole shebang: the CPU, the graphics and what you traditionally find on a motherboard are all squeezed onto a single silicon die.
Links in English:
- Official Press Release @ NVIDIA.com
- Nvidia Tegra All-in-One Mobile Processors Aim to Nuke Intel’s Atom, Promise 30 Hours HD Playback @ gizmodo.com
- The NVIDIA Tegra – Is This The End For The Intel Atom? @ techarp.com
- NVIDIA Announces Tegra GPUs for Ultra Mobile Devices @ dailytech.com
- NVIDIA strikes back: targets Intel Atom with Tegra SoC @ hexus.net
- NVIDIA Launches TEGRA System-On-a-Chip Designs @ hothardware.com
- NVIDIA launches Tegra, hopes to change the smartphone / MID game @ engadget.com
- NVIDIA Tegra: The Mobile Charge @ pcper.com
- Nvidia announces Tegra 650, next gen mobile GPU @ mobilegd.com
- NVIDIA Tegra takes on Atom: super-performance for smartphones and MIDs @ slashgear.com
- NVIDIA Tegra: Tiny Computer Packs Massive Punch @ techpowerup.com
- Nvidia Launched Tegra In Taipei @ vr-zone.com
Links in French:
SuperPi, one of the most popular CPU benchmarks, is being ported to the GPU with CUDA by a member of XtremeSystems.
This article is available in French only. It talks about the struggle between CPUs and GPUs. A translated excerpt:
The past year has seen the first signs of a technological and commercial war that will grow in importance over the coming years: the war between GPUs and CPUs. Mere renderers of textured triangles in 1995, GPU-equipped graphics cards have become a kind of enormous DSP, soon made up of a billion transistors and delivering performance approaching one teraFLOPS. Until now the division of labor seemed clear: the CPU was devoted to management and decision-making tasks, while the GPU took care of raw computation, graphics in particular. But the few players in this market have recently begun making moves that prepare them to cross their traditional boundaries.
Read the full article HERE.
David Kirk, NVIDIA's Chief Scientist, in an 8-page interview by the guys at bit-tech.net.
Read the full interview HERE.
Lazy people can read interview snippets HERE.