NVIDIA Ampere GPUs

Started by JeGX, January 22, 2020, 09:45:29 AM


JeGX

New 8-pin power connector on the upcoming Ampere-based Quadro RTX. This new connector can deliver up to 235W (a regular PCIe 8-pin power connector is limited to 150W).

Links:
- source
- via
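The higher rating makes sense once you look at the current per +12V pin. A quick sketch, using the usual pin counts (a PCIe 8-pin carries +12V on 3 pins, an EPS-12V 8-pin on 4 pins; these counts are from the connector standards, not from the linked article):

```python
# Rough per-pin current estimate for the two 8-pin connector types.
# Pin counts are the usual figures: PCIe 8-pin = 3 x +12V pins,
# EPS-12V 8-pin = 4 x +12V pins.

VOLTAGE = 12.0  # volts

def amps_per_pin(watts: float, power_pins: int) -> float:
    """Total current divided across the +12V pins."""
    return watts / VOLTAGE / power_pins

pcie_8pin = amps_per_pin(150, 3)   # ~4.17 A per pin
eps_8pin = amps_per_pin(235, 4)    # ~4.90 A per pin

print(f"PCIe 8-pin: {pcie_8pin:.2f} A/pin")
print(f"EPS-12V 8-pin: {eps_8pin:.2f} A/pin")
```

So the extra +12V pin, plus a slightly higher per-pin current budget, accounts for the jump from 150W to 235W.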


NVIDIA EPS-12V 8-pin power connector


JeGX

According to Igor's Lab, the RTX 3080 crashes may be caused by the capacitors (under the GPU) that filter high frequencies on the voltage rails.

There are three main categories of capacitors:
- worst and cheapest: POSCAPs (Conductive Polymer Tantalum Solid Capacitors)
- better: SP-CAPs (Conductive Polymer Aluminum Electrolytic Capacitors)
- best but more expensive: MLCCs (Multilayer Ceramic Chip Capacitors)

Crashes appear to be related to the choice of capacitors; the ASUS RTX 3080 seems to have the best design in this respect.
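Why MLCCs filter high frequencies better comes down to parasitics: modeling each capacitor as a series R-L-C, impedance at high frequency is dominated by the equivalent series inductance (ESL), which is much lower for MLCCs. A minimal sketch with made-up ballpark component values (not from any datasheet):

```python
import math

# Each capacitor is modeled as a series R-L-C: capacitance,
# equivalent series resistance (ESR), equivalent series inductance (ESL).

def impedance(f_hz: float, c_farads: float, esr_ohms: float,
              esl_henries: float) -> float:
    """Magnitude of the series R-L-C impedance at frequency f."""
    x = 2 * math.pi * f_hz * esl_henries - 1 / (2 * math.pi * f_hz * c_farads)
    return math.hypot(esr_ohms, x)

# Hypothetical ballpark values for illustration only:
sp_cap = dict(c_farads=470e-6, esr_ohms=0.005, esl_henries=2e-9)
mlcc = dict(c_farads=47e-6, esr_ohms=0.002, esl_henries=0.5e-9)

for f in (1e5, 1e6, 1e7, 1e8):  # 100 kHz .. 100 MHz
    print(f"{f:>11.0f} Hz  SP-CAP {impedance(f, **sp_cap):8.4f} ohm"
          f"   MLCC {impedance(f, **mlcc):8.4f} ohm")
```

With these assumed values the bulk SP-CAP wins at low frequency (larger capacitance), but above a few MHz the MLCC's lower ESL gives it the lower impedance, which is exactly the high-frequency filtering role discussed above.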

Links:
- source
- via

- New RTX 3080 Hitching and Crashing Across All Titles @ nvidia forums
- RTX 3080 Crash @ linustechtips.com



ZOTAC RTX 3080 with six POSCAPs


MSI RTX 3080 with five SP-CAPs and one MLCC


NVIDIA RTX 3080 with four SP-CAPs and two MLCCs


ASUS RTX 3080 with six MLCCs




Stefan

Ampere: GA100 host, copy engine, and MMU reference manuals

Several new reference manuals, and some additional content for several pre-existing ones as well. This is mostly just bringing Ampere up to parity with what NVIDIA released for Volta and Turing.

Stefan

Turing and Ampere interrupt maps

These are spreadsheets in .csv format containing the unit names and interrupt numbers for Turing and Ampere chips.

This information makes it easier to figure out interrupt routing for drivers (such as Nouveau) that bake the numbers directly into their code.
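A driver developer could turn such a file into a lookup table with a few lines of Python. The sample below is a tiny, made-up fragment in the spirit of the published .csv files; the real column names and interrupt numbers may differ:

```python
import csv
import io

# Hypothetical sample rows -- the actual files use NVIDIA's own
# column names and per-chip interrupt numbers.
SAMPLE = """\
name,interrupt
NV_PFB,1
NV_PFIFO,8
NV_PGRAPH,12
"""

def load_interrupt_map(text: str) -> dict:
    """Map unit name -> interrupt number from one CSV file."""
    return {row["name"]: int(row["interrupt"])
            for row in csv.DictReader(io.StringIO(text))}

irqs = load_interrupt_map(SAMPLE)
print(irqs["NV_PGRAPH"])  # 12
```

Generating the table from the CSV, rather than hard-coding the numbers, keeps the driver in sync with whatever NVIDIA publishes per chip.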

JeGX

Official news:

NVIDIA Doubles Down: Announces A100 80GB GPU, Supercharging World's Most Powerful GPU for AI Supercomputing

Quote
SC20—NVIDIA today unveiled the NVIDIA A100 80GB GPU — the latest innovation powering the NVIDIA HGX™ AI supercomputing platform — with twice the memory of its predecessor, providing researchers and engineers unprecedented speed and performance to unlock the next wave of AI and scientific breakthroughs.

The new A100 with HBM2e technology doubles the A100 40GB GPU's high-bandwidth memory to 80GB and delivers over 2 terabytes per second of memory bandwidth. This allows data to be fed quickly to A100, the world's fastest data center GPU, enabling researchers to accelerate their applications even faster and take on even larger models and datasets.

"Achieving state-of-the-art results in HPC and AI research requires building the biggest models, but these demand more memory capacity and bandwidth than ever before," said Bryan Catanzaro, vice president of applied deep learning research at NVIDIA. "The A100 80GB GPU provides double the memory of its predecessor, which was introduced just six months ago, and breaks the 2TB per second barrier, enabling researchers to tackle the world's most important scientific and big data challenges."

The NVIDIA A100 80GB GPU is available in NVIDIA DGX™ A100 and NVIDIA DGX Station™ A100 systems, also announced today and expected to ship this quarter.

Leading systems providers Atos, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Inspur, Lenovo, Quanta and Supermicro are expected to begin offering systems built using HGX A100 integrated baseboards in four- or eight-GPU configurations featuring A100 80GB in the first half of 2021.

Link: https://nvidianews.nvidia.com/news/nvidia-doubles-down-announces-a100-80gb-gpu-supercharging-worlds-most-powerful-gpu-for-ai-supercomputing
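The "over 2 terabytes per second" claim can be sanity-checked from the commonly reported A100 figures: a 5120-bit memory interface with HBM2e running at roughly 3.19 Gb/s per pin (both numbers are assumptions from public spec sheets, not from the press release itself):

```python
# Sanity-check of the A100 80GB bandwidth claim, assuming the commonly
# reported 5120-bit memory interface and ~3.186 Gb/s per pin for HBM2e.

bus_width_bits = 5120       # assumed memory interface width
gbps_per_pin = 3.186        # assumed effective data rate per pin

bandwidth_gb_s = bus_width_bits * gbps_per_pin / 8  # bits -> bytes
print(f"{bandwidth_gb_s:.0f} GB/s")  # ~2039 GB/s
```

That works out to roughly 2039 GB/s, which matches the "over 2 terabytes per second" figure in the announcement.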