Nvidia H200 GPU: 141GB of HBM3e and 4.8 TB/s for Gen AI

Nvidia’s H200 GPU will power next-generation AI exascale supercomputers with 141GB of HBM3e and 4.8 TB/s of memory bandwidth

Adding extra memory capacity and bandwidth to the Hopper H100 architecture, the H200 is the most powerful GPU Nvidia has ever produced

With six HBM3e stacks and 141GB of total HBM3e memory, the upgraded H200 runs at an effective 6.25 Gbps per pin, providing 4.8 TB/s of total bandwidth per GPU
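That bandwidth figure follows directly from the stack configuration. A quick sanity check, assuming the standard 1024-bit interface per HBM3e stack (a JEDEC-defined width, not stated in the article):

```python
# Back-of-the-envelope check of the H200's 4.8 TB/s memory bandwidth.
# Assumption: each HBM3e stack exposes the standard 1024-bit interface.
stacks = 6
bus_width_bits = 1024          # per stack
pin_speed_gbps = 6.25          # effective data rate per pin

total_bus_bits = stacks * bus_width_bits             # 6144-bit bus
bandwidth_gbs = total_bus_bits * pin_speed_gbps / 8  # bits -> bytes

print(f"{bandwidth_gbs / 1000:.1f} TB/s")  # -> 4.8 TB/s
```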

That is a huge boost over the original H100, which offered 3.35 TB/s of bandwidth from 80GB of HBM3. Some H100 variants did ship with more memory: the H100 NVL coupled two boards for a combined 188GB

An eight-GPU system delivers around 32 petaflops of FP8 compute, since the H200 retains the H100’s 3,958 teraflops of FP8 per GPU
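The ~32-petaflop headline number is just the per-GPU FP8 throughput scaled up, as a quick check shows:

```python
# Aggregate FP8 compute of an eight-GPU system, using the per-GPU
# figure quoted in the article (the H200 keeps the H100's compute).
fp8_per_gpu_tflops = 3958
gpus = 8

total_pflops = fp8_per_gpu_tflops * gpus / 1000  # teraflops -> petaflops
print(f"{total_pflops:.1f} PFLOPS")  # -> 31.7 PFLOPS, i.e. "around 32"
```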

A total of 624GB of memory will be included in each GH200 “superchip”. The new GH200 pairs the CPU’s 480GB of LPDDR5x with 144GB of HBM3e (of which 141GB is usable), while the original GH200 paired 480GB of LPDDR5x with 96GB of HBM3
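The 624GB headline number is simply the sum of the CPU-attached and GPU-attached pools:

```python
# Breakdown of the GH200 superchip's 624GB total memory.
lpddr5x_gb = 480   # CPU-attached LPDDR5x
hbm3e_gb = 144     # GPU-attached HBM3e (141GB usable)

print(lpddr5x_gb + hbm3e_gb)  # -> 624
```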

Nvidia did offer some performance comparisons between the GH200 and a “modern dual-socket x86” setup; note, however, that the quoted speedups were relative to “non-accelerated systems.”