Micron 6500 ION SSD: Powering AI with 256 Accelerators
These results demonstrate how effectively the Micron 9400 NVMe SSD performs as a local cache in an AI server.
For SC23, the MLPerf Storage AI workload was tested on a WEKA storage cluster powered by the 30TB Micron 6500 ION SSD.
MLPerf Storage aims to tackle several challenges in benchmarking the storage workload of AI training systems, including the limited size of available datasets and the high cost of AI accelerators.
The MLPerf Storage benchmark is also evolving from version 0.5 to its next release, expected in early 2024.
In version 0.5, the MLPerf Storage benchmark simulates NVIDIA V100 accelerators; an NVIDIA DGX-2 server contains sixteen V100 accelerators.
At 16 V100 GPUs per system, simulating 256 accelerators is equivalent to 16 NVIDIA DGX-2 systems, an astonishingly large number of AI systems served by a six-node WEKA cluster.
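The DGX-2 equivalence above is simple arithmetic; a minimal sketch (variable names are illustrative, the figures come from the result described in this section):

```python
# Figures from the SC23 result: the benchmark simulated 256 V100
# accelerators against the six-node WEKA cluster.
simulated_accelerators = 256
v100_per_dgx2 = 16  # an NVIDIA DGX-2 houses sixteen V100 GPUs

# Number of full DGX-2 systems the simulated accelerator count represents.
equivalent_dgx2_systems = simulated_accelerators // v100_per_dgx2
print(equivalent_dgx2_systems)  # → 16
```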
Future AI training servers built on accelerators such as the H100 and H200 (PCIe Gen5) and the X100 (PCIe Gen6) are expected to demand extremely high storage throughput.