Google Cloud Parallelstore: Powering AI and HPC Workloads

Parallelstore is built on the Distributed Asynchronous Object Storage (DAOS) architecture, which combines a key-value store with fully distributed metadata to deliver high throughput and IOPS.

Controlling the cost of AI workloads depends on maximizing goodput to GPUs and TPUs, which in turn requires delivering data to the accelerators efficiently.

At its largest deployment size of 100 TiB, Parallelstore scales to roughly 115 GiB/s of throughput, with low latency of ~0.3 ms, 3 million read IOPS, and 1 million write IOPS.

According to Google Cloud benchmarks, Parallelstore's performance on small files and metadata operations enables up to 3.7x higher training throughput.

With its novel architecture and performance, Parallelstore is the storage solution for keeping demanding GPU and TPU workloads fed with data.

With Cluster Toolkit, you can integrate the Parallelstore module into your blueprint with just four lines of code and begin using it right away.
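As a rough sketch of what that integration looks like, the fragment below adds a Parallelstore module to a Cluster Toolkit blueprint. The module `id`, the `modules/file-system/parallelstore` source path, and the `network` reference are assumptions based on Cluster Toolkit conventions; check the toolkit's module catalog for the exact names in the current release.

```yaml
# Hypothetical Cluster Toolkit blueprint fragment (illustrative only):
# the source path and the referenced network module id are assumptions.
- id: parallelstorefs
  source: modules/file-system/parallelstore
  use: [network]
```

Modules elsewhere in the blueprint (for example, a compute partition) can then list `parallelstorefs` in their own `use` field so the file system is mounted on those nodes.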