Latest News

Date: 06/06/2025
NVIDIA, CoreWeave, and IBM have submitted the largest MLPerf Training v5.0 result in history, built around the GB200 Grace Blackwell Superchip. The submission used 2,496 Blackwell GPUs running on CoreWeave's AI-optimized cloud platform. This is the largest NVIDIA GB200 NVL72 cluster submitted to MLPerf Training to date, 34 times larger than the only previous submission from a cloud provider, highlighting the immense scale of CoreWeave's cloud platform and its readiness for today's demanding AI workloads. The submission also achieved a breakthrough on Llama 3.1 405B, the largest and most complex model in the benchmark suite, completing the training run in just 27.3 minutes. Compared with other participants' submissions at similar cluster scales, CoreWeave's GB200 cluster delivered more than twice the training performance. This result underscores the significant performance leap enabled by the GB200 NVL72 architecture and the strength of CoreWeave's infrastructure in delivering consistent, top-tier performance for AI workloads.