Oracle has introduced the world's first zettascale cloud computing cluster, powered by NVIDIA Blackwell GPUs. The cluster provides unprecedented scale, with up to 131,072 GPUs delivering 2.4 zettaFLOPS of peak performance. This milestone places Oracle Cloud Infrastructure (OCI) at the vanguard of AI-powered cloud computing, giving businesses and researchers the capabilities they need to handle large-scale AI workloads.
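For context, a quick back-of-the-envelope check (not part of Oracle's announcement) shows how the headline figure relates to per-GPU throughput. It assumes the 2.4 zettaFLOPS is aggregate low-precision peak throughput (FP4 with sparsity, the format typically used for Blackwell headline numbers); at higher precisions such as FP8 or FP16, the aggregate peak would be correspondingly lower.

```python
# Back-of-the-envelope sanity check of the quoted peak figure.
# Assumption (not stated in the announcement): the 2.4 zettaFLOPS number is
# aggregate low-precision (FP4, sparse) peak throughput, as typically quoted
# for Blackwell-class GPUs.

TOTAL_PEAK_FLOPS = 2.4e21   # 2.4 zettaFLOPS, as quoted
GPU_COUNT = 131_072         # maximum cluster size, as quoted

per_gpu_petaflops = TOTAL_PEAK_FLOPS / GPU_COUNT / 1e15
print(f"Implied per-GPU peak: ~{per_gpu_petaflops:.1f} petaFLOPS")
# Prints roughly 18.3 petaFLOPS per GPU, which is in the range NVIDIA quotes
# for B200 low-precision tensor-core throughput with sparsity.
```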
Oracle's AI infrastructure, powered by NVIDIA GPUs, supports some of the most demanding AI workloads across several industries. Its distributed cloud design improves flexibility and data sovereignty, which is critical for sectors such as healthcare and for customers like Zoom and WideLabs.
Mahesh Thiagarajan, Executive Vice President of Oracle Cloud Infrastructure, said, "We have one of the broadest AI infrastructure offerings and are supporting customers that are running some of the most demanding AI workloads in the cloud."
The collaboration between Oracle and NVIDIA paves the way for further advances in AI research and development. Ian Buck, NVIDIA's Vice President of Hyperscale and HPC, stated, "NVIDIA's full-stack AI computing platform on Oracle's broadly distributed cloud will deliver AI compute capabilities at unprecedented scale."
With up to 131,072 NVIDIA B200 GPUs, OCI offers more than six times the GPU capacity of competing clusters from AWS, Azure and Google Cloud; AWS UltraClusters, for instance, scale to roughly 20,000 GPUs. OCI also supports a range of NVIDIA GPU architectures, including Hopper and Blackwell, making it suitable for AI workloads of varying sizes.
The latest AI supercomputer marks a significant technological advance, outpacing competitors still operating at exascale. The system also includes NVIDIA GB200 Grace Blackwell Superchips, which NVIDIA says deliver up to 4 times faster training and 30 times faster inference than previous-generation H100 GPUs, capabilities that are critical for real-time AI model inference and multimodal LLM training.
Larry Ellison, Oracle's Chairman and CTO, underscored the significance of the new supercluster, saying, "Leaders such as OpenAI are selecting OCI because it is the world's fastest and most cost-effective AI infrastructure. Oracle and OpenAI cannot be stopped now that their supercluster has grown to 131,072 NVIDIA B200 GPUs."
This bold move strengthens Oracle's position as a pioneer in AI-powered cloud computing, enabling faster and more efficient AI model development and deployment across sectors.