Google Cloud announced a new supercomputer virtual-machine series aimed at rapidly training large AI models. Unveiled at the Google I/O conference, the new A3 supercomputer VMs are purpose-built to handle the considerable resource demands of large language models (LLMs).

Google Product Management Director Roy Kim and Corporate Product Manager Chris Kleban explained in a co-authored blog post that artificial intelligence and machine learning require massive amounts of computing power from the underlying infrastructure.

“A3 GPU VMs were purpose-built to deliver the highest-performance training for today’s ML workloads, complete with modern CPU, improved host memory, next-generation Nvidia GPUs, and major network upgrades,” the company said in a statement.

The instances are powered by eight Nvidia H100 GPUs, Nvidia's newest GPU, which began shipping earlier this month, along with Intel's 4th Generation Xeon Scalable processors, 2TB of host memory, and 3.6 TB/s of bisectional bandwidth among the eight GPUs via Nvidia's NVSwitch and NVLink 4.0 interconnects.
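The quoted 3.6 TB/s bisection figure lines up with Nvidia's published per-GPU NVLink numbers. A quick back-of-the-envelope check, assuming each H100 exposes its full 900 GB/s of NVLink 4.0 bandwidth across a non-blocking NVSwitch (the topology split is our assumption, not stated by Google):

```python
# Sanity-check the 3.6 TB/s bisection bandwidth claim for an 8-GPU A3 VM.
# Assumes 900 GB/s aggregate NVLink 4.0 bandwidth per H100 (Nvidia's
# published figure) and a non-blocking NVSwitch.

NUM_GPUS = 8
NVLINK_BW_PER_GPU_GB_S = 900  # GB/s per H100 over NVLink 4.0

# Bisection bandwidth: cut the 8 GPUs into two halves of 4; each GPU in
# one half can drive its full NVLink bandwidth across the cut.
bisection_tb_s = (NUM_GPUS // 2) * NVLINK_BW_PER_GPU_GB_S / 1000

print(bisection_tb_s)  # 3.6
```

Four GPUs on each side of the cut, each pushing 900 GB/s, yields exactly the 3.6 TB/s Google cites.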

Altogether, Google is claiming these machines can provide up to 26 exaflops of performance. That's the cumulative performance of the entire supercomputer, not each individual instance. Still, it dwarfs the figure for the fastest ranked supercomputer, Frontier, which came in at just over one exaflop, though the two numbers aren't directly comparable: Frontier's is measured at double precision, while Google's reflects lower-precision AI math.
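For a rough sense of scale, the 26-exaflop figure can be turned into an approximate GPU count. This sketch assumes the claim is quoted at FP8 Tensor Core throughput with sparsity, roughly 3,958 TFLOPS per H100 SXM per Nvidia's datasheet; Google does not state the precision, so the result is only an order-of-magnitude estimate:

```python
# Estimate how many H100s the 26-exaflop claim implies, ASSUMING the
# figure is FP8-with-sparsity peak throughput (~3,958 TFLOPS per H100
# SXM). The precision is our assumption, not Google's statement.

TOTAL_EXAFLOPS = 26
H100_FP8_SPARSE_TFLOPS = 3958
GPUS_PER_A3_VM = 8

total_tflops = TOTAL_EXAFLOPS * 1_000_000   # 1 exaflop = 1e6 TFLOPS
gpus = total_tflops / H100_FP8_SPARSE_TFLOPS
vms = gpus / GPUS_PER_A3_VM

print(round(gpus), round(vms))  # 6569 821
```

Under that assumption, the claim works out to roughly 6,600 GPUs spread across about 800 A3 VMs.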

According to Google, A3 is the first production-level deployment of its GPU-to-GPU data interface, which Google calls the infrastructure processing unit (IPU). It allows data to be shared at 200 Gbps directly between GPUs without going through the CPU. The result is a ten-fold increase in available network bandwidth for A3 virtual machines compared to prior-generation A2 VMs.
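To put 200 Gbps in concrete terms, here is what it means for moving a model's weights between hosts. The 70-billion-parameter FP16 model below is a hypothetical workload chosen for illustration, not one Google cites:

```python
# Illustrative transfer time over the A3's 200 Gbps GPU-to-GPU network
# link. The 70B-parameter FP16 model is a hypothetical example.

LINK_GBPS = 200          # per the A3 announcement
PARAMS = 70e9            # hypothetical LLM parameter count
BYTES_PER_PARAM = 2      # FP16

payload_gb = PARAMS * BYTES_PER_PARAM / 1e9   # 140 GB of weights
link_gb_per_s = LINK_GBPS / 8                 # 200 Gbps = 25 GB/s

print(payload_gb / link_gb_per_s)  # 5.6 (seconds)
```

At wire speed, the full weight set crosses the link in under six seconds, which is why bypassing the CPU on this path matters for multi-node training.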

A3 workloads will be run on Google’s specialized Jupiter data center networking fabric, which the company says “scales to tens of thousands of highly interconnected GPUs and allows for full-bandwidth reconfigurable optical links that can adjust the topology on demand.”

Google will offer the A3 in two ways: customers can run it themselves, or they can opt for a managed service in which Google handles most of the work. In the self-service option, the A3 VMs run on Google Kubernetes Engine (GKE) and Google Compute Engine (GCE). As a managed service, the VMs run on Vertex AI, the company's managed machine-learning platform.
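For the self-service route, provisioning would look roughly like the following sketch. The machine-type name and zone are assumptions extrapolated from Google's existing A2 naming convention (a2-highgpu-8g), not details confirmed in the announcement:

```shell
# Hypothetical sketch of creating a self-managed A3 VM on Compute Engine.
# The machine type "a3-highgpu-8g" and the zone are assumed, following
# the naming pattern of the prior-generation a2-highgpu-8g.
gcloud compute instances create my-a3-vm \
    --zone=us-central1-a \
    --machine-type=a3-highgpu-8g \
    --image-family=debian-11 \
    --image-project=debian-cloud
```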

The A3 virtual machines are available in preview, which requires filling out an application to join the Early Access Program, and Google makes no promises that every applicant will get a spot.
