Computex Nvidia’s Grace CPU and Grace-Hopper Superchips will make their first appearance early next year in systems based on reference server designs unveiled at Computex 2022 this week.
Nvidia hopes these Arm-compatible HGX-series designs will be used to build systems powering what it believes will be a “half trillion dollar” market spanning machine learning, digital-twin simulation, and cloud gaming applications.
“This transformation requires us to reimagine the datacenter at every level, from hardware to software, from chips to infrastructure to systems,” Paresh Kharya, senior director of product management and marketing at Nvidia, said during a press briefing.
All four reference systems are powered by Nvidia’s Arm-compatible Grace and Grace-Hopper Superchips announced at GTC this spring.
The Grace Superchip fuses two Grace CPU dies, connected by the chipmaker’s 900GB/s NVLink-C2C interconnect tech, onto a single daughter board that delivers 144 CPU cores and 1TB/s of memory bandwidth in a 500W footprint. Grace-Hopper swaps one of the CPU dies for an H100 GPU die, also connected directly to the CPU by NVLink-C2C.
These latest additions to the HGX line are supposed to be the chipmaker’s answer to large HPC deployments where compute density is the primary concern. One reference design, the 2U HGX Grace-Hopper blade node, uses a Grace-Hopper Superchip with 512GB of LPDDR5x DRAM and 80GB of HBM3 memory.
For compute workloads that aren’t optimized for GPU acceleration, Nvidia also offers the 1U HGX Grace blade server, which swaps out the Grace-Hopper Superchip for a CPU-only module with 1TB of LPDDR5x memory. Two HGX Grace-Hopper or four HGX Grace nodes can be slotted into a single chassis.
“For these HGX references, Nvidia will provide [OEMs with] the Grace-Hopper and Grace CPU Superchip modules as well as the corresponding PCB reference designs,” Kharya said.
Six Nvidia partner vendors — Asus, Foxconn, Gigabyte, QCT, Supermicro, and Wiwynn — plan to develop systems based on the reference designs, with initial shipments slated for early next year.
Alongside HGX, Nvidia also unveiled refreshed CGX and OVX reference designs aimed at cloud gaming and digital twin simulation, respectively.
Both designs pair a Grace CPU Superchip with a variety of PCIe-based GPUs, including Nvidia’s A16.
Networking for all four systems is handled by an Nvidia BlueField-3, but we’re told Nvidia also plans to offer NVLink connectivity for Grace-Hopper-based systems to enable GPU memory pooling across nodes.
With Nvidia’s top-end Arm CPUs slated for commercial release early next year, Kharya emphasized that the company has no plans to walk away from x86 any time soon.
“x86 is a very important CPU. It’s pretty much all of the market of Nvidia’s GPUs today and we’ll continue to support x86 and will continue to support Arm-based CPUs, offering our customers and market the choice for wherever they want to deploy accelerated computing,” he said.
Jetson Orin multiplies at the edge
Alongside HPC-focused reference designs, Nvidia signaled broader deployment of its low-power Jetson AGX Orin platform by more than 30 partner vendors in systems targeted at edge and embedded applications, including AI inference.
Announced at GTC this spring, the 60W Jetson AGX Orin developer kit is a single-board computer based on Nvidia’s Ampere-series GPU and an Arm CPU with 12 Cortex-A78AE cores.
“We are seeing a strong momentum in robotics and edge AI use cases across large industries such as retail, agriculture, manufacturing, smart cities, logistics, and healthcare. All these applications need processing at the edge for latency, bandwidth, or data sovereignty reasons,” Amit Goel, director of product management for Nvidia’s edge, AI, and robotics business unit, said during a press briefing this week. “Nvidia Jetson has become the platform of choice for these applications,” he opined.
To address growing demand for the platform, Nvidia also announced four iterations on the design, including eight-core 32GB and 12-core 64GB versions of its AGX Orin platform coming in July and October. The chip house also plans to launch 8GB and 16GB versions of the smaller Orin NX platform, which shares the same SODIMM-style edge connector as its predecessor, in September and December.
Nvidia claims more than a million developers, 6,000 companies, and 150 partners are developing products based on the low-power AI edge platform. ®