At the International Supercomputing Conference (ISC 2022) trade show, HPE demonstrated the blades that will power two exascale supercomputers set to come online this year: Frontier and Aurora. Naturally, HPE had to use sophisticated and power-hungry components to achieve unprecedented compute performance. As a result, both machines rely on liquid cooling, but even large water blocks cannot hide some intriguing design peculiarities of the blades.
Both the Frontier and Aurora supercomputers are built by HPE using its Cray EX architecture. While the machines leverage AMD and Intel hardware, respectively, both use high-performance x86 CPUs to run general-purpose tasks and GPU-based compute accelerators to run highly parallel supercomputing and AI workloads.
The Frontier supercomputer builds on HPE's Cray EX235a nodes powered by two of AMD's 64-core EPYC 'Trento' processors featuring the company's Zen 3 microarchitecture enhanced with 3D V-Cache and optimized for high clocks. The Frontier blades also come with eight of AMD's Instinct MI250X accelerators, each featuring 14,080 stream processors and 128GB of HBM2E memory. Each node delivers peak FP64/FP32 vector performance of around 383 TFLOPS and peak FP64/FP32 matrix performance of approximately 765 TFLOPS. Both the CPUs and the compute GPUs in HPE's Frontier blade use a unified liquid cooling system with two nozzles on the front of the node.
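The per-node figures quoted above line up with AMD's published per-accelerator peaks (roughly 47.9 TFLOPS FP64 vector and 95.7 TFLOPS FP64 matrix per MI250X). A quick back-of-the-envelope check, assuming those per-GPU numbers:

```python
# Sanity-check of the quoted Frontier node peaks, assuming AMD's
# published per-MI250X figures (the article itself only gives node totals).
FP64_VECTOR_TFLOPS = 47.9   # peak FP64/FP32 vector per MI250X (assumed)
FP64_MATRIX_TFLOPS = 95.7   # peak FP64/FP32 matrix per MI250X (assumed)
GPUS_PER_NODE = 8           # MI250X accelerators per node, per the article

node_vector = GPUS_PER_NODE * FP64_VECTOR_TFLOPS
node_matrix = GPUS_PER_NODE * FP64_MATRIX_TFLOPS
print(f"node vector peak: {node_vector:.1f} TFLOPS")  # 383.2 TFLOPS
print(f"node matrix peak: {node_matrix:.1f} TFLOPS")  # 765.6 TFLOPS
```

Eight accelerators per node multiply out to about 383 TFLOPS vector and 765 TFLOPS matrix, matching the rounded figures in the article.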
The Aurora blade is currently named just that, carries an Intel badge, and does not have an HPE Cray EX model number yet, perhaps because it still needs some polishing. HPE's Aurora blades employ two Intel Xeon Scalable 'Sapphire Rapids' processors with more than 40 cores and 64GB of HBM2E memory per socket (in addition to DDR5 memory). The nodes also feature six of Intel's Ponte Vecchio accelerators, but Intel is quiet about the exact specifications of these beasts, which pack over 100 billion transistors each.
One thing that catches the eye with the Aurora blade set to be used in the 2 ExaFLOPS Aurora supercomputer is the mysterious black boxes with a triangular 'hot surface' sign positioned next to the Sapphire Rapids CPUs and Ponte Vecchio compute GPUs. We do not know what they are, but they may be modular power delivery circuitry added for extra flexibility. After all, back in the day, VRMs were removable, so using removable modules for extremely power-hungry components might still make some sense (assuming the required voltage tolerances are met), particularly with pre-production hardware.
Again, the Aurora blade uses liquid cooling for its CPUs and GPUs, although this cooling system is entirely different from the one used by the Frontier blades. Intriguingly, it looks like the Ponte Vecchio compute GPUs in the Aurora blade use different water blocks than the ones Intel demonstrated a few months ago, though we can only wonder about the possible reasons for that.
Interestingly, the DDR5 memory modules the Intel-based blade uses come with rather formidable heat spreaders that look even larger than those used on enthusiast-grade memory modules. Keeping in mind that DDR5 RDIMMs also carry a power management IC and voltage regulation circuitry on the module, they naturally need better cooling than DDR4 sticks, especially in space-constrained environments like blade servers.