Delivering five petaflops of AI performance, the elastic architecture of the NVIDIA DGX A100 enables enterprises to accelerate diverse AI workloads such as data analytics, training, and inference.
The Nvidia DGX A100 draws on the high-performance capabilities of two AMD EPYC 7742 processors, which together provide 128 cores, DDR4-3200 memory and PCIe 4 support, with boost speeds of up to 3.4 GHz. The 2nd Gen AMD EPYC processor is currently the only x86-architecture server processor that supports PCIe 4, providing leadership high-bandwidth I/O that's critical for high-performance computing and for connections between the CPU and other devices such as GPUs.
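The "effectively doubled" I/O bandwidth cited below follows directly from the link rates: PCIe 4 runs at 16 GT/s per lane versus 8 GT/s for PCIe 3, with the same 128b/130b encoding. A rough back-of-the-envelope sketch (figures are approximate per-direction throughput, ignoring protocol overheads beyond line encoding):

```python
def pcie_bandwidth_gbs(gts_per_lane, lanes=16):
    """Approximate usable GB/s per direction for a PCIe link.

    gts_per_lane: raw line rate in GT/s (8 for PCIe 3, 16 for PCIe 4).
    128/130 accounts for 128b/130b line encoding; divide by 8 for bits -> bytes.
    """
    return gts_per_lane * lanes * (128 / 130) / 8

gen3 = pcie_bandwidth_gbs(8)   # x16 PCIe 3 link: ~15.75 GB/s
gen4 = pcie_bandwidth_gbs(16)  # x16 PCIe 4 link: ~31.51 GB/s
print(round(gen3, 2), round(gen4, 2), round(gen4 / gen3, 1))  # → 15.75 31.51 2.0
```

The doubling comes purely from the line rate; lane counts and encoding are unchanged between the two generations.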
Raghu Nambiar, corporate vice president, data center ecosystems and application engineering at AMD, said: "Only 2nd Gen AMD EPYC processors can provide up to 64 cores and 128 lanes of PCIe 4 interconnectivity in a single x86 data center processor, and we're excited to see how the power of the Nvidia DGX A100 system enables the I/O bandwidth to be effectively doubled."
“With second generation EPYC processors, our partners and customers can maximise performance and cost efficiencies in heterogeneous computing, virtualized and hyper converged infrastructure workloads, providing teams with the flexibility and capability to stay at the forefront of innovation.”
Charlie Boyle, vice president and general manager, DGX systems at Nvidia, said: "The Nvidia DGX A100 delivers a tremendous leap in performance and capabilities. The second generation AMD EPYC processors used in DGX A100 provide high performance and support for PCIe Gen4. Nvidia has put those features to work to create the world's most powerful AI system while maintaining compatibility with the GPU-optimised software stack used across the entire DGX family."