Published in News

Nvidia boss talks cunning plans with Goldman Sachs

on 12 September 2024


It's all about moats without walls

Nvidia CEO Jensen Huang has outlined his glorious five-year plan to Goldman Sachs, explaining how his outfit will stay ahead in the AI race.

Lately, Huang has been under a bit of pressure from the cocaine nose jobs of Wall Street, who think Nvidia shares are overpriced and that AI is a flash in the pan.

Huang said Nvidia was banking on its software expertise and broad GPU ecosystem to stay ahead in the fiercely competitive AI chip market.

Then, he got all medieval in his imagery, describing Nvidia as having a competitive moat. Rather than smelly water, his moat consisted of a large installed base of GPUs across multiple platforms, the ability to enhance hardware with software such as domain-specific libraries, and expertise in building rack-level systems. So there were three moats, really, but he didn't mention any walls or drawbridges.

The CEO also mentioned Nvidia's chip design prowess, noting that the company has developed seven different chips for its upcoming Blackwell platform.

Addressing supply chain concerns, Huang said Nvidia has sufficient in-house intellectual property to shift manufacturing without significant disruption. According to Huang, the company plans to begin shipping Blackwell-based products in the fourth quarter of fiscal 2025, with volume production ramping up in fiscal 2026.

Goldman Sachs reported to its clients that Huang told them Moore's Law was no longer delivering the rate of innovation it had in the past and, as such, was driving computation inflation in data centres.

He noted that the densification and acceleration of the $1 trillion data centre infrastructure installed base alone would drive growth over the next decade, as it would deliver material performance improvement and cost savings.

Huang claimed that humanity had hit the end of transistor scaling that enabled better utilization rates and cost reductions in the previous virtualization and cloud computing cycles.

While using a GPU to augment a CPU roughly doubles the absolute cost (~2x) in the case of Spark (a distributed processing and analytics engine for big data), the speed-up of ~20x means the net cost benefit could be as large as ~10x.
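Huang's Spark numbers do hang together arithmetically. A back-of-the-envelope sketch in Python (the ~2x cost and ~20x speed-up figures come from his remarks; the variable names and baseline of 1.0 are our own assumptions):

```python
# Rough check of the claimed Spark economics: ~2x hardware cost,
# ~20x speed-up, therefore ~10x cheaper per unit of work.
cpu_cost = 1.0              # normalised baseline cost of a CPU-only system
gpu_cost_multiplier = 2.0   # GPU-augmented system costs ~2x in absolute terms
speedup = 20.0              # claimed ~20x speed-up for a Spark-like workload

# Cost per unit of work: total cost divided by throughput.
cost_per_unit_work_cpu = cpu_cost / 1.0
cost_per_unit_work_gpu = (cpu_cost * gpu_cost_multiplier) / speedup

# Net cost benefit of going GPU-accelerated.
net_benefit = cost_per_unit_work_cpu / cost_per_unit_work_gpu
print(net_benefit)  # -> 10.0
```

In other words, paying twice as much for a machine that finishes twenty times faster leaves you ten times better off per job, which is the ~10x figure Huang quoted.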

From a revenue generation perspective, Huang said that hyperscale customers can generate $5 in rental revenue for every $1 spent on Nvidia's infrastructure, given the sustained strength in the demand for accelerated computing.

Last modified on 12 September 2024