
Moore's Law is murdered, claims Nvidia boss

27 September 2017


GPU computing did it in the library with the candlestick

Nvidia founder and CEO Jensen Huang has said that the rise of GPU computing, coming as the CPU era declines, has killed off Moore's Law.

Huang said it was just as well that Nvidia had built a GPU-centered ecosystem, one which had won support from China's top-five AI players and left the company well placed to cash in on the shift.

He was talking to the assembled throngs at the GPU Technology Conference (GTC) China 2017, in Beijing, on the topic of "AI: Trends, Challenges and Opportunities". He claimed to be the first head of a major semiconductor company to say publicly that Moore's Law is dead.

Moore's Law, named after Intel co-founder Gordon Moore, reflects his observation that the number of transistors in a dense integrated circuit doubles approximately every two years. Huang said we are now in an era beyond Moore's Law, which has become outdated, and stressed that both GPU computing capability and neural network performance are improving at a faster pace than Moore's Law would predict.

While the number of CPU transistors has grown at an annual pace of 50 percent, CPU performance has advanced by only 10 percent a year, Huang said, adding that designers can hardly work out more advanced parallel instruction set architectures for CPUs and that GPUs will therefore soon replace them.
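To see how quickly the gap Huang describes compounds, here is a back-of-envelope sketch using his growth figures; the time horizons and the comparison to classic Moore's Law are our own, added purely for illustration:

```python
# Back-of-envelope sketch of Huang's figures (illustrative only).
# Classic Moore's Law doubles transistor counts every two years,
# i.e. roughly 2 ** 0.5 ~= 1.41x per year; Huang cites ~1.5x.
TRANSISTOR_GROWTH = 1.50   # 50 percent per year (Huang's figure)
PERFORMANCE_GROWTH = 1.10  # 10 percent per year (Huang's figure)

for years in (2, 5, 10):
    transistors = TRANSISTOR_GROWTH ** years
    performance = PERFORMANCE_GROWTH ** years
    print(f"after {years:2d} years: transistors x{transistors:5.1f}, "
          f"performance x{performance:4.1f}, gap x{transistors / performance:4.1f}")
```

On those numbers the gap between transistor budget and realised CPU performance grows to more than 20x within a decade, which is the whole of Huang's case for shifting the extra transistors to parallel GPU workloads instead.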

Intel disagrees and said the opposite at its Technology and Manufacturing Day conference in Beijing, where it revealed a few more details about its 10nm process.

Huang said that Nvidia's GPUs can make up for the deficiencies of CPUs, as their high-intensity computational capacity makes them the ideal solution for AI applications.

He also disclosed that Alibaba, Baidu, Tencent, JD.com and iFLYTEK, now China's top five e-commerce and AI players, have adopted Nvidia's Volta GPU architecture to support their cloud services, while Huawei, Inspur and Lenovo have also deployed the firm's HGX-based GPU servers.

At the conference, Nvidia also showcased its latest product, TensorRT 3, a programmable inference accelerator that it says can sharply boost the performance and slash the cost of inferencing from the cloud to edge devices, including self-driving cars and robots. Because TensorRT 3 suits diverse applications, Nvidia claims it lets firms run inferencing on far less data centre hardware and so save a lot of money.

Nvidia is now teaming up with Huawei, Inspur and Lenovo to develop the Tesla V100-based HGX-1 accelerator, dedicated to AI applications. Huang said that a single HGX server can outperform 150 traditional CPU servers in handling voice, speech and image recognition and inferencing operations, while the cost of using an HGX-1 server for AI training and inferencing is only one fifth and one tenth, respectively, of that of using traditional CPU servers.
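Taken at face value, the arithmetic is striking. Here is a minimal sketch of one reading of those claims; the ratios are Huang's, while the per-server price is a hypothetical placeholder rather than real pricing:

```python
# Sketch of the claimed HGX-1 economics. The ratios are Huang's claims;
# the per-server price is a made-up placeholder for illustration.
CPU_SERVER_COST = 10_000      # hypothetical price per traditional CPU server
CPU_SERVERS_REPLACED = 150    # Huang: one HGX server matches 150 CPU servers
TRAINING_RATIO = 1 / 5        # Huang: HGX training cost is one fifth of the CPU cost
INFERENCE_RATIO = 1 / 10      # Huang: HGX inferencing cost is one tenth

cpu_fleet_cost = CPU_SERVER_COST * CPU_SERVERS_REPLACED
print(f"CPU fleet for the same work:   ${cpu_fleet_cost:,.0f}")
print(f"claimed HGX training cost:     ${cpu_fleet_cost * TRAINING_RATIO:,.0f}")
print(f"claimed HGX inferencing cost:  ${cpu_fleet_cost * INFERENCE_RATIO:,.0f}")
```

None of this is independently verified, of course; it is simply Huang's pitch run through a calculator.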
