AI forces a "golden age" of chip innovation

11 May 2020

GPUs are not optimised for an AI workload

AI has ushered in a new golden age of semiconductor innovation, as the shortcomings of GPUs for AI number crunching become harder to ignore.

According to Forbes, while most of computing's history has been about the CPU, or central processing unit, AI techniques demand a very specific and intensive set of computations. Deep learning entails the iterative execution of millions or billions of relatively simple multiplication and addition steps, and it is prohibitively inefficient to train a neural network on a CPU.
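Those multiply-and-add steps are, at heart, matrix multiplications. A minimal sketch (using NumPy, with illustrative layer sizes that are our own assumption, not from the article) shows the kind of arithmetic a single dense-layer forward pass entails, and why hardware that runs many multiply-adds in parallel wins:

```python
import numpy as np

# One dense-layer forward pass: deep learning reduces to huge batches
# of multiply-and-add operations like this. A GPU executes thousands
# of these multiply-adds in parallel; a CPU has far fewer parallel
# lanes. (Sizes below are illustrative, not from the article.)
rng = np.random.default_rng(0)
x = rng.standard_normal((64, 1024))   # batch of 64 input vectors
W = rng.standard_normal((1024, 512))  # layer weights
b = np.zeros(512)                     # biases

y = x @ W + b   # roughly 64 * 1024 * 512 multiply-add operations
print(y.shape)  # one layer, one batch: (64, 512)
```

Training repeats this (and the matching backward pass) millions of times, which is what makes massively parallel chips so attractive.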

In the early 2010s, the AI community began to realize that Nvidia's gaming chips were in fact well suited to handle the types of workloads that machine learning algorithms demanded. Through sheer good fortune, the GPU had found a massive new market. Nvidia capitalised on the opportunity, positioning itself as the market-leading provider of AI hardware. The company has reaped incredible gains as a result: Nvidia's market capitalization jumped twenty-fold from 2013 to 2018.

But Gartner analyst Mark Hung warned that this era is coming to an end: "Everyone agrees that GPUs are not optimised for an AI workload."

The GPU has been adopted by the AI community, but it was not born for AI. In recent years, a new crop of entrepreneurs and technologists has set out to re-imagine the computer chip, optimising it from the ground up in order to unlock the potential of AI.

The race is on to develop the hardware that will power the upcoming era of AI. More innovation is happening in the semiconductor industry today than at any time since Silicon Valley's earliest days. Untold billions of dollars are in play.

Forbes said that Google, Amazon, Tesla, Facebook and Alibaba, among other technology giants, all have in-house AI chip programmes. Groq has announced a chip performing one quadrillion operations per second. "If true, this would make it the fastest single-die chip in history."

Cerebras' chip "is about 60 times larger than a typical microprocessor. It is the first chip in history to house over one trillion transistors - 1.2 trillion, to be exact. It has 18 GB of memory on-chip — again, the most ever". Lightmatter believes using light instead of electricity "will enable its chip to outperform existing solutions by a factor of ten".
