Nvidia's next-generation Blackwell graphics processor, dubbed the B100, will be released in 2024, if a slide Nvidia showed at SC23 is to be believed.
For those not in the know, the H100 performs 11 times better than the A100, and the H200 performs 18 times better than the A100. Next year's B100 is set to push performance even higher, according to a chart comparing inference performance on the 175-billion-parameter GPT-3 large language model (LLM).
The B100 will likely hit the market toward the end of next year as Nvidia seeks to double down on its position as the industry leader in graphics processors for AI workloads.
The Blackwell architecture used in next year's B100 chip is also expected to power a GB200 superchip, likely released the following year.
Nvidia said that the B100 will see a greater increase in memory bandwidth, with the Blackwell chips set to incorporate an improved version of the HBM3e memory technology used in the H200.
Those plans may be constrained, however, because Micron, which supplies HBM3e memory for the H200, won't release its next-generation high-bandwidth memory, HBM4, until 2025. Nvidia could turn to Samsung if it wants to ship sooner.
From 2025 onward, Nvidia plans to stick to an annual release cycle, starting with the launch of the X100 and GX200 chips, although the naming for that architecture isn't yet known.