
Update to High Bandwidth Memory DRAM posted

14 January 2016


Memory Stacking will go beyond Fiji

AMD’s Fiji was one of the first GPUs to use JEDEC’s High Bandwidth Memory (HBM) DRAM standard, and now JEDEC has released an update which should attract others.

JEDEC has announced an update to JESD235, designated JESD235A, which is essentially a brief of what users can expect from HBM2. The spec can be read on JEDEC's website.

HBM2 will be available in 2-, 4-, and 8-high stacks. If Nvidia or AMD use four stacks with the upcoming Polaris or Pascal GPUs, it could give them up to 32 GB of video memory. A gaming graphics card with as much as 32 GB of memory is clearly overkill, but it might just work for high-end workstations.
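As a rough sketch of where that 32 GB figure comes from (our back-of-the-envelope maths, assuming the 8 Gb dies the updated spec supports in the tallest 8-high stacks):

8 dies x 8 Gb = 64 Gb = 8 GB per stack
4 stacks x 8 GB = 32 GB of video memory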

JEDEC also says the JESD235A standard adds a "pseudo-channel architecture to improve effective bandwidth," as well as a feature to alert controllers when the temperature of the DRAM exceeds safe levels.

Anyway, with those sorts of numbers it seems fairly likely that Nvidia will be using the technology. The main reason this is a good guess is that the numbers match what we know about Nvidia's next-generation Pascal GPU: Nvidia has claimed that the chip will have four stacks of "3D memory," and it has said that its next-generation chip would come with up to 32 GB of RAM onboard, so it is a match.

Nvidia said that the chip's memory bandwidth would be about three times that of the Maxwell GPU. Considering that the GeForce GTX Titan X had a theoretical bandwidth of 336 GB/s, Nvidia's statement becomes clearer.
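A quick sanity check of that three-times claim, assuming HBM2's headline rate of 256 GB/s per stack (2 Gbps per pin over a 1024-bit interface) and the four-stack layout mentioned above:

4 stacks x 256 GB/s = 1,024 GB/s
3 x 336 GB/s (Titan X) = 1,008 GB/s

The two figures line up almost exactly, which is why the Pascal rumour fits the spec so neatly.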

Barry Wagner, JEDEC HBM Task Group Chairman, said:

“GPUs and CPUs continue to drive demand for more memory bandwidth and capacity, amid increasing display resolutions and the growth in computing datasets. HBM provides a compelling solution to reduce the IO power and memory footprint for our most demanding applications.”

The report further reads that the cheapest version of HBM2 is the 2-Hi variant, which offers roughly 1 GB per stack; that works out to 512 MB per layer in HBM2, compared to 256 MB per layer in HBM1.
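Spelling that out with the report's own numbers: a 2-Hi HBM2 stack is 2 layers x 512 MB = 1 GB per stack, double the per-layer density of HBM1.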

Last modified on 14 January 2016