Xeon E5 v4, which will replace last year’s E5 v3 series everywhere from high-performance professional workstations to multi-socket servers for big data. In fact the v4 is broadly similar, and even socket compatible with the earlier version, but it uses Intel’s more advanced 14nm process node and the biggest of the chips can feature up to 22 processor cores (44 threads).
The Broadwell-EP based Xeon E5 v4 still supports up to quad-channel DDR4 memory, but the maximum supported speed now tops out at 2400MT/s, up from 2133MT/s.
Thanks to its additional cores, the E5-2600 v4 series now features up to 55MB of last-level cache. Support for 3D die stacked (3DS) LRDIMMs has been added, along with DDR4 write CRC, and of course the higher speeds. With three 3DS LRDIMMs per channel, though, the maximum supported speed drops to 1600MT/s.
The changes to the Xeon E5 v4 family’s memory configuration bring reduced latency and increased bandwidth. Intel’s numbers show up to a 15 per cent increase in bandwidth, with latency reductions across the board.
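As a quick sanity check on those figures, peak theoretical DRAM bandwidth follows directly from transfer rate, bus width and channel count. The sketch below plugs in the quoted platform maximums (DDR4-2400 quad-channel for v4, DDR4-2133 for v3); note that the speed bump alone accounts for roughly 12.5 per cent, with the remainder of Intel’s claimed uplift presumably coming from other memory-controller tweaks.

```python
# Peak theoretical bandwidth = transfers/s x bus width (bytes) x channels.
# The 2400/2133 MT/s and quad-channel figures come from the article;
# the 64-bit (8-byte) DDR4 data bus is standard.

def peak_bandwidth_gbs(mt_per_s: int, bus_bytes: int = 8, channels: int = 4) -> float:
    """Return peak memory bandwidth in GB/s (1 GB = 1e9 bytes)."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

v4 = peak_bandwidth_gbs(2400)  # DDR4-2400 -> 76.8 GB/s
v3 = peak_bandwidth_gbs(2133)  # DDR4-2133 -> ~68.3 GB/s
print(f"v4: {v4:.1f} GB/s, v3: {v3:.1f} GB/s, uplift: {v4 / v3 - 1:.1%}")
```

Real-world throughput will of course land below these theoretical peaks.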
In addition to these high-level updates, there are new virtualisation and security related features, along with further performance and efficiency enhancements.
Intel is keen to emphasise that the new chip, with its software-defined infrastructure (SDI) features, is cloud friendly. SDI is the foundation for the most advanced clouds in the world: it makes the delivery of cloud services faster and more efficient by dynamically allocating the required compute, storage and network resources through intelligent software, orchestrating the delivery of applications and services on demand and across many users.
The SDI features include Intel Resource Director Technology (RDT), which enables customers to move to fully automated SDI-based clouds with greater visibility and control over critical shared resources like processor caches and main memory. The result is more intelligent orchestration and improved utilisation and service levels.
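One of RDT's components, Cache Allocation Technology, partitions the last-level cache by giving each class of service a contiguous bitmask of cache ways it may fill. A minimal sketch of that bitmask arithmetic, assuming a 20-way LLC (the associativity of Broadwell-EP's 2.5MB-per-core slices; the helper name is our own):

```python
# Cache Allocation Technology (part of Intel RDT) expresses each class of
# service's share of the LLC as a capacity bitmask; the hardware requires
# the set bits to be contiguous. This helper builds such a mask.

def cat_mask(first_way: int, num_ways: int, total_ways: int = 20) -> int:
    """Return a contiguous capacity bitmask covering num_ways ways,
    starting at first_way, within a total_ways-way cache."""
    if num_ways < 1 or first_way < 0 or first_way + num_ways > total_ways:
        raise ValueError("mask must be non-empty and fit inside the cache")
    return ((1 << num_ways) - 1) << first_way

# Reserve the lowest 4 of 20 ways (20% of the LLC) for one workload,
# and the remaining 16 ways for everything else:
print(hex(cat_mask(0, 4)))    # 0xf
print(hex(cat_mask(4, 16)))   # 0xffff0
```

On Linux these masks end up in the `schemata` files of the resctrl filesystem, which is how an orchestrator would actually apply them.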