Intel Xeon E5 Processor review

After an inordinate amount of speculation, and tales of delays due to a C600 “Patsburg” SAS controller bug, Intel’s Xeon E5 processors officially launched last week with some interesting features and improvements.

The Xeon E5-2600 and E5-1600 CPU families are the first members of Intel’s “Romley” platform to be announced. Both use the Sandy Bridge-EP (efficient performance) variant of the Sandy Bridge architecture and are built on a 32nm manufacturing process. QPI (QuickPath Interconnect) speeds have been boosted, with 6.4GT/sec, 7.2GT/sec and 8.0GT/sec options.

The E5-1600 series comprises five models with four- and six-core options, at speeds ranging from 2.8GHz up to 3.6GHz, and targets workstation applications. Instead, we’ll focus on the E5-2600 series, which is aimed at mainstream two-socket servers plus blade, storage and HPC systems.

It uses the Socket-R LGA 2011 package and ups the number of QPI links to two. There are now four memory channels per socket, each taking up to three DIMMs, so a two-socket system can support up to 24 DIMMs. This allows capacity in a dual-CPU system to be boosted to 768GB using 32GB LR-DIMMs.
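To see where that 768GB figure comes from, the quick sum below multiplies it out; the split of the 24 slots into three DIMMs per channel is our assumption, inferred from the four-channels-per-socket figure above.

```python
# Quick sanity check on the quoted maximum memory capacity.
sockets = 2                 # dual-CPU system
channels_per_socket = 4     # four memory channels per socket
dimms_per_channel = 3       # assumed three DIMM slots per channel
dimm_size_gb = 32           # 32GB LR-DIMMs

total_dimms = sockets * channels_per_socket * dimms_per_channel
capacity_gb = total_dimms * dimm_size_gb

print(total_dimms, "DIMMs")   # 24 DIMMs
print(capacity_gb, "GB")      # 768 GB
```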

Sandy Bridge Architecture



Intel offers eight-, six- and four-core choices. The six eight-core options all have 20MB of L3 cache and support 1,600MHz memory speeds, an 8GT/sec QPI and Turbo Boost 2, which we’ll come to later. The two 80W TDP four-core models have a 10MB L3 cache and a 6.4GT/sec QPI, and support 1,066MHz memory speeds but not Turbo Boost. The odd man out is the eight-core E5-2687W, which has a high 150W TDP and is aimed at workstation duties. There are also the low-power 1.8GHz E5-2650L and 2GHz E5-2630L options, which have TDPs of 70W and 60W and eight and six cores respectively.

Introduced in the Nehalem-EX platforms, Intel’s Scalable Ring Architecture was designed to reduce the number of physical wires needed to connect the CPU cores and LLC (last-level cache). With Sandy Bridge-EP, Intel has extended this to the memory and PCI Express controllers, and with a 3GHz ring frequency, each component now has a fast 96GB/sec link to the ring.
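As a back-of-the-envelope check on that figure, the sketch below (which assumes GB means 10^9 bytes and that the 96GB/sec is the per-stop link rate at the 3GHz ring clock) works out to a 32-byte transfer per cycle, or half a 64-byte cache line per clock.

```python
# Back-of-the-envelope check on the quoted ring bandwidth.
ring_clock_hz = 3e9              # 3GHz ring frequency
link_bandwidth_bytes_s = 96e9    # 96GB/sec per component link (assumed per ring stop)

bytes_per_cycle = link_bandwidth_bytes_s / ring_clock_hz
print(bytes_per_cycle)           # 32.0 -> half a 64-byte cache line per clock
```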

Intel’s original Turbo Boost technology allowed cores within a CPU to have their speed boosted by powering down other cores; Turbo Boost 2 enables them to go beyond their TDP rating. During idle periods, the CPUs build up a thermal budget, and in times of increased activity this is used to boost performance. Depending on the available thermal budget, this boost period can last up to 25 seconds, with heat build-up monitored closely by integrated sensors.
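The snippet below is a purely illustrative model of that budget idea, not Intel’s algorithm: the wattages and rates are invented for the example, and only the accumulate-while-idle, spend-above-TDP behaviour reflects what’s described above.

```python
# Illustrative thermal-budget model only; all wattages are invented examples.
TDP_W = 95.0                 # hypothetical rated TDP
BOOST_W = 115.0              # hypothetical power draw while boosting above TDP
MAX_BUDGET_J = (BOOST_W - TDP_W) * 25.0   # headroom for roughly 25s above TDP

def step(budget_j: float, draw_w: float, dt_s: float = 1.0) -> float:
    """Accumulate headroom when drawing under TDP, spend it when over."""
    delta = (TDP_W - draw_w) * dt_s       # positive when idle, negative when boosting
    return min(MAX_BUDGET_J, max(0.0, budget_j + delta))

budget = 0.0
for _ in range(60):                       # a minute of near-idle builds up the budget
    budget = step(budget, draw_w=30.0)

boost_seconds = 0
while True:                               # spend the budget running above TDP
    budget = step(budget, draw_w=BOOST_W)
    if budget <= 0.0:
        break
    boost_seconds += 1

print(boost_seconds)                      # 24 -> roughly the 25-second window above
```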

PCI Express implementation has changed, too, with the functions of the 5520 chipset’s IOH (I/O hub) moving onto the CPU die. Socket-R increases the number of PCI Express lanes to 40 per socket, but because the lanes now come from the CPUs themselves, bear in mind that to use all available slots in a 2S Xeon E5 server you must have both sockets populated.

The C600 “Patsburg” chipset incorporates a new PCH (platform controller hub), and to confuse you even further there are four variants. The entry-level C600-A supports four SATA III ports; the basic C600-B ups this to SAS 2 support; the performance C600-D doubles the port count and has a PCI Express 3 CPU uplink; and the premium C600-T upgrade adds RAID5 support for SAS 2 drives.

Power control and capping have been extended, as both CPU and memory power consumption can now be monitored. This requires Intel’s Data Center Manager or its Node Manager API, which can enforce power capping by adjusting CPU and memory power usage and can allocate cores dynamically.
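As a rough illustration of what node-level capping involves, the sketch below shows a simple closed control loop. The read_node_power() and set_cpu_power_limit() functions are hypothetical placeholders, not calls from Intel’s Data Center Manager or Node Manager API.

```python
import time

# Purely illustrative closed-loop capping in the spirit of the node-level power
# management described above. read_node_power() and set_cpu_power_limit() are
# hypothetical placeholders, not Intel Node Manager or Data Center Manager calls.

CAP_W = 250.0     # administrator-defined power cap for the node (example value)
STEP_W = 5.0      # how far to tighten or relax the CPU limit each interval

def read_node_power() -> float:
    """Placeholder: would return measured CPU plus memory power for the node."""
    raise NotImplementedError

def set_cpu_power_limit(limit_w: float) -> None:
    """Placeholder: would push the new limit down to the processor."""
    raise NotImplementedError

def capping_loop(initial_limit_w: float, interval_s: float = 1.0) -> None:
    """Tighten the CPU limit while the node is over its cap, relax it when under."""
    limit = initial_limit_w
    while True:
        measured = read_node_power()
        if measured > CAP_W:
            limit = max(limit - STEP_W, 0.0)   # over budget: clamp power down
        elif measured < CAP_W - 2 * STEP_W:
            limit = limit + STEP_W             # comfortably under: relax the limit
        set_cpu_power_limit(limit)
        time.sleep(interval_s)
```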

Along with mirroring across two pairs of memory banks, you also have SDDC (Single Device Data Correction) and Lockstep advanced memory protection. And then there’s support for LR-DIMMs (load-reduced DIMMs), which are designed to increase capacity. They use a memory buffer chip in place of the register, reducing the electrical load on each channel so more memory can be installed per channel and run at higher speeds.



The Xeon E5 is one of the largest processor families yet, and Intel is supporting it with its biggest ever server platform launch. There’s more to come, too, with one range expected for premium single-socket and entry-level dual-socket servers, and another to support scalable dual- and glue-less quad-socket platforms. We thought the Xeon 5500/5600 series represented the biggest challenge to AMD, but the Xeon E5 will have it quaking in its boots.