Revision History

Date           Revision Number    Modifications
April, 2012    0.5                Initial release.
June, 2012     1.0                Corrected BMC LAN settings.
Disclaimers
Information in this document is provided in connection with Intel® products. No license, express or implied, by
estoppel or otherwise, to any intellectual property rights is granted by this document. Except as provided in Intel®’s
Terms and Conditions of Sale for such products, Intel® assumes no liability whatsoever, and Intel® disclaims any
express or implied warranty, relating to sale and/or use of Intel® products including liability or warranties relating to
fitness for a particular purpose, merchantability, or infringement of any patent, copyright or other intellectual property
right. Intel® products are not intended for use in medical, lifesaving, or life sustaining applications. Intel® may make
changes to specifications and product descriptions at any time, without notice.
Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or
“undefined”. Intel® reserves these for future definition and shall have no responsibility whatsoever for conflicts or
incompatibilities arising from future changes to them.
The Intel® Compute Module MFS2600KI may contain design defects or errors known as errata which may cause the
product to deviate from published specifications. Current characterized errata are available on request.
Intel Corporation server baseboards support peripheral components and contain a number of high-density VLSI and
power delivery components that need adequate airflow to cool. Intel®’s own chassis are designed and tested to meet
the intended thermal requirements of these components when the fully integrated system is used together. It is the
responsibility of the system integrator that chooses not to use Intel® developed server building blocks to consult
vendor datasheets and operating parameters to determine the amount of air flow required for their specific application
and environmental conditions. Intel Corporation cannot be held responsible if components fail or the compute module
does not operate correctly when used outside any of their published operating or non-operating limits.
Intel, Pentium, Itanium, and Xeon are trademarks or registered trademarks of Intel Corporation.
*Other brands and names may be claimed as the property of others.
1. Introduction
This Technical Product Specification (TPS) provides board-specific information detailing the
features, functionality, and high-level architecture of the Intel® Compute Module MFS2600KI.
1.1 Chapter Outline
This document is divided into the following chapters:
Chapter 1 – Introduction
Chapter 2 – Product Overview
Chapter 3 – Functional Architecture
Chapter 4 – System Security
Chapter 5 – Connector/Header Locations and Pin-outs
Chapter 6 – Jumper Block Settings
Chapter 7 – Product Regulatory Requirements
Appendix A – Integration and Usage Tips
Appendix B – POST Code Diagnostic LED Decoder
Appendix C – Post Error Code
Appendix D – Supported Intel® Modular Server System
Glossary
Reference Documents
2. Product Overview
The Intel® Compute Module MFS2600KI is a monolithic printed circuit board with features designed to support the high-density compute module market.

2.1 Intel® Compute Module MFS2600KI Feature Set

Table 1. Intel® Compute Module MFS2600KI Feature Set

Processor
Support for one or two Intel® Xeon® Processor E5-2600 series with up to 95 W Thermal Design Power (TDP).
8.0 GT/s and 6.4 GT/s Intel® QuickPath Interconnect (Intel® QPI).
Enterprise Voltage Regulator-Down (EVRD) 12.0.

Memory
Support for 1066/1333/1600 MT/s ECC registered (RDIMM), unbuffered (UDIMM), and LRDIMM DDR3 memory.
16 DIMMs total across 8 memory channels (4 channels per processor).
Note: Mixed memory is not tested or supported. Non-ECC memory is not tested and is not supported in a server environment.

Chipset
Intel® C602-J chipset.

On-board Connectors/Headers
External connections:
Four USB 2.0 ports
DB-15 video connector
Internal connectors/headers:
One low-profile USB Type-A connector to support low-profile USB solid state drives
One internal 7-pin SATA connector for an embedded SATA flash drive
One eUSB connector for an embedded USB device
Intel® I/O mezzanine connectors supporting the dual Gigabit NIC Intel® I/O Expansion Module (optional)

On-board Video
Integrated Matrox* G200 core, one DB-15 video port (front).

On-board Hard Drive Controller
LSI* 1064e SAS controller.

LAN
Intel® I350 dual 1 GbE network controller.
The following figure shows the board layout of the Intel® Compute Module MFS2600KI. Each
connector and major component is identified by a number or letter. A description of each
identified item is provided below the figure.
Figure 1. Component and Connector Location Diagram
The architecture of the Intel® Compute Module MFS2600KI is developed around the integrated features and functions of the Intel® Xeon® processor E5-2600 product family, the Intel® C602-J chipset, the Intel® Ethernet Controller I350, and the Baseboard Management Controller.
The following diagram provides an overview of the compute module architecture, showing the
features and interconnects of each of the major sub-system components.
The compute module includes two Socket-R (LGA2011) processor sockets and can support one or two processors from the Intel® Xeon® processor E5-2600 product family with a Thermal Design Power (TDP) of up to 95 W.
Each processor socket of the server board is pre-assembled with an Independent Loading Mechanism (ILM) and back plate, which allow for secure placement of the processor and processor heat sink to the server board.
The illustration below identifies each sub-assembly component.
Figure 4. Processor Socket Assembly
3.1.1.2 Processor Population Rules
Note: Although the Compute Module does support dual-processor configurations consisting of different processors that meet the defined criteria below, Intel® does not perform validation testing of this configuration. For optimal performance in dual-processor configurations, Intel® recommends that identical processors be installed.
When using a single processor configuration, the processor must be installed into the processor
socket labeled CPU1.
When two processors are installed, the following population rules apply:
Both processors must be of the same processor family.
Both processors must have the same number of cores.
Both processors must have the same cache sizes for all levels of processor cache
memory.
Processors with different core frequencies can be mixed in a system, provided the prior rules are met. If this condition is detected, all processor core frequencies are set to the lowest common denominator (the highest frequency common to all installed processors) and an error is reported.
Processors with different Intel® QuickPath Interconnect (QPI) link frequencies may operate together if they are otherwise compatible and if a common link frequency can be selected. The common link frequency would be the highest link frequency that all installed processors can achieve.
Processor stepping within a common processor family can be mixed as long as it is
listed in the processor specification updates published by Intel Corporation.
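The checks and common-operating-point selection described above can be sketched in a few lines of Python. This is purely illustrative; the Cpu fields and the check_population helper are hypothetical and do not correspond to any Intel tool or firmware interface.

from dataclasses import dataclass
from typing import List

@dataclass
class Cpu:
    family: str               # for example, "E5-2600"
    cores: int
    cache_kb: tuple           # cache sizes for all levels, per the rules above
    core_freq_mhz: int        # maximum core frequency of the part
    qpi_freqs_gts: frozenset  # QPI link frequencies the part can run, e.g. {6.4, 8.0}

def check_population(cpus: List[Cpu]) -> dict:
    """Apply the dual-processor population rules listed above."""
    if len(cpus) == 1:
        return {"ok": True, "note": "single processor must be installed in socket CPU1"}
    a, b = cpus
    if a.family != b.family:
        return {"ok": False, "error": "processor family mismatch (fatal)"}
    if a.cores != b.cores:
        return {"ok": False, "error": "core count mismatch (fatal)"}
    if a.cache_kb != b.cache_kb:
        return {"ok": False, "error": "cache size mismatch (fatal)"}
    # Different core frequencies are allowed: all cores run at the highest
    # speed common to both parts (the lower of the two maximum frequencies).
    common_core = min(a.core_freq_mhz, b.core_freq_mhz)
    # QPI links run at the highest link frequency both processors can achieve.
    common_links = a.qpi_freqs_gts & b.qpi_freqs_gts
    if not common_links:
        return {"ok": False, "error": "QPI link frequencies unable to synchronize (fatal)"}
    return {"ok": True, "core_freq_mhz": common_core, "qpi_gts": max(common_links)}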
3.1.2 Processor Initialization Error Summary
The following table describes mixed processor conditions and recommended actions for the MFS2600KI, which is designed around the Intel® Xeon® processor E5-2600 product family and Intel® C602-J chipset architecture. The errors fall into one of the following categories:
Fatal: If the system can boot, it pauses at a blank screen with the text “Unrecoverable
fatal error found. System will not boot until the error is resolved” and “Press <F2>
to enter setup”, regardless of whether the “Post Error Pause” setup option is enabled or
disabled.
When the operator presses the <F2> key on the keyboard, the error message is
displayed on the Error Manager screen, and an error is logged to the System Event Log
(SEL) with the POST Error Code.
The system cannot boot unless the error is resolved. The user needs to replace the
faulty part and restart the system.
For Fatal Errors during processor initialization, the System Status LED will be set to a
steady Amber color, indicating an unrecoverable system failure condition.
Major: If the “Post Error Pause” setup option is enabled, the system goes directly to the
Error Manager to display the error, and logs the POST Error Code to SEL. Operator
intervention is required to continue booting the system.
Otherwise, if “POST Error Pause” is disabled, the system continues to boot and no
prompt is given for the error, although the Post Error Code is logged to the Error
Manager and in a SEL message.
Minor: The message is displayed on the screen or on the Error Manager screen, and
the POST Error Code is logged to the SEL. The system continues booting in a degraded
state. The user may want to replace the erroneous unit. The POST Error Pause option
setting in the BIOS setup does not have any effect on this error.
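The practical difference between the three categories is whether the system continues to boot and whether the "POST Error Pause" setup option has any effect. The following Python sketch summarizes that decision logic; the function name and return strings are hypothetical and only restate the behavior described above.

def handle_post_error(severity: str, post_error_pause_enabled: bool) -> str:
    """Summarize the Fatal/Major/Minor handling described above.

    In every case the POST Error Code is logged to the System Event Log (SEL).
    """
    if severity == "fatal":
        # Halts regardless of the POST Error Pause setting; System Status LED steady amber.
        return "halt at error screen; system will not boot until the fault is resolved"
    if severity == "major":
        if post_error_pause_enabled:
            return "pause at the Error Manager; operator intervention required to continue"
        return "continue booting without a prompt; error logged"
    if severity == "minor":
        # The POST Error Pause option has no effect on minor errors.
        return "continue booting in a degraded state"
    raise ValueError(f"unknown severity: {severity}")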
Processor family not identical
Fatal
The BIOS detects the error condition and responds as follows:
Logs the POST Error Code into the System Event Log (SEL).
Alerts the BMC to set the System Status LED to steady Amber.
Displays the "0194: Processor family mismatch detected" message in the Error Manager.
Takes Fatal Error action (see above) and will not boot until the fault condition is remedied.

Processor model not identical
Fatal
The BIOS detects the error condition and responds as follows:
Logs the POST Error Code into the System Event Log (SEL).
Alerts the BMC to set the System Status LED to steady Amber.
Displays the "0196: Processor model mismatch detected" message in the Error Manager.
Takes Fatal Error action (see above) and will not boot until the fault condition is remedied.

Processor cores/threads not identical
Fatal
The BIOS detects the error condition and responds as follows:
Logs the POST Error Code into the SEL.
Alerts the BMC to set the System Status LED to steady Amber.
Displays the "0191: Processor core/thread count mismatch detected" message in the Error Manager.
Takes Fatal Error action (see above) and will not boot until the fault condition is remedied.

Processor cache not identical
Fatal
The BIOS detects the error condition and responds as follows:
Logs the POST Error Code into the SEL.
Alerts the BMC to set the System Status LED to steady Amber.
Displays the "0192: Processor cache size mismatch detected" message in the Error Manager.
Takes Fatal Error action (see above) and will not boot until the fault condition is remedied.

Processor frequency (speed) not identical
Fatal
The BIOS detects the processor frequency difference and responds as follows:
Adjusts all processor frequencies to the highest common frequency.
No error is generated; this is not an error condition.
Continues to boot the system successfully.
If the frequencies for all processors cannot be adjusted to be the same, then this is an error, and the BIOS responds as follows:
Logs the POST Error Code into the SEL.
Alerts the BMC to set the System Status LED to steady Amber.
Does not disable the processor.
Displays the "0197: Processor speeds unable to synchronize" message in the Error Manager.
Takes Fatal Error action (see above) and will not boot until the fault condition is remedied.

Processor Intel® QuickPath Interconnect link frequencies not identical
Fatal
The BIOS detects the QPI link frequencies and responds as follows:
Adjusts all QPI interconnect link frequencies to the highest common frequency.
No error is generated; this is not an error condition.
Continues to boot the system successfully.
If the link frequencies for all QPI links cannot be adjusted to be the same, then this is an error, and the BIOS responds as follows:
Logs the POST Error Code into the SEL.
Alerts the BMC to set the System Status LED to steady Amber.
Displays the "0195: Processor Intel® QPI link frequencies unable to synchronize" message in the Error Manager.
Does not disable the processor.
Takes Fatal Error action (see above) and will not boot until the fault condition is remedied.
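For quick reference, the POST error codes listed in the table above can be collected into a small lookup structure. This is only a restatement of the table contents, not an Intel-defined data structure.

# POST error codes from the mixed-processor table above; all rows are Fatal.
PROCESSOR_MISMATCH_CODES = {
    "0191": "Processor core/thread count mismatch detected",
    "0192": "Processor cache size mismatch detected",
    "0194": "Processor family mismatch detected",
    "0195": "Processor Intel QPI link frequencies unable to synchronize",
    "0196": "Processor model mismatch detected",
    "0197": "Processor speeds unable to synchronize",
}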
3.2 Processor Functions Overview
With the release of the Intel® Xeon® processor E5-2600 product family, several key system components, including the CPU, Integrated Memory Controller (IMC), and Integrated I/O Module (IIO), have been combined into a single processor package. Each socket provides two Intel® QuickPath Interconnect point-to-point links capable of up to 8.0 GT/s, up to 40 lanes of PCI Express* Gen 3 links capable of 8.0 GT/s, and four lanes of DMI2/PCI Express* Gen 2 interface with a peak transfer rate of 5.0 GT/s. The processor supports up to 46 bits of physical address space and 48 bits of virtual address space.
The following sections provide an overview of the key processor features and functions that help to define the architecture, performance, and supported functionality of the server board. For more comprehensive processor-specific information, refer to the Intel® Xeon® processor E5-2600 product family documents listed in the Reference Documents list in Chapter 1.
Processor Core Features:
Up to 8 execution cores
Each core supports two threads (Intel® Hyper-Threading Technology), up to 16 threads per socket
46-bit physical addressing and 48-bit virtual addressing
1 GB large page support for server applications
A 32-KB instruction and 32-KB data first-level cache (L1) for each core
A 256-KB shared instruction/data mid-level (L2) cache for each core
Up to 20 MB last level cache (LLC): up to 2.5 MB instruction/data last level cache per core
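The per-socket figures above combine as shown in the quick arithmetic below; it only restates numbers already listed (cores, threads, cache slice size, address widths) and is not additional specification data.

cores_per_socket = 8
threads_per_core = 2                                        # Intel Hyper-Threading Technology
threads_per_socket = cores_per_socket * threads_per_core    # 16 threads per socket
llc_mb = cores_per_socket * 2.5                             # 2.5 MB LLC slice per core -> 20 MB
physical_addr_tib = 2**46 // 2**40                          # 46-bit physical addressing -> 64 TiB
virtual_addr_tib = 2**48 // 2**40                           # 48-bit virtual addressing -> 256 TiB
print(threads_per_socket, llc_mb, physical_addr_tib, virtual_addr_tib)   # 16 20.0 64 256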
3.2.1 Intel® QuickPath Interconnect
The Intel® QuickPath Interconnect (Intel® QPI) is a high speed, packetized, point-to-point interconnect used in the processor. The narrow high-speed links stitch together processors in a distributed shared memory and integrated I/O platform architecture. It offers much higher bandwidth with low latency. The Intel® QuickPath Interconnect has an efficient architecture, allowing more interconnect performance to be achieved in real systems. It has a snoop protocol optimized for low latency and high scalability, as well as packet and lane structures enabling quick completions of transactions. Reliability, availability, and serviceability (RAS) features are built into the architecture.
The physical connectivity of each interconnect link is made up of twenty differential signal pairs
plus a differential forwarded clock. Each port supports a link pair consisting of two uni-directional
links to complete the connection between two components. This supports traffic in both
directions simultaneously. To facilitate flexibility and longevity, the interconnect is defined as
having five layers: Physical, Link, Routing, Transport, and Protocol.
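As a rough bandwidth check for the physical link described above, the commonly quoted figure of 2 bytes of data payload per transfer per direction for Intel® QPI can be used; that payload figure is an assumption here, not something stated in this document.

transfer_rate_gts = 8.0            # GT/s, the fastest QPI rate supported by this board
payload_bytes_per_transfer = 2     # assumption: 2 bytes of data per transfer per direction
per_direction_gb_s = transfer_rate_gts * payload_bytes_per_transfer   # 16.0 GB/s
per_link_pair_gb_s = 2 * per_direction_gb_s                           # 32.0 GB/s, both directions
print(per_direction_gb_s, per_link_pair_gb_s)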
The Intel® QuickPath Interconnect includes a cache coherency protocol to keep the distributed
memory and caching structures coherent during system operation. It supports both low-latency
source snooping and a scalable home snoop behavior. The coherency protocol provides for
direct cache-to-cache transfers for optimal latency.
3.2.2 Intel® Hyper-Threading Technology
Most Intel® Xeon® processors support Intel® Hyper-Threading Technology. The BIOS detects
processors that support this feature and enables the feature during POST.
If the processor supports this feature, the BIOS Setup provides an option to enable or disable
this feature. The default is enabled.
3.3 Processor Integrated I/O Module (IIO)
The processor’s integrated I/O module provides features traditionally supported through chipset
components. The integrated I/O module provides the following features:
3.3.1 PCI Express Interfaces
The integrated I/O module incorporates the PCI Express interface and supports up to 40 lanes
of PCI Express. The following tables list the CPU PCIe port connectivity of the Intel® Compute
Module MFS2600KI.
Table 3. Intel® Compute Module MFS2600KI PCIe Bus Segment Characteristics
3.3.2 DMI2 Interface to the PCH
The platform requires an interface to the legacy Southbridge (PCH), which provides basic legacy functions required for the server platform and operating systems. Since only one PCH is required and allowed in the system, CPU2, which does not connect to the PCH, uses this port as a standard x4 PCI Express* 2.0 interface.
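Treating that port as a four-lane PCI Express* Gen 2 link, the raw throughput works out as follows. The 8b/10b encoding overhead is the standard Gen 2 line coding and is assumed here; the calculation is illustrative only.

lanes = 4
raw_gt_s_per_lane = 5.0          # PCI Express Gen 2 signaling rate per lane
encoding_efficiency = 8 / 10     # 8b/10b line encoding
bytes_per_second = lanes * raw_gt_s_per_lane * 1e9 * encoding_efficiency / 8
print(bytes_per_second / 1e9)    # 2.0 GB/s per direction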
3.3.3 Integrated IOAPIC
Provides support for PCI Express devices implementing legacy interrupt messages without
interrupt sharing.
3.3.4 Intel® QuickData Technology
Used for efficient, high bandwidth data movement between two locations in memory or from
memory to I/O.
3.4 Memory Subsystem
3.4.1 Integrated Memory Controller (IMC) and Memory Subsystem
Figure 5. Integrated Memory Controller (IMC) and Memory Subsystem
Integrated into the processor is a memory controller. Each processor provides four DDR3
channels that support the following:
Unbuffered DDR3 and registered DDR3 DIMMs
LRDIMM (Load Reduced DIMM) for buffered memory solutions demanding higher capacity
Independent channel mode or lockstep mode
Data burst length of eight cycles for all memory organization modes
Memory DDR3 data transfer rates of 800, 1066, 1333, and 1600 MT/s
64-bit wide channels plus 8-bits of ECC support for each channel
DDR3 standard I/O Voltage of 1.5 V and DDR3 Low Voltage of 1.35 V
1-Gb, 2-Gb, and 4-Gb DDR3 DRAM technologies supported for these devices:
o UDIMM DDR3 – SR x8 and x16 data widths, DR – x8 data width
o RDIMM DDR3 – SR,DR, and QR – x4 and x8 data widths
o LRDIMM DDR3 – QR – x4 and x8 data widths with direct map or with rank
multiplication
Up to eight ranks supported per memory channel, 1, 2 or 4 ranks per DIMM
Open page policy with adaptive idle page close timer or closed page policy
Per channel memory test and initialization engine can initialize DRAM to all logical zeros
with valid ECC (with or without data scrambler) or a predefined test pattern
Isochronous access support for Quality of Service (QoS)
Minimum memory configuration: independent channel support with 1 DIMM populated
Integrated dual SMBus* master controllers
Command launch modes of 1n/2n
RAS Support:
o Rank Level Sparing and Device Tagging
o Demand and Patrol Scrubbing
o DRAM Single Device Data Correction (SDDC) for any single x4 or x8 DRAM
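The channel parameters listed above imply the following peak memory bandwidth figures. This is simple arithmetic on the listed numbers (64-bit channels, up to 1600 MT/s, four channels per processor), not a separately published specification.

data_width_bytes = 8               # 64-bit data per channel (the 8 ECC bits carry no payload)
transfer_rate_mts = 1600           # fastest supported DDR3 transfer rate
channels_per_cpu = 4
per_channel_gb_s = transfer_rate_mts * 1e6 * data_width_bytes / 1e9   # 12.8 GB/s
per_cpu_gb_s = per_channel_gb_s * channels_per_cpu                    # 51.2 GB/s
two_cpu_gb_s = per_cpu_gb_s * 2                                       # 102.4 GB/s with both sockets populated
print(per_channel_gb_s, per_cpu_gb_s, two_cpu_gb_s)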
Table 4. UDIMM Support Guidelines (Preliminary. Subject to Change)

Columns: ranks per DIMM and data width; DIMM capacity at 1Gb / 2Gb / 4Gb DRAM density; supported speeds (MT/s) by Slot per Channel (SPC), DIMM per Channel (DPC), and DIMM voltage.

Ranks/Width   | Capacity (1Gb/2Gb/4Gb) | 1 SPC, 1DPC, 1.35V | 1 SPC, 1DPC, 1.5V | 2 SPC, 1DPC, 1.35V | 2 SPC, 1DPC, 1.5V | 2 SPC, 2DPC, 1.35V | 2 SPC, 2DPC, 1.5V
SRx8 Non-ECC  | 1GB / 2GB / 4GB        | n/a                | 1066, 1333, 1600  | n/a                | 1066, 1333        | n/a                | 1066, 1333
DRx8 Non-ECC  | 2GB / 4GB / 8GB        | n/a                | 1066, 1333, 1600  | n/a                | 1066, 1333        | n/a                | 1066, 1333
SRx16 Non-ECC | 512MB / 1GB / 2GB      | n/a                | 1066, 1333, 1600  | n/a                | 1066, 1333        | n/a                | 1066, 1333
SRx8 ECC      | 1GB / 2GB / 4GB        | 1066, 1333         | 1066, 1333, 1600  | 1066               | 1066, 1333        | 1066               | 1066, 1333
DRx8 ECC      | 2GB / 4GB / 8GB        | 1066, 1333         | 1066, 1333, 1600  | 1066               | 1066, 1333        | 1066               | 1066, 1333

Legend (color-coded in the source table): Supported and Validated; Supported but not Validated.

Notes:
1. Supported DRAM densities are 1Gb, 2Gb, and 4Gb. Only 2Gb and 4Gb are validated by Intel®.
2. Command Address Timing is 1N for 1DPC and 2N for 2DPC.
The silk-screened DIMM slot identifiers on the board provide information about the channel, and therefore the processor, to which they belong. For example, DIMM_A1 is the first slot on Channel A of processor 1; DIMM_E1 is the first DIMM socket on Channel E of processor 2 (see the naming sketch after this list).
The memory slots associated with a given processor are unavailable if the
corresponding processor socket is not populated.
A processor may be installed without populating the associated memory slots, provided a second processor is installed with associated memory. In this case, the memory is shared by the processors. However, the platform suffers performance degradation and latency due to remote memory access.
Processor sockets are self-contained and autonomous. However, all memory subsystem support (such as memory RAS and error management) in the BIOS Setup is applied commonly across processor sockets.
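The slot-naming convention above can be made explicit with a small helper. It assumes that channels A through D belong to processor 1 and channels E through H to processor 2, which is consistent with the DIMM_A1/DIMM_E1 examples above but is stated here as an assumption rather than taken from this document.

def parse_dimm_slot(label: str) -> dict:
    """Decode a silk-screened DIMM slot identifier such as 'DIMM_E1'."""
    _, suffix = label.split("_")               # for example, 'DIMM', 'E1'
    channel, slot = suffix[0], int(suffix[1:])
    # Assumption: channels A-D are on processor 1, channels E-H on processor 2.
    processor = 1 if channel in "ABCD" else 2
    return {"channel": channel, "slot": slot, "processor": processor}

print(parse_dimm_slot("DIMM_A1"))   # {'channel': 'A', 'slot': 1, 'processor': 1}
print(parse_dimm_slot("DIMM_E1"))   # {'channel': 'E', 'slot': 1, 'processor': 2}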
For a complete list of supported memory for the Intel® Compute Module MFS2600KI, refer to the
Tested Memory List published in the Intel® Server Configurator Tool.