Revision History
Date            Revision Number  Modifications
July 2007       0.95             Initial release.
August 2007     0.96             Updated.
September 2007  1.0              Updated.
February 2008   1.1              Updated.
November 2008   1.2              Updated.
May 2009        1.3              Updated.
June 2009       1.4              Updated supported memory configurations.
Disclaimers
Information in this document is provided in connection with Intel® products. No license, express or
implied, by estoppel or otherwise, to any intellectual property rights is granted by this document. Except
as provided in Intel's Terms and Conditions of Sale for such products, Intel assumes no liability
whatsoever, and Intel disclaims any express or implied warranty, relating to sale and/or use of Intel
products including liability or warranties relating to fitness for a particular purpose, merchantability, or
infringement of any patent, copyright or other intellectual property right. Intel products are not intended for
use in medical, life saving, or life sustaining applications. Intel may make changes to specifications and
product descriptions at any time, without notice.
Designers must not rely on the absence or characteristics of any features or instructions marked
"reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility
whatsoever for conflicts or incompatibilities arising from future changes to them.
The Intel® Compute Module MFS5000SI may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
Intel Corporation server baseboards support peripheral components and contain a number of high-density VLSI and power delivery components that need adequate airflow to cool. Intel's own chassis are designed and tested to meet the intended thermal requirements of these components when the fully integrated system is used together. It is the responsibility of the system integrator who chooses not to use Intel-developed server building blocks to consult vendor datasheets and operating parameters to determine the amount of airflow required for their specific application and environmental conditions. Intel Corporation cannot be held responsible if components fail or the compute module does not operate correctly when used outside any of their published operating or non-operating limits.
Intel, Pentium, Itanium, and Xeon are trademarks or registered trademarks of Intel Corporation.
*Other brands and names may be claimed as the property of others.
1. Introduction

This Technical Product Specification (TPS) provides board-specific information detailing the features, functionality, and high-level architecture of the Intel® Compute Module MFS5000SI. The Intel® 5000 Series Chipsets Server Board Family Datasheet should also be referenced for more in-depth detail of various board subsystems, including chipset, BIOS, System Management, and System Management software.
1.1 Chapter Outline
This document is divided into the following chapters:
Chapter 1 – Introduction
Chapter 2 – Product Overview
Chapter 3 – Functional Architecture
Chapter 4 – Connector / Header Locations and Pin-outs
Chapter 5 – Jumper Block Settings
Chapter 6 – Product Regulatory Requirements
Appendix A – Integration and Usage Tips
Appendix B – BMC Sensor Tables
Appendix C – Post Error Messages and Handling
Appendix D – Supported Intel® Modular Server System

1.2 Intel® Compute Module Use Disclaimer
Intel® Modular Server components require adequate airflow to cool. Intel ensures through its own chassis
development and testing that when these components are used together, the fully integrated system will
meet the intended thermal requirements. It is the responsibility of the system integrator who chooses not
to use Intel-developed server building blocks to consult vendor datasheets and operating parameters to
determine the amount of airflow required for their specific application and environmental conditions. Intel
Corporation cannot be held responsible if components fail or the system does not operate correctly when
used outside any of their published operating or non-operating limits.
2. Product Overview

The Intel® Compute Module MFS5000SI is a monolithic printed circuit board with features designed to support the high-density compute module market.
2.1 Intel® Compute Module MFS5000SI Feature Set

Processors: 771-pin LGA sockets supporting one or two Dual-Core or Quad-Core Intel® Xeon® processors.
The following figure shows the board layout of the Intel® Compute Module MFS5000SI. Each connector
and major component is identified by a number or letter. A description of each identified item is provided
below the figure.
Item  Description                     Item  Description
A Midplane Power Connector B Midplane Signal Connector
C POST Code Diagnostic LEDs D SAS Controller
E FBDIMM Slots F Intel® 5000P Memory Controller Hub (MCH)
G CPU #1 Socket H Voltage Regulator Heatsink
I Power/Fault LEDs J Power Button
K Activity and ID LEDs L Video Connector
M USB1 and USB2 Connectors N CPU #2 Socket
O Intel® 6321ESB I/O Controller Hub P CMOS Battery
Q I/O Mezzanine Card Connector
Figure 1. Component and Connector Location Diagram
3. Functional Architecture

The architecture and design of the Intel® Compute Module MFS5000SI is based on the Intel® 5000 Chipset Family. The chipset is designed for systems based on the Dual-Core and Quad-Core Intel® Xeon® processor 5000 sequence with system bus speeds of 667 MHz, 1066 MHz, and 1333 MHz. The chipset is made up of two main components: the Memory Controller Hub (MCH) for the host bridge and the Intel® 6321ESB I/O Controller Hub for the I/O subsystem. This chapter provides a high-level description of the functionality associated with each chipset component and the architectural blocks that make up the server board. For more in-depth detail of the functionality of each of the chipset components and each of the functional architecture blocks, see the Intel® 5000 Series Chipsets Server Board Family Datasheet.
Figure 4. Compute Module Functional Block Diagram

Note: The previous diagram uses the Intel® 5000P MCH as a general reference designator for the MCH.

3.1 Memory Controller Hub (MCH)

This section describes the general functionality of the memory controller hub as it is implemented on this server board.
The MCH is a single 1432-pin FCBGA package, which includes the following core platform functions:
System Bus Interface for the processor subsystem
Memory Controller
PCI Express* Ports, including the Enterprise South Bridge Interface (ESI)
FBD Thermal Management
SMBus Interface
Additional information about MCH functionality can be obtained from the Intel® 5000 Series Chipsets Server Board Family Datasheet and the Intel® 5000P Memory Controller Hub External Design Specification.
3.1.1 System Bus Interface
The MCH is configured for symmetric multiprocessing across two independent front-side bus interfaces that connect to the Dual-Core and Quad-Core Intel® Xeon® processors 5000 sequence. Each front-side bus on the MCH uses a 64-bit wide 1066 MHz or 1333 MHz data bus. The 1333-MHz data bus is capable of transferring data at up to 10.66 GB/s. The MCH supports a 36-bit wide address bus, capable of addressing up to 64 GB of memory. The MCH is the priority agent for both front-side bus interfaces and is optimized for one processor on each bus.
3.1.2 Processor Support

The Intel® Compute Module MFS5000SI supports one or two Dual-Core Intel® Xeon® processors 5100 sequence or Quad-Core Intel® Xeon® processors 5300 and 5400 sequence with system bus speeds of 1066 MHz and 1333 MHz. Previous generations of the Intel® Xeon® processor are not supported on the Intel® Compute Module MFS5000SI. To see a list of the latest processors that have been validated on this product, refer to http://support.intel.com/support/motherboards/server/MFS5000SI/ and select the Supported Processors List.
3.1.2.1 Processor Population Rules
When two processors are installed, both must be of identical revision, core voltage, and bus/core speed.
Mixed processor steppings are supported in N and N-1 configurations only. When only one processor is
installed, it must be in the socket labeled CPU1. The other socket must be empty.
The board is designed to provide up to 115 A of current per processor. Processors with higher current
requirements are not supported.
When using a single processor configuration, a terminator is not required in the second processor socket.
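The population rules above can be summarized in a small validation sketch. The following Python fragment is illustrative only; the field names are hypothetical, and it assumes steppings are expressed as consecutive integers so that the N and N-1 rule becomes a difference of at most one:

from dataclasses import dataclass

@dataclass
class Cpu:
    stepping: int         # consecutive integer per stepping (assumption)
    core_voltage: float   # volts
    bus_speed_mhz: int
    core_speed_ghz: float

def validate_population(cpu1, cpu2=None):
    """Check the dual-processor population rules described above."""
    if cpu1 is None:
        raise ValueError("the socket labeled CPU1 must always be populated")
    if cpu2 is None:
        return True  # single-CPU config: second socket stays empty, no terminator
    identical = (cpu1.core_voltage == cpu2.core_voltage
                 and cpu1.bus_speed_mhz == cpu2.bus_speed_mhz
                 and cpu1.core_speed_ghz == cpu2.core_speed_ghz)
    # Mixed steppings are allowed only in N / N-1 combinations.
    return identical and abs(cpu1.stepping - cpu2.stepping) <= 1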
The compute module complies with Intel’s Common Enabling Kit (CEK) processor mounting and heatsink
retention solution. The compute module ships with a CEK spring snapped onto the underside of the
server board, beneath each processor socket. The heatsink attaches to the CEK, over the top of the
processor and the thermal interface material (TIM). For the stacking order of the chassis, CEK spring,
server board, TIM, and heatsink, see the following figure.
The CEK spring is removable, allowing for the use of non-Intel heatsink retention solutions.
Note: The processor heatsink and CEK spring shown in the following diagram are for reference purposes
only. The actual processor heatsink and CEK solutions compatible with this generation server board may
be of a different design.
Heatsink assembly
Thermal interface material (TIM)
Server board
CEK spring
Chassis
Figure 5. CEK Processor Mounting
3.1.3 Memory Subsystem

The MCH masters four fully buffered DIMM (FBD) memory channels. FBD memory utilizes a narrow high-speed frame-oriented interface referred to as a channel. The four FBD channels are organized into two branches of two channels per branch. Each branch is supported by a separate memory controller. The two channels on each branch operate in lock step to increase FBD bandwidth. On the server board, the four channels are routed to eight DIMM slots and are capable of supporting registered DDR2-533 and DDR2-667 FBDIMM memory (stacked or unstacked). Peak theoretical memory data bandwidth is 6.4 GB/s with DDR2-533 and 8.0 GB/s with DDR2-667.
On the Intel® Compute Module MFS5000SI, a pair of channels becomes a branch: Branch 0 consists of channels A and B, and Branch 1 consists of channels C and D. FBD memory channels are organized into two branches for RAID 1 (mirroring) support.
To boot the system, the system BIOS on the server board uses a dedicated I2C bus to retrieve the DIMM information needed to program the MCH memory registers. The following table provides the I2C addresses for each DIMM slot.

Table 1. I2C Addresses for Memory Module SMB
Device Address
DIMM A1 0xA0
DIMM A2 0xA2
DIMM B1 0xA0
DIMM B2 0xA2
DIMM C1 0xA0
DIMM C2 0xA2
DIMM D1 0xA0
DIMM D2 0xA2
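For illustration, Table 1 can be encoded as a lookup keyed by channel and slot. Because the addresses repeat across channels, a DIMM cannot be identified by its SPD address alone; this sketch simply restates the table and is not part of the BIOS:

# Encoding of Table 1: SPD EEPROM I2C addresses per DIMM slot.
DIMM_SMB_ADDRESS = {
    ("A", 1): 0xA0, ("A", 2): 0xA2,
    ("B", 1): 0xA0, ("B", 2): 0xA2,
    ("C", 1): 0xA0, ("C", 2): 0xA2,
    ("D", 1): 0xA0, ("D", 2): 0xA2,
}

def spd_address(channel, slot):
    """Return the 8-bit I2C address of a DIMM's SPD EEPROM."""
    return DIMM_SMB_ADDRESS[(channel.upper(), slot)]

assert spd_address("c", 2) == 0xA2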
3.1.3.1 Memory RASUM Features

The MCH supports several memory RASUM (Reliability, Availability, Serviceability, Usability, and Manageability) features. These features include Intel® x4 Single Device Data Correction (Intel® x4 SDDC) for memory error detection and correction, Memory Scrubbing, Retry on Correctable Errors, Memory Built-In Self Test, DIMM Sparing, and Memory Mirroring (1). For more information regarding these features, see the Intel® 5000 Series Chipsets Server Board Family Datasheet.

(1) DIMM Sparing and Memory Mirroring features will be made available post production launch with a BIOS update.
3.1.3.2 Supported and Nonsupported Memory Configurations
The server board design supports up to eight DDR2-533 or DDR2-667 Fully Buffered DIMMs (FBD
memory). Use of identical DIMMs with this server board is recommended. The following tables show the
maximum memory configurations supported using the specified memory technology.
Table 2. Maximum 8-DIMM System Memory Configuration – x8 Single Rank

Table 3. Maximum 8-DIMM System Memory Configuration – x4 Dual Rank

DRAM Technology  Maximum Capacity Mirrored Mode  Maximum Capacity Non-Mirrored Mode
256 Mb           4 GB                            8 GB
512 Mb           8 GB                            16 GB
1024 Mb          16 GB                           32 GB
2048 Mb          16 GB                           32 GB

The following configurations are not validated or supported with the Intel® Compute Module MFS5000SI:
DDR2 DIMMs that are not fully buffered are NOT supported on this server board.
DDR2-533 memory is not planned to be validated on this product.
Mixing memory type, size, speed, and/or rank is not validated and is not supported.
Mixing memory vendors is not validated and is not supported.
Non-ECC memory is not validated and is not supported in a server environment.

For a complete list of supported memory for the Intel® Compute Module MFS5000SI, refer to the Tested Memory List published in the Intel® Server Configurator Tool.
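The Table 3 capacities follow from the device density: each rank of an ECC FBDIMM provides 64 bits of data width, so an x4 dual-rank DIMM uses 16 data devices per rank. The sketch below (illustrative only; it does not model the 32-GB platform ceiling that flattens the 2048-Mb row) reproduces the arithmetic:

def fbdimm_capacity_gb(density_mbit, data_width=4, ranks=2):
    """Data capacity of one ECC FBDIMM; ECC devices carry no data capacity."""
    devices_per_rank = 64 // data_width
    bits = density_mbit * 2**20 * devices_per_rank * ranks
    return bits / 8 / 2**30

def system_capacity_gb(density_mbit, dimms=8, mirrored=False):
    total = fbdimm_capacity_gb(density_mbit) * dimms
    return total / 2 if mirrored else total  # mirroring halves usable capacity

print(system_capacity_gb(512))                 # 16.0 GB non-mirrored
print(system_capacity_gb(512, mirrored=True))  # 8.0 GB mirrored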
3.1.3.3 DIMM Population Rules and Supported DIMM Configurations
DIMM population rules depend on the operating mode of the memory controller, which is determined by
the number of DIMMs installed. DIMMs must be populated in pairs. DIMM pairs are populated in the
following DIMM slot order: A1 and B1, C1 and D1, A2 and B2, C2 and D2. DIMMs within a given pair
must be identical with respect to size, speed, and organization.
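A minimal sketch of this pairing order (illustrative only; slot names follow the text above):

PAIR_ORDER = [("A1", "B1"), ("C1", "D1"), ("A2", "B2"), ("C2", "D2")]

def next_slots(populated):
    """Return the next DIMM pair to populate, or None if all eight slots are full."""
    for pair in PAIR_ORDER:
        if not all(slot in populated for slot in pair):
            return pair
    return None

print(next_slots({"A1", "B1"}))  # ('C1', 'D1')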
Intel supported DIMM configurations for this server board are shown in the following table.
Supported and validated configuration: Slot is populated
Supported but not validated configuration: Slot is populated
Slot is not populated
Mirroring: Y = Yes and indicates that configuration supports Memory Mirroring.
Sparing: Y(x) = Yes and indicates that configuration supports Memory Sparing.
Where x = 0: Sparing supported on Branch0 only
1: Sparing supported on Branch1 only
0,1: Sparing supported on both branches
Single channel mode is only tested and supported with a 512-MB x8 FBDIMM installed in DIMM
slot A1.
The supported memory configurations must meet population rules defined above.
For best performance, the number of DIMMs installed should be balanced across both memory
branches. For example, a four-DIMM configuration will perform better than a two-DIMM
configuration and should be installed in DIMM slots A1, B1, C1, and D1. An eight-DIMM
configuration will perform better than a six-DIMM configuration.
Although mixed DIMM capacities (size, type, timing, and/or rank) between channels are supported by the memory controller, mixed DIMM configurations are not validated or supported with the Intel® Compute Module MFS5000SI. Refer to section 3.1.3.2 for supported and nonsupported DIMM configuration information.
3.1.3.3.1 Minimum Non-Mirrored Mode Configuration
The server board is capable of supporting a minimum of one DIMM installed. However, for system
performance reasons, Intel’s recommendation is that at least 2 DIMMs be installed.
The following diagram shows the recommended minimum DIMM memory configuration. Populated DIMM slots are shown in grey.

Note: The server board supports single DIMM mode operation. Intel only validates and supports this configuration with a single 512-MB x8 FBDIMM installed in DIMM slot A1.
3.1.3.4 Non-mirrored Mode Memory Upgrades
The minimum memory upgrade increment is two DIMMs per branch. The DIMMs must cover the same
slot position on both channels. DIMM pairs must be identical with respect to size, speed, and
organization.
When adding two DIMMs to the configuration shown in Figure 7, the DIMMs should be populated in
DIMM slots C1 and D1 as shown in the following diagram. Populated DIMM slots are shown in Grey.
Functionally, DIMM slots A2 and B2 could also have been populated instead of DIMM slots C1 and D1.
However, your system will not achieve equivalent performance. Figure 8 shows the supported DIMM
configuration that is recommended because it allows both memory branches from the MCH to operate
independently and simultaneously. FBD bandwidth is doubled when both branches operate in parallel.
3.1.3.4.1 Mirrored Mode Memory Configuration
When operating in the mirrored mode, both branches operate in lock step. In mirrored mode, branch 1
contains a replicate copy of the data in branch 0. The minimum DIMM configuration to support memory
mirroring is four DIMMs, populated as shown in Figure 8. All four DIMMs must be identical with respect to
size, speed, and organization.
To upgrade a four-DIMM mirrored memory configuration, four additional DIMMs must be added to the
system. All four DIMMs in the second set must be identical to the first.
3.1.3.4.2 DIMM Sparing Mode Memory Configuration
The MCH provides DIMM sparing capabilities. Sparing is a RAS feature that involves configuring a DIMM
to be placed in reserve so it can be used to replace a DIMM that fails. DIMM sparing occurs within a
given bank of memory and is not supported across branches.
Two Memory Sparing configurations are supported:
Single Branch Mode Sparing
Dual Branch Mode Sparing
3.1.3.4.2.1 Single Branch Mode Sparing

Figure 9. Single Branch Mode Sparing DIMM Configuration
DIMM_A1 and DIMM_B1 must be identical in organization, size and speed.
DIMM_A2 and DIMM_B2 must be identical in organization, size and speed.
DIMM_A1 and DIMM_A2 should be identical in organization, size and speed. See note below.
DIMM_B1 and DIMM_B2 should be identical in organization, size and speed. See note below.
Sparing should be enabled in the BIOS setup.
The BIOS will configure Rank Sparing Mode.
The larger of the pairs {DIMM_A1, DIMM_B1} and {DIMM_A2, DIMM_B2} will be selected as the
spare pair unit.
Note: Use of identical memory is recommended with the Intel® Compute Module MFS5000SI. Mixing memory type, size, speed, rank, and/or vendors is not validated and is not supported with this product.
Refer to section 3.1.3.2 for supported and nonsupported memory features and configuration information.
3.1.3.4.2.2 Dual Branch Mode Sparing
Dual branch mode sparing requires that all eight DIMM slots be populated and compliant with the
following population rules.
DIMM_A1 and DIMM_B1 must be identical in organization, size and speed.
DIMM_A2 and DIMM_B2 must be identical in organization, size and speed.
DIMM_C1 and DIMM_D1 must be identical in organization, size and speed.
DIMM_C2 and DIMM_D2 must be identical in organization, size and speed.
DIMM_A1 and DIMM_A2 should be identical in organization, size and speed. See note below.
DIMM_B1 and DIMM_B2 should be identical in organization, size and speed. See note below.
DIMM_C1 and DIMM_C2 should be identical in organization, size and speed. See note below.
DIMM_D1 and DIMM_D2 should be identical in organization, size and speed. See note below.
Sparing should be enabled in BIOS setup.
BIOS will configure Rank Sparing Mode.
The larger of the pairs {DIMM_A1, DIMM_B1} and {DIMM_A2, DIMM_B2} and {DIMM_C1,
DIMM_D1} and {DIMM_C2, DIMM_D2} will be selected as the spare pair units.
Note: Use of identical memory is recommended with the Intel® Compute Module MFS5000SI. Mixing memory type, size, speed, rank, and/or vendors is not validated and is not supported with this product.
Refer to section 3.1.3.2 for supported and nonsupported memory features and configuration information.
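The spare-pair selection rule quoted above (the larger pair is reserved as the spare) can be sketched as follows; the pair labels and sizes are hypothetical examples, not a BIOS interface:

def pick_spare_pair(pairs):
    """Select the spare pair: the larger pair is reserved, per the rule above.

    `pairs` maps a pair label to its per-DIMM size in GB,
    e.g. {"A1/B1": 2, "A2/B2": 4}.
    """
    return max(pairs, key=pairs.get)

# Single branch mode sparing on Branch 0:
print(pick_spare_pair({"A1/B1": 2, "A2/B2": 4}))  # 'A2/B2' becomes the spare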
3.2 Intel® 6321ESB I/O Controller Hub

The Intel® 6321ESB I/O Controller Hub is a multi-function device that provides four distinct functions: an I/O Controller, a PCI-X Bridge, a Gb Ethernet Controller, and an Integrated Baseboard Management Controller (BMC). Each function within the Intel® 6321ESB I/O Controller Hub has its own set of configuration registers. Once configured, each appears to the system as a distinct hardware controller.

A primary role of the Intel® 6321ESB I/O Controller Hub is to provide the gateway to all PC-compatible I/O devices and features. The server board uses the following Intel® 6321ESB I/O Controller Hub features:
Dual GbE MAC
Integrated Baseboard Management Controller (BMC)
Universal Serial Bus 2.0 (USB) interface
LPC bus interface
PC-compatible timer/counter and DMA controllers
APIC and 8259 interrupt controller
Power management
System RTC
General purpose I/O
This section describes the function of most of the listed features as they pertain to this server board. For more detailed information, see the Intel® 5000 Series Chipsets Server Board Family Datasheet or the Intel® Enterprise South Bridge-2 External Design Specification.
3.2.1 PCI Subsystem
The primary I/O buses for the server board are PCI and PCI Express*. The PCI buses comply with the
PCI Local Bus Specification, Revision 2.3. The following table lists the characteristics of the PCI bus
segments. Details about each bus segment follow the table.
Table 4. PCI Bus Segment Characteristics

PCI Bus Segment                            Voltage  Width   Speed    Type          On-board Device Support
PCI32: Intel® 6321ESB I/O Controller Hub   3.3V     32 bit  33 MHz   PCI           Used internally for video controller
PE1: Intel® 6321ESB PCI Express* Port 2    3.3V     x4      10 Gb/s  PCI Express*  This interface is not used in the Intel® Compute Module MFS5000SI design
PE2: Intel® 6321ESB PCI Express* Port 3    3.3V     x4      10 Gb/s  PCI Express*  Supports the LSI* 1064e SAS controller
PE4, PE5: BNB PCI Express* Ports 4, 5      3.3V     x8      20 Gb/s  PCI Express*  Supports the optional I/O mezzanine card
PE6, PE7: BNB PCI Express* Ports 6, 7      3.3V     x8      20 Gb/s  PCI Express*  This interface is not used in the Intel® Compute Module MFS5000SI design
3.2.1.1 PCI32: 32-bit, 33-MHz PCI Bus Segment
All 32-bit, 33-MHz PCI I/O is directed through the Intel® 6321ESB I/O Controller Hub. The 32-bit, 33-MHz PCI segment created by the Intel® 6321ESB I/O Controller Hub is known as the PCI32 segment. The PCI32 segment supports the following embedded device:
2D Graphics Accelerator: ATI* ES1000 Video Controller
3.2.1.2 PXA: 64-bit, 133-MHz PCI-X Bus Segment

One 64-bit PCI-X bus segment is directed through the Intel® 6321ESB I/O Controller Hub. PCI-X segment PXA is not used in the Intel® Compute Module MFS5000SI design.
3.2.1.3 PE1: One x4 PCI Express* Bus Segment
One x4 PCI Express* bus segment is directed through the Intel® 6321ESB I/O Controller Hub. PCI Express* segment PE1 is not used in the Intel® Compute Module MFS5000SI design.
3.2.1.4 PE2: One x4 PCI Express* Bus Segment
One x4 PCI Express* bus segment is directed through the Intel® 6321ESB I/O Controller Hub. PCI
Express* segment PE2 supports the LSI* 1064e SAS controller.
3.2.1.5 PE4, PE5: Two x4 PCI Express* Bus Segments
Two x4 PCI Express* bus segments are directed through the MCH. PCI Express* segments PE4 and
PE5 support the optional I/O mezzanine card.
3.2.1.6 PE6, PE7: Two x4 PCI Express* Bus Segments
Two x4 PCI Express* bus segments are directed through the MCH. PCI Express* segments PE6 and PE7 are not used in the Intel® Compute Module MFS5000SI design.
3.2.2 Serial ATA Support
The Intel® 6321ESB I/O Controller Hub has an integrated Serial ATA (SATA) controller that supports independent DMA operation on six ports and supports data transfer rates of up to 3.0 Gb/s. These ports are not used in the Intel® Compute Module MFS5000SI design.
3.2.3 Parallel ATA (PATA) Support
The integrated IDE controller of the Intel® 6321ESB I/O Controller Hub provides one IDE channel. The PATA interface is not used in the Intel® Compute Module MFS5000SI design.

3.2.4 USB 2.0 Support

The USB controller functionality integrated into the Intel® 6321ESB I/O Controller Hub provides the server board with the interface for up to eight USB 2.0 ports. Two external connectors are located on the front edge of the server board. These two ports are the only ports of the Intel® 6321ESB I/O Controller Hub that are used in the compute module design.
3.3 Video Support
The server board provides an ATI* ES1000 PCI graphics accelerator, along with 16 MB of video DDR SDRAM and support circuitry for an embedded SVGA video subsystem. The ATI* ES1000 chip contains an SVGA video controller, clock generator, 2D engine, and RAMDAC in a 359-pin BGA. One 4M x 16 x 4-bank DDR SDRAM chip provides 16 MB of video memory.
The SVGA subsystem supports a variety of modes, up to 1024 x 768 resolution in 8 / 16 / 32 bpp modes
under 2D. It also supports both CRT and LCD monitors up to a 100-Hz vertical refresh rate.
Video is accessed using a standard 15-pin VGA connector found on the front edge of the server board.
Hot plugging the video while the system is still running is supported.
On-board video can be disabled using the BIOS Setup utility.
3.3.1.1 Video Modes
The ATI* ES1000 chip supports all standard IBM* VGA modes. The following table shows the 2D modes
supported for both CRT and LCD.
The memory controller subsystem of the ATI* ES1000 arbitrates requests from the direct memory interface, the VGA graphics controller, the drawing co-processor, the display controller, the video scaler, and the hardware cursor. Requests are serviced in a manner that ensures display integrity and maximum CPU/co-processor drawing performance.
The server board supports a 16 MB (4Meg x 16-bit x 4 banks) DDR SDRAM device for video memory.
3.4 Network Interface

Network interface support is provided from the built-in Dual GbE MAC features of the Intel® 6321ESB I/O
Controller Hub. These interfaces are routed over the midplane board to the Ethernet switch module in the
rear of the system. These interfaces are used in SERDES mode and do not require a Physical Layer
Transceiver (PHY). These ports provide the server board with support for dual LAN ports designed for
10/100/1000 Mbps operation.
Each Network Interface Controller (NIC) drives a single LED located on the front edge of the board. The
link/activity LED indicates network connection when on, and Transmit/Receive activity when blinking.
3.4.1 Intel® I/O Acceleration Technology

Intel® I/O Acceleration Technology (Intel® I/OAT) moves network data more efficiently through Dual-Core and Quad-Core Intel® Xeon® processor 5000 sequence-based servers for improved application responsiveness across diverse operating systems and virtualized environments. Intel® I/OAT improves network application responsiveness by unleashing the power of Dual-Core and Quad-Core Intel® Xeon® processors 5000 sequence through more efficient network data movement and reduced system overhead. Intel multi-port network adapters with Intel® I/OAT provide high-performance I/O for server consolidation and virtualization through stateless network acceleration that seamlessly scales across multiple ports and virtual machines. Intel® I/OAT provides safe and flexible network acceleration through tight integration into popular operating systems and virtual machine monitors, avoiding the support risks of third-party network stacks and preserving existing network requirements, such as teaming and failover.
3.4.2 MAC Address Definition
Each Intel® Compute Module MFS5000SI has four MAC addresses assigned to it at the Intel factory. During the manufacturing process, each server board has a white MAC address sticker placed on the board. The sticker displays the MAC address in both barcode and alphanumeric formats. The printed MAC address is assigned to NIC 1 on the server board. NIC 2 is assigned the NIC 1 MAC address + 1.

Two additional MAC addresses are assigned to the Integrated Baseboard Management Controller (BMC) embedded in the Intel® 6321ESB I/O Controller Hub. These MAC addresses are used by the Integrated BMC's embedded network stack to enable IPMI remote management over LAN. BMC LAN Channel 1 is assigned the NIC 1 MAC address + 2, and BMC LAN Channel 2 is assigned the NIC 1 MAC address + 3.
3.5 Super I/O
Legacy I/O support is provided by using a National Semiconductor* PC87427 Super I/O device. This chip
contains all of the necessary circuitry to support the following functions:
GPIOs
One serial port (internal and used for debug only)
Wake-up control
3.5.1.1 Serial Ports
The server board provides one serial port through an internal DH-10 serial header (J1B1) to be used for
debug purposes only. The serial interface follows the standard RS-232 pin-out as defined in the following
table.
3.5.1.2 Floppy Disk Support

The server board does not support a floppy disk controller (FDC) interface. However, the system BIOS does recognize USB floppy devices.
3.5.1.3 Keyboard and Mouse Support
Keyboard and mouse support is provided locally by the two USB ports located on the front panel of the
board. The compute module also provides remote keyboard and mouse support.
3.5.1.4 Wake-up Control
The super I/O contains functionality that allows various events to power on and power off the system.
4. Connector / Header Locations and Pin-outs

The following section provides detailed information regarding all connectors, headers, and jumpers on the server board. Table 7 lists all connector types available on the board and the corresponding reference designators printed on the silkscreen.
Table 7. Board Connector Matrix
Connector Quantity Reference Designators
Power Connector 1 J1A1
Midplane Signal Connector 1 J3A1
CPU 2 J7G1, J5G1
Main Memory 8 J7B1, J7B2, J7B3, J8B2, J8B3, J8B4, J9B2, J9B3
I/O Mezzanine 1 J2B1
Battery 1 XBT1F1
USB 2 J4K1,J4K2
Serial Port A 1 J1B1
Video connector 1 J6K1
System Recovery Setting Jumpers 1 J4A1, J7A1, J1F2
4.2 Power Connectors
The power connection is obtained using a 2x2 FCI Airmax* power connector. The following table defines
the power connector pin-out.
The following table details the pin-out definition of the VGA connector (J6K1).
Table 9. VGA Connector Pin-out (J6K1)
Pin Signal Name Description
1 V_IO_R_CONN Red (analog color signal R)
2 V_IO_G_CONN Green (analog color signal G)
3 V_IO_B_CONN Blue (analog color signal B)
4 TP_VID_CONN_B4 No connection
5 GND Ground
6 GND Ground
7 GND Ground
8 GND Ground
9 TP_VID_CONN_B9 No connection
10 GND Ground
11 TP_VID_CONN_B11 No connection
12 V_IO_DDCDAT DDCDAT
13 V_IO_HSYNC_CONN HSYNC (horizontal sync)
14 V_IO_VSYNC_CONN VSYNC (vertical sync)
15 V_IO_DDCCLK DDCCLK
4.3.2 I/O Mezzanine Card Connector

The server board provides an internal 120-pin Airmax* connector (J2B1) to accommodate high-speed I/O expansion modules, which expand the I/O capabilities of the server board. The currently available I/O mezzanine card for this server is the Intel® Gigabit Ethernet card based on the Intel® 82571EB. The following table details the pin-out of the I/O mezzanine card connector.
The server board connects to the midplane through a 96-pin Airmax* signal connector (J3A1) and the J1A1 power connector, which carry the various I/O, management, and control signals of the system.
Table 11. 96-pin Midplane Signal Connector Pin-out
5. Jumper Block Settings

The server board has several 3-pin jumper blocks that can be used to configure, protect, or recover specific features of the server board. Pin 1 on each jumper block is denoted by an "*" or "▼".

J7A1: BMC Force Update
1-2: BMC Firmware Force Update Mode – Enabled
2-3: BMC Firmware Force Update Mode – Disabled (Default)

J4A1: Password Clear
1-2: These pins should have a jumper in place for normal system operation. (Default)
2-3: If these pins are jumpered, the administrator and user passwords are cleared immediately. These pins should not be jumpered for normal operation.

J1F2: CMOS Clear
1-2: These pins should have a jumper in place for normal system operation. (Default)
2-3: If these pins are jumpered, the CMOS settings are cleared immediately. These pins should not be jumpered for normal operation.

J3A3: BIOS Bank Select
1-2: If these pins are jumpered, the BIOS is forced to boot from the lower bank. These pins should not be jumpered for normal operation.
2-3: These pins should have a jumper in place for normal system operation. (Default)
5.1.1 CMOS Clear and Password Reset Usage Procedure
The CMOS Clear (J1F2) and Password Reset (J4A1) recovery features are designed such that the desired operation can be achieved with minimal system down time. The usage procedure for these two features has changed from previous generation Intel® server boards. The following procedure outlines the new usage model.
1. Power down compute module (do not remove AC power).
2. Remove compute module from modular server chassis.
3. Open compute module.
4. Move jumper from Default operating position (pins 1-2) to Reset/Clear position (pins 2-3).
5. Wait 5 seconds.
6. Move jumper back to default position (pins 1-2).
7. Close the compute module.
8. Reinstall compute module in modular server chassis.
9. Power up the compute module.
The password and/or CMOS settings are now cleared and can be reset in the BIOS Setup utility.
Note: Removing AC power before performing the CMOS Clear operation will cause the system to automatically power up and immediately power down after the reset procedure has been completed and AC power is re-applied. Should this occur, remove the AC power cord again, wait 30 seconds, and reinstall the AC power cord. Power up the system and proceed to the <F2> BIOS Setup utility to reset the desired settings.
5.1.2 BMC Force Update Procedure
When performing a standard BMC firmware update procedure, the update utility places the BMC into an
update mode, allowing the firmware to load safely onto the flash device. In the unlikely event that the
BMC firmware update process fails due to the BMC not being in the proper update state, the server board
provides a BMC Force Update jumper (J7A1) which will force the BMC into the proper update state. The
following procedure should be followed in the event the standard BMC firmware update process fails.
1. Power down and remove AC power.
2. Remove compute module from modular server chassis.
3. Open compute module.
4. Move jumper from Default operating position (pins 2-3) to "Enabled" position (pins 1-2).
5. Close the compute module.
6. Reconnect AC power and power up the compute module.
7. Perform standard BMC firmware update procedure through the Intel® Modular Server Control software.
8. Power down and remove AC power.
9. Remove compute module from the server system.
10. Move jumper from "Enabled" position (pins 1-2) to "Disabled" position (pins 2-3).
11. Close the compute module.
12. Reinstall the compute module into the modular server chassis.
13. Reconnect AC power and power up the compute module.
Note: Normal BMC functionality (for example, KVM, monitoring, and remote media) is disabled with the
force BMC update jumper set to the “Enabled” position. The server should never be run with the BMC
force update jumper set in this position and should only be used when the standard firmware update
process fails. This jumper should remain in the default – disabled position when the server is running
normally.
5.1.3 System Status LED – BMC Initialization
When the AC power is first applied to the system and 5V-STBY is present, the Integrated BMC controller
on the server board requires 15-20 seconds to initialize. During this time, the system status LED blinks,
alternating between amber and green, and the power button functionality of the control panel is disabled,
preventing the server from powering up. Once BMC initialization has completed, the status LED stops
blinking and power button functionality is restored. The power button can then be used to turn on the
server.
6. Product Regulatory Requirements

6.1 Product Regulatory Compliance

The Intel® Compute Module MFS5000SI is evaluated as part of the Intel® Modular Server System MFSYS25/MFSYS35, which requires meeting all applicable system component regulatory requirements. Refer to the Intel® Modular Server System MFSYS25/MFSYS35 Technical Product Specification for a complete listing of all system and component regulatory requirements.
6.2 Product Regulatory Compliance and Safety Markings
No markings are required on the Intel® Compute Module MFS5000SI server board itself as it is evaluated
as part of the Intel® Modular Server System MFSYS25/MFSYS35.
6.3 Product Environmental/Ecology Requirements
The Intel® Compute Module MFS5000SI is evaluated as part of the Intel® Modular Server System
MFSYS25/MFSYS35, which requires meeting all applicable system component environmental and
ecology requirements. For a complete listing of all system and component environment and ecology
requirements and markings, refer to the Intel® Modular Server System MFSYS25/MFSYS35 Technical Product Specification.
6.4 Product Environmental/Ecology Markings
The following Product Ecology markings are required on the Intel® Compute Module MFS5000SI server
board:
Requirement
China Restriction of Hazardous Substance
Environmental Friendly Use Period Mark
Appendix A: Integration and Usage Tips

When two processors are installed, both must be of identical revision, core voltage, and bus/core speed. Mixed processor steppings are supported; however, the stepping of one processor cannot be greater than one stepping back of the other.
Processors must be installed in order. CPU 1 is located near the edge of the server board and
must be populated to operate the board.
Only Fully Buffered DIMMs (FBD) are supported on this server board.
Mixing memory type, size, speed, rank and/or memory vendors is not validated and is not
supported on this server board.
Non-ECC memory is not validated and is not supported in a server environment.
For a list of supported memory for this server board, see the Intel® Compute Module MFS5000SI Tested Memory List in the Intel® Server Configurator Tool.
For a list of Intel supported operating systems, add-in cards, and peripherals for this server board, see the Intel® Compute Module MFS5000SI Tested Hardware and Operating System List.
Only Dual-Core Intel® Xeon® processors 5100 sequence or Quad-Core Intel® Xeon® processors 5300 sequence, with system bus speeds of 1066/1333 MHz, are supported on this server board. Previous generation Intel® Xeon® processors are not supported.
For best performance, the number of DIMMs installed should be balanced across both memory
branches. For example, a four-DIMM configuration will perform better than a two-DIMM
configuration and should be installed in DIMM Slots A1, B1, C1, and D1. An eight-DIMM
configuration will perform better than a six-DIMM configuration.
Normal Integrated BMC functionality (for example, KVM, monitoring, and remote media) is
disabled with the force BMC update jumper set to the “enabled” position (pins 1-2). The server
should never be run with the BMC force update jumper set in this position and should only be
used when the standard firmware update process fails. This jumper should remain in the default
(disabled) position (pins 2-3) when the compute module is running normally.
When performing the BIOS update procedure, the BIOS select jumper must be set to its default position.
Appendix B: BMC Sensor Tables

Table 15 lists the sensor identification numbers and information regarding the sensor type, name, supported thresholds, and a brief description of the sensor purpose. See the Intelligent Platform Management Interface Specification, Version 2.0, for sensor and event / reading-type table information.
Sensor Type
The Sensor Type references the values enumerated in the Sensor Type Codes table in the IPMI
Specification. It provides the context in which to interpret the sensor, for example, the physical entity or
characteristic that is represented by this sensor.
Event / Reading Type
The Event / Reading Type references values from the Event / Reading Type Code Ranges and Generic
Event / Reading Type Codes tables in the IPMI Specification. Note that digital sensors are a specific type of discrete sensor, which has only two states.
Event Offset Triggers
This column defines what event offsets the sensor generates.
For Threshold (analog reading) type sensors, the BMC can generate events for the following thresholds:
The abbreviation [U, L] is used to indicate that both Upper and Lower thresholds are supported. A few
sensors support only a subset of the standard four threshold triggers. Note that even if a sensor does
support all thresholds, the SDRs may not contain values for some thresholds. Consult Table 16 for
information on the thresholds that are defined in the SDRs.
For Digital and Discrete type sensor event triggers, the supported event generating offsets are listed. The
offsets can be found in the Generic Event / Reading Type Codes or Sensor Type Codes tables in the
IPMI Specification, depending on whether the sensor event / reading type is a generic or sensor-specific
response.
All sensors generate both assertions and deassertions of the defined event triggers. The assertions and
deassertions may or may not generate events into the System Event Log (SEL), depending on the sensor
SDR settings.
Fault LED
This column indicates whether an assertion of an event lights the front panel fault LED. The Integrated
BMC aggregates all fault sources (including outside sources such as the BIOS) such that the LED will be
lit as long as any source indicates that a fault state exists. The Integrated BMC extinguishes the fault LED
when all sources indicate no faults are present.
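A minimal sketch of this aggregation rule (illustrative only; the source labels are hypothetical):

class FaultLed:
    """The LED stays lit while any fault source still asserts a fault."""
    def __init__(self):
        self.sources = set()
    def assert_fault(self, source):
        self.sources.add(source)
    def deassert_fault(self, source):
        self.sources.discard(source)
    @property
    def lit(self):
        return bool(self.sources)

led = FaultLed()
led.assert_fault("BIOS")
led.assert_fault("Power Unit sensor")
led.deassert_fault("BIOS")
print(led.lit)  # True: the sensor fault is still asserted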
Rearm

The rearm is a request for the event status for a sensor to be rechecked and updated upon a transition
between good and bad states. Rearming the sensors can be done manually or automatically. The
following abbreviations are used in the column:
‘A’: Auto rearm
‘M’: Manual rearm
Readable
Some sensors are used simply to generate events into the System Event Log. The Watchdog timer sensor is one example. These sensors operate by asserting and then immediately de-asserting an event. Typically the SDRs for such sensors are defined such that only the assertion causes an event message to be deposited in the SEL. Reading such a sensor produces no useful information and is marked as 'No' in this column. Note that some sensors may actually be unreadable in that they return an error code in response to the IPMI Get Sensor Reading command. These sensors are represented by type 3 SDR records.

Standby

Some sensors operate on standby power. These sensors may be accessed and/or generate events when the compute module payload power is off, but standby power is present.
Table 15. BMC Sensors

Name                     #     Sensor Type             Event / Reading Type  Event Offset Triggers                                    Fault LED
Power Unit Status        01h   Power Unit 09h          Sensor Specific 6Fh   0: Power down                                            None
                                                                             1: Power cycle                                           None
                                                                             4: A/C lost (DC input lost)                              None
                                                                             5: Soft power control failure (did not turn on or off)
Watchdog                 03h   Watchdog 2 23h          Sensor Specific 6Fh
System ACPI Power State  0Ch   System ACPI Power 22h   Sensor Specific 6Fh
Appendix C: POST Error Messages and Handling
Whenever possible, the BIOS will output the current boot progress codes on the video screen. Progress
codes are 32-bit quantities plus optional data. The 32-bit numbers include class, subclass, and operation
information. The class and subclass fields point to the type of hardware that is being initialized. The
operation field represents the specific initialization activity. Based on the number of data bits available to display progress codes, a progress code can be customized to fit the data width. The more data bits available, the higher the granularity of information that can be sent on the progress port. The progress codes may be
reported by the system BIOS or option ROMs.
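As an illustration, a 32-bit progress code splits into its class, subclass, and operation fields. The document does not give the exact bit layout, so the split below (class in bits 31:24, subclass in 23:16, operation in 15:0) is an assumption for illustration only:

def decode_progress_code(code):
    """Split a 32-bit progress code into class / subclass / operation fields."""
    return {
        "class": (code >> 24) & 0xFF,  # type of hardware being initialized
        "subclass": (code >> 16) & 0xFF,
        "operation": code & 0xFFFF,    # specific initialization activity
    }

print(decode_progress_code(0x01020003))
# {'class': 1, 'subclass': 2, 'operation': 3}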
The Response section in the following table is divided into three types:
Minor: The message is displayed on the screen or in the Error Manager screen. The system will
continue booting with a degraded state. The user may want to replace the erroneous unit. The
setup POST error Pause setting does not have any effect with this error.
Major: The message is displayed in the Error Manager screen, and an error is logged to the SEL.
The setup POST error Pause setting determines whether the system pauses to the Error
Manager for this type of error, where the user can take immediate corrective action or choose to
continue booting.
Fatal: The message is displayed in the Error Manager screen, an error is logged to the SEL, and
the system cannot boot unless the error is resolved. The user needs to replace the faulty part and
restart the system. The setup POST error Pause setting does not have any effect with this error.
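The three response types imply a simple decision rule for whether POST stops at the Error Manager. A sketch, illustrative only (the BIOS exposes no such function):

def should_pause(severity, post_error_pause_enabled):
    """Decide whether POST stops at the Error Manager for an error."""
    if severity == "fatal":
        return True   # boot cannot continue until the error is resolved
    if severity == "major":
        return post_error_pause_enabled  # the setup "POST Error Pause" option
    return False      # minor: continue booting in a degraded state

print(should_pause("major", post_error_pause_enabled=False))  # False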
Table 17. POST Error Messages and Handling
Error Code Error Message Response
004C Keyboard / interface error Major
0012 CMOS date / time not set Major
0048 Password check failed Fatal
0141 PCI resource conflict Major
0146 Insufficient memory to shadow PCI ROM Major
0192 L3 cache size mismatch Fatal
0194 CPUID, processor family are different Fatal
0195 Front side bus mismatch Major
0197 Processor speeds mismatched Major
5220 Configuration cleared by jumper Minor
5221 Passwords cleared by jumper Major
8110 Processor 01 internal error (IERR) on last boot Major
8111 Processor 02 internal error (IERR) on last boot Major
8120 Processor 01 thermal trip error on last boot Major
8121 Processor 02 thermal trip error on last boot Major
8130 Processor 01 disabled Major
8131 Processor 02 disabled Major
8160 Processor 01 unable to apply BIOS update Major
8161 Processor 02 unable to apply BIOS update Major
8190 Watchdog timer failed on last boot Major
8198 Operating system boot watchdog timer expired on last boot Major
8300 Baseboard management controller failed self-test Major
8305 Hot swap controller failed Major
84F2 Baseboard management controller failed to respond Major
84F3 Baseboard management controller in update mode Major
84F4 Sensor data record empty Major
84FF System event log full Minor
8500 Memory Component could not be configured in the selected RAS mode. Major
8510 System supports a maximum of 16 GB of main memory. Additional memory will not be counted. (This error is S5000V specific.) Major
8520 DIMM_A1 failed Self Test (BIST). Major
8521 DIMM_A2 failed Self Test (BIST). Major
8522 DIMM_A3 failed Self Test (BIST). Major
8523 DIMM_A4 failed Self Test (BIST). Major
8524 DIMM_B1 failed Self Test (BIST). Major
8525 DIMM_B2 failed Self Test (BIST). Major
8526 DIMM_B3 failed Self Test (BIST). Major
8527 DIMM_B4 failed Self Test (BIST). Major
8528 DIMM_C1 failed Self Test (BIST). Major
8529 DIMM_C2 failed Self Test (BIST). Major
852A DIMM_C3 failed Self Test (BIST). Major
852B DIMM_C4 failed Self Test (BIST). Major
852C DIMM_D1 failed Self Test (BIST). Major
852D DIMM_D2 failed Self Test (BIST). Major
852E DIMM_D3 failed Self Test (BIST). Major
852F DIMM_D4 failed Self Test (BIST). Major
8580 DIMM_A1 Correctable ECC error encountered. Minor/Major after 10 events
8581 DIMM_A2 Correctable ECC error encountered. Minor/Major after 10 events
8582 DIMM_A3 Correctable ECC error encountered. Minor/Major after 10 events
8583 DIMM_A4 Correctable ECC error encountered. Minor/Major after 10 events
8584 DIMM_B1 Correctable ECC error encountered. Minor/Major after 10 events
8585 DIMM_B2 Correctable ECC error encountered. Minor/Major after 10 events
8586 DIMM_B3 Correctable ECC error encountered. Minor/Major after 10 events
8587 DIMM_B4 Correctable ECC error encountered. Minor/Major after 10 events
8588 DIMM_C1 Correctable ECC error encountered. Minor/Major after 10 events
8589 DIMM_C2 Correctable ECC error encountered. Minor/Major after 10 events
858A DIMM_C3 Correctable ECC error encountered. Minor/Major after 10 events
858B DIMM_C4 Correctable ECC error encountered. Minor/Major after 10 events
858C DIMM_D1 Correctable ECC error encountered. Minor/Major after 10 events
858D DIMM_D2 Correctable ECC error encountered. Minor/Major after 10 events
858E DIMM_D3 Correctable ECC error encountered. Minor/Major after 10 events
858F DIMM_D4 Correctable ECC error encountered. Minor/Major after 10 events
8601 Override jumper is set to force boot from lower alternate BIOS bank of flash ROM. Minor
8602 WatchDog timer expired (secondary BIOS may be bad!). Minor
8603 Secondary BIOS checksum fail. Minor
92A3 Serial port component was not detected. Major
92A9 Serial port component encountered a resource conflict error. Major
0xA000 TPM device not detected. Minor
0xA001 TPM device missing or not responding. Minor
0xA002 TPM device failure Minor
0xA003 TPM device failed self test. Minor
POST Error Pause Option
In case of POST error(s) that are listed as “Major”, the BIOS enters the Error Manager and waits for the
user to press an appropriate key before booting the operating system or entering the BIOS Setup.
The user can override this behavior by setting the "POST Error Pause" option to "disabled" in the BIOS Setup Main menu page. If the "POST Error Pause" option is set to "disabled", the system boots the operating system without user intervention. The default value is set to "disabled".
POST Error Beep Codes
The following table lists the POST error beep codes. Prior to system video initialization, the BIOS uses
these beep codes to inform users of error conditions. The beep code is followed by a user visible code on
POST Progress LEDs.
Table 18. POST Error Beep Codes

Beeps  Error Message  POST Progress Code (PPC)  Description
3      Memory error   No PPC                    System halted because a fatal error related to the memory was detected.
6      BIOS recovery  No PPC                    The system has detected a corrupted BIOS in the flash part.
Appendix D: Supported Intel® Modular Server System
The Intel® Compute Module MFS5000SI is supported in the following chassis:
Intel® Modular Server System MFSYS25
Intel® Modular Server System MFSYS35
This section provides a high-level descriptive overview of each chassis. For more details, refer to the Intel® Modular Server System MFSYS25/MFSYS35 Technical Product Specification (TPS).
A Shared hard drive storage bay
B I/O cooling fans
C Empty compute module bay
D Compute module cooling fans
E Compute module midplane connectors
Figure 11. Intel® Modular Server System MFSYS25
Glossary
This appendix contains important terms used in the preceding chapters. For ease of use, numeric entries
are listed first (for example, “82460GX”) followed by alpha entries (for example, “AGP 4x”). Acronyms are
followed by non-acronyms.
Term Definition
ACPI Advanced Configuration and Power Interface
AP Application Processor
APIC Advanced Programmable Interrupt Controller
ASIC Application Specific Integrated Circuit
ASMI Advanced Server Management Interface
BIOS Basic Input/Output System
BIST Built-In Self Test
BMC Baseboard Management Controller
Bridge Circuitry connecting one computer bus to another, allowing an agent on one to access the other
BSP Bootstrap Processor
byte 8-bit quantity.
CBC Chassis Bridge Controller (a microcontroller connected to one or more other CBCs; together they bridge the IPMB buses of multiple chassis)
CEK Common Enabling Kit
CHAP Challenge Handshake Authentication Protocol
CMOS In terms of this specification, this describes the PC-AT compatible region of battery-backed 128 bytes
of memory, which normally resides on the server board.