Revision History
Date | Revision Number | Modifications
February, 2009 | 1.0 | Initial release.
June, 2009 | 1.1 | Updated the document.
March, 2010 | 1.2 | Updated the document.
April, 2010 | 1.3 | Updated the document.
May, 2010 | 1.4 | Removed CCC and CNCA.
December, 2010 | 1.5 | Updated Video mode info and BMC memory size.
Disclaimers
Information in this document is provided in connection with Intel® products. No license, express or implied, by
estoppel or otherwise, to any intellectual property rights is granted by this document. Except as provided in Intel's
Terms and Conditions of Sale for such products, Intel assumes no liability whatsoever, and Intel disclaims any
express or implied warranty, relating to sale and/or use of Intel products including liability or warranties relating to
fitness for a particular purpose, merchantability, or infringement of any patent, copyright or other intellectual property
right. Intel products are not intended for use in medical, life saving, or life sustaining applications. Intel may make
changes to specifications and product descriptions at any time, without notice.
Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or
"undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or
incompatibilities arising from future changes to them.
The Intel® Compute Module MFS5520VI may contain design defects or errors known as errata which may cause the
product to deviate from published specifications. Current characterized errata are available on request.
Intel Corporation server baseboards support peripheral components and contain a number of high-density VLSI and
power delivery components that need adequate airflow to cool. Intel’s own chassis are designed and tested to meet
the intended thermal requirements of these components when the fully integrated system is used together. It is the
responsibility of the system integrator that chooses not to use Intel developed server building blocks to consult vendor
datasheets and operating parameters to determine the amount of air flow required for their specific application and
environmental conditions. Intel Corporation cannot be held responsible if components fail or the compute module
does not operate correctly when used outside any of their published operating or non-operating limits.
Intel, Pentium, Itanium, and Xeon are trademarks or registered trademarks of Intel Corporation.
*Other brands and names may be claimed as the property of others.
1. Introduction
This Technical Product Specification (TPS) provides board-specific information detailing the
features, functionality, and high-level architecture of the Intel® Compute Module MFS5520VI.
1.1 Chapter Outline
This document is divided into the following chapters:
• Chapter 1 – Introduction
• Chapter 2 – Product Overview
• Chapter 3 – Functional Architecture
• Chapter 4 – Connector/Header Locations and Pin-outs
• Chapter 5 – Jumper Block Settings
• Chapter 6 – Product Regulatory Requirements
• Appendix A – Integration and Usage Tips
• Appendix B – BMC Sensor Tables
• Appendix C – POST Error Messages and Handling
• Appendix D – Supported Intel® Modular Server System
• Glossary
• Reference Documents
1.2 Intel® Compute Module Use Disclaimer
Intel® Modular Server components require adequate airflow to cool. Intel ensures through its
own chassis development and testing that when these components are used together, the fully
integrated system will meet the intended thermal requirements. It is the responsibility of the
system integrator who chooses not to use Intel-developed server building blocks to consult
vendor datasheets and operating parameters to determine the amount of airflow required for
their specific application and environmental conditions. Intel Corporation cannot be held
responsible if components fail or the system does not operate correctly when used outside any
of their published operating or non-operating limits.
The Intel® Compute Module MFS5520VI is a monolithic printed circuit board with features
designed to support the high-density compute module market.
2.1 Intel® Compute Module MFS5520VI Feature Set
Table 1. Intel® Compute Module MFS5520VI Feature Set

Feature | Description
Processors | Support for one or two Intel® Xeon® Processor 5500 series or Intel® Xeon® Processor 5600 series processors in FC-LGA 1366 Socket B package with up to 95 W Thermal Design Power (TDP).
The following figure shows the board layout of the Intel® Compute Module MFS5520VI. Each
connector and major component is identified by a number or letter. A description of each
identified item is provided below the figure.
A: Intel® 5520 Chipset I/O Hub
B: CPU 2 DIMM Slots
C: Mezzanine Card Connector 1
D: CPU 1 with Heatsink
E: Mezzanine Card Connector 2
F: Midplane Power Connector
G: Midplane Signal Connector
H: Midplane Guide Pin Receptacle
I: CPU 1 DIMM Slots
J: CPU 2 Socket
K: Power/Fault LEDs
L: Power Switch
M: Activity and ID LEDs
N: Video Connector
O: USB Ports 2 and 3
P: USB Ports 0 and 1
Q: CMOS Battery
Figure 1. Component and Connector Location Diagram
2.2.2 External I/O Connector Locations
The following drawing shows the layout of the external I/O components for the Intel® Compute
Module MFS5520VI.
The architecture and design of the Intel® Compute Module MFS5520VI is based on the Intel®
5520 Chipset I/O Hub (IOH) and the Intel® 82801JR ICH10 RAID. The chipset is designed for
systems based on the Intel® Xeon® Processor in FC-LGA 1366 Socket B package with Intel®
QuickPath Interconnect (Intel® QPI). The chipset contains two main components:
• Intel® 5520 Chipset I/O Hub (IOH), which provides a connection point between various I/O components.
• Intel® 82801JR, which is the I/O controller hub (ICH10R) for the I/O subsystem.
This chapter provides a high-level description of the functionality associated with each chipset
component and the architectural blocks that make up the server board.
The Compute Module supports the following processors:
• One or two Intel® Xeon® Processor 5500 series with a 4.8 GT/s, 5.86 GT/s, or 6.4 GT/s Intel® QPI link interface and Thermal Design Power (TDP) up to 95 W.
• One or two Intel® Xeon® Processor 5600 series with a 6.4 GT/s Intel® QPI link interface and Thermal Design Power (TDP) up to 95 W.
Previous generations of the Intel® Xeon® processors are not supported on the compute module.
3.1.1.1 Processor Population Rules
Note: Although the Compute Module does support dual-processor configurations consisting of
different processors that meet the defined criteria below, Intel does not perform validation
testing of this configuration. For optimal performance in dual-processor configurations, Intel
recommends that identical processors be installed.
When using a single processor configuration, the processor must be installed into the processor
socket labeled CPU1. A terminator is not required in the second processor socket when using a
single processor configuration.
When two processors are installed, the following population rules apply:
• Both processors must be of the same processor family.
• Both processors must have the same front-side bus speed.
• Both processors must have the same cache size.
• Processors with different speeds can be mixed in a system, given the prior rules are met. If this condition is detected, all processor speeds are set to the lowest common denominator (highest common speed) and an error is reported.
• Processor stepping within a common processor family can be mixed as long as it is listed in the processor specification updates published by Intel Corporation.
3.1.2 Mixed Processor Configuration
The following table describes mixed processor conditions and recommended actions for the
Intel® Compute Module MFS5520VI. Errors fall into one of the following categories:
Fatal: If the compute module can boot, it pauses at a blank screen with the text
“Unrecoverable fatal error found. System will not boot until the error is resolved”
and “Press <F2> to enter setup”, regardless of whether the “Post Error Pause” setup
option is enabled or disabled. When the operator presses the F2 key on the keyboard,
the error message is displayed on the Error Manager screen, and an error is logged with
the error code. The compute module cannot boot unless the error is resolved. The user
needs to replace the faulty part and restart the system.
Major: If the “Post Error Pause” setup option is enabled, the compute module goes
directly to the Error Manager to display the error and log the error code. Otherwise, the
compute module continues to boot and no prompt is given for the error, although the
error code is logged to the Error Manager.
Minor: The message is displayed on the screen or on the Error Manager screen. The
system continues booting in a degraded state. The user may want to replace the
erroneous unit. The POST Error Pause option setting in the BIOS setup does not have
any effect on this error.
Table 2. Mixed Processor Configurations
Error: Processor family not identical
Severity: Fatal
System Action: The BIOS detects the error condition and responds as follows:
- Logs the error.
- Alerts the Integrated BMC about the configuration error.
- Does not disable the processor.
- Displays "0194: Processor 0x family mismatch detected" message in the Error Manager.
- Takes Fatal Error action (see above) and will not boot until the fault condition is remedied.

Error: Processor cache not identical; Processor frequency (speed) not identical; Processor Intel® QuickPath Interconnect speeds not identical
Severity: Fatal
System Action: The BIOS detects the error condition and responds as follows:
- Logs the error.
- Alerts the Integrated BMC about the configuration error.

Error: Processor microcode update not found
Severity: Minor
System Action: The BIOS detects the error condition and responds as follows:
- Logs the error.
- Does not disable the processor.
- Displays "8180: Processor 0x microcode update not found" message in the Error Manager or on the screen.
- The system continues to boot in a degraded state, regardless of the setting of POST Error Pause in the Setup.
3.1.3 Turbo Mode
The Turbo Mode feature allows processors to program thresholds for power/current which can
increase platform performance by 10%.
If the processor supports this feature, the BIOS setup provides an option to enable or disable
this feature. The default is enabled.
3.1.4 Hyper-Threading
Most Intel® Xeon® processors support Intel® Hyper-Threading Technology. The BIOS detects
processors that support this feature and enables the feature during POST.
If the processor supports this feature, the BIOS Setup provides an option to enable or disable
this feature. The default is enabled.
3.1.5 Intel® QuickPath Interconnect
Intel® QPI is a cache-coherent, link-based interconnect specification for processor, chipset, and
I/O bridge components. Intel® QPI provides support for high-performance I/O transfer between
I/O nodes. It allows connection to standard I/O buses such as PCI Express*, PCI-X, PCI
(including peer-to-peer communication support), AGP, and so on, through appropriate bridges.
Each Intel® QPI link consists of 20 pairs of uni-directional differential lanes for the transmitter
and receiver, plus a differential forwarded clock. A full-width Intel® QPI link uses 84 signals
(20 differential pairs in each direction plus a forwarded differential clock in each direction).
Each Intel® Xeon® Processor 5500 series and Intel® Xeon® Processor 5600 series processor
supports two Intel® QPI links, one going to the other processor and the other to the Intel® 5520 IOH.
In the current implementation, Intel® QPI ports are capable of operating at transfer rates of up to
6.4 GT/s. Intel® QPI ports operate at multiple lane widths (full - 20 lanes, half - 10 lanes, quarter
- 5 lanes) independently in each direction between a pair of devices communicating through
Intel® QPI. The Compute Module supports full width communication only.
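As a rough illustration of what these transfer rates mean in bandwidth terms, the sketch below converts a GT/s link rate into peak payload bandwidth. It assumes the common Intel® QPI convention that 16 of the 20 lanes per direction carry payload bits (2 bytes per transfer); that assumption is not stated in this document.

```python
# Rough peak-bandwidth estimate for an Intel QPI link (illustrative only).
# Assumption (not from this document): 16 of the 20 lanes per direction
# carry payload data, i.e. 2 bytes are transferred per clock edge.

def qpi_peak_bandwidth_gb_per_s(transfer_rate_gt_s: float,
                                payload_bytes_per_transfer: int = 2) -> float:
    """Peak payload bandwidth in GB/s for one direction of a QPI link."""
    return transfer_rate_gt_s * payload_bytes_per_transfer

for rate in (4.8, 5.86, 6.4):  # GT/s values supported by the processors above
    one_way = qpi_peak_bandwidth_gb_per_s(rate)
    print(f"{rate} GT/s -> {one_way:.1f} GB/s per direction, "
          f"{2 * one_way:.1f} GB/s total per link")
```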
The Compute Module complies with Intel’s Unified Retention System (URS) and the Unified
Backplate Assembly. The Compute Module ships with an Independent Loading Mechanism (ILM)
and Unified Backplate assembly at each processor socket.
The URS retention transfers load to the Compute Module through the unified backplate
assembly. The URS spring, captive in the heatsink, provides the necessary compressive load
for the thermal interface material. All components of the URS heatsink solution are captive to
the heatsink and only require a Phillips* screwdriver to attach to the unified backplate assembly.
See the following figure for the stacking order of the URS components.
Figure callouts: Screw; Heatsink Attach Studs; Heatsink; Compression Spring; Retention Cup; Retaining Ring; Thermal Interface Material (TIM); ILM Attach Studs; ILM and Socket; Unified Backplate; Server Board.
Figure 5. Unified Retention System and Unified Backplate Assembly
• Maximum memory capacity of 192 GB with two processors installed
• Use of identical DIMMs in the compute module is recommended
The following configurations are not validated or supported with the Intel® Compute Module MFS5520VI:
• Mixing of RDIMMs and UDIMMs is not supported
• Mixing memory type, size, speed and/or rank on this server board is not validated and is not supported
• Mixing memory vendors is not validated and is not supported on this server board
• Non-ECC memory is not validated and is not supported in a server environment
For a complete list of supported memory for the Intel® Compute Module MFS5520VI, refer to the
Tested Memory List published in the Intel® Server Configurator Tool.
3.2.2 Publishing Compute Module Memory
The BIOS displays the “Total Memory” of the compute module during POST if Display
Logo is disabled in the BIOS setup. This is the total size of memory discovered by the
BIOS during POST, and is the sum of the individual sizes of installed DDR3 DIMMs in
the system.
The BIOS displays the “Effective Memory” of the compute module in the BIOS setup.
The term Effective Memory refers to the total size of all DDR3 DIMMs that are active (not
disabled) and not used as redundant units.
The BIOS provides the total memory of the compute module in the main page of the
BIOS setup. This total is the same as the amount described by the first bullet above.
If Display Logo is disabled, the BIOS displays the total system memory on the diagnostic
screen at the end of POST. This total is the same as the amount described by the first
bullet above.
The compute module Quick Reference Label DIMM slot identifiers provide information
about the channel, and therefore the processor to which they belong. For example,
DIMM_A1 is the first slot on Channel A on processor 1; DIMM_D1 is the first DIMM
socket on Channel D on processor 2.
The memory slots associated with a given processor are unavailable if the given
processor socket is not populated.
A processor may be installed without populating the associated memory slots provided a
second processor is installed with associated memory. In this case, the memory is
shared by the processors. However, the platform suffers performance degradation and
latency due to the remote memory.
Processor sockets are self-contained and autonomous. However, all memory subsystem
support (that is, Memory RAS, Error Management, and so on) in the BIOS setup are
applied commonly across processor sockets.
3.2.4 Memory RAS
3.2.4.1 RAS Features
The Compute Module supports the following memory RAS features:
The memory RAS offered by the Intel® Xeon® Processor 5500 series and Intel® Xeon®
Processor 5600 series processors is done at channel level; that is, during mirroring, channel B
mirrors channel A. All DIMM matching requirements are on a slot-to-slot basis on adjacent
channels. For example, to enable mirroring, corresponding slots on channel A and channel B
must have DIMMs of identical parameters.
If one socket fails the population requirements for RAS, the BIOS sets all six channels to the
Channel Independent mode.
The memory slots of DDR3 channels from the Intel® Xeon® Processor 5500 series and Intel®
Xeon® Processor 5600 series processors should be populated in a farthest-first fashion. This
holds true even in the Channel Independent mode. This means that A2 cannot be
populated/used if A1 is empty.
3.2.4.2 Channel Independent Mode
In the Channel Independent mode, multiple channels can be populated in any order (for
example, channels B and C can be populated while channel A is empty). Therefore, all DIMMs
are enabled and utilized in the Channel Independent mode.
3.2.4.3 Channel Mirroring Mode
The Intel® Xeon® Processor 5500 series and Intel® Xeon® Processor 5600 series support
channel mirroring to configure available channels of DDR3 DIMMs in the mirrored configuration.
The mirrored configuration is a redundant image of the memory, and can continue to operate
despite the presence of sporadic uncorrectable errors.
Channel mirroring is a RAS feature in which two identical images of memory data are
maintained, thus providing maximum redundancy. On Intel® server boards based on the Intel®
Xeon® Processor 5500 series and Intel® Xeon® Processor 5600 series processors, mirroring is
achieved across channels. Active channels hold the primary image and the other channels hold
the secondary image of the system memory. The integrated memory controller in the Intel®
Xeon® Processor 5500 series and Intel® Xeon® Processor 5600 series processors alternates
between both channels for read transactions. Write transactions are issued to both channels
under normal circumstances.
When the system is in the Channel Mirroring mode, channel C and channel F of socket 1 and
socket 2 respectively are not used. Hence, the DIMMs populated on these channels are
disabled and therefore do not contribute to the available physical memory. For example, if the
system is operating in the Channel Mirroring mode and the total size of the DDR3 DIMMs is 1.5
GB (3 x 512 MB DIMMs), then the active memory is only 1 GB.
Because the available system memory is divided into a primary image and a copy of the image,
the effective system memory is reduced by at least one-half. For example, if the system is
operating in the Channel Mirroring mode and the total size of the DDR3 DIMMs is 1 GB, then
the effective size of the memory is 512 MB because half of the DDR3 DIMMs are the
secondary images.
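The arithmetic in the two examples above can be condensed into a small sketch. The helper below is hypothetical (not part of any Intel tool); it assumes that in Channel Mirroring mode the channel C/F DIMMs are disabled first and that the remaining capacity is then halved because one copy is the redundant image.

```python
# Hypothetical helper illustrating the mirroring arithmetic described above.
# dimm_sizes_mb maps a channel letter to the list of DIMM sizes (MB) on it.

def mirrored_memory_mb(dimm_sizes_mb: dict[str, list[int]]) -> tuple[int, int]:
    """Return (active_mb, effective_mb) for Channel Mirroring mode."""
    disabled_channels = {"C", "F"}           # not used in mirroring mode
    active = sum(size
                 for channel, dimms in dimm_sizes_mb.items()
                 if channel.upper() not in disabled_channels
                 for size in dimms)
    return active, active // 2               # half of the active memory is the mirror copy

# Example from the text: 3 x 512 MB DIMMs on channels A, B and C.
active, effective = mirrored_memory_mb({"A": [512], "B": [512], "C": [512]})
print(active, effective)                     # 1024 MB active, 512 MB effective
```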
For channel mirroring to work, participant DDR3 DIMMs on the same DIMM slots on the
adjacent channels must be identical in terms of technology, number of ranks, and size.
The BIOS setup provides an option to enable mirroring if the current DIMM population is valid
for channel mirroring. When memory mirroring is enabled, the BIOS attempts to configure the
memory system accordingly. If the BIOS finds that the DIMM population is not suitable for
mirroring, it falls back to the default Channel Independent mode with maximum
memory interleaving.
3.2.4.3.1 Minimum DDR3 DIMM Population for Channel Mirroring
Memory mirroring has the following minimum requirements:
• Channel configuration: Mirroring requires the first two adjacent channels to be active.
• Socket configuration: Mirroring requires that both socket 1 and socket 2 DIMM population meets the requirements for mirroring mode. The platform BIOS configures the system in mirroring mode only if both nodes qualify. The only exception to this rule is socket 2 with all empty DIMM slots.
As a direct consequence of these requirements, the minimal DIMM population is {A1, B1}. In
this configuration, processor cores on socket 2 suffer memory latency due to usage of remote
memory from socket 1. An optimal DIMM population for channel mirroring in a DP server
platform is {A1, B1, D1, E1}. {A1, B1} must be identical and {D1, E1} must be identical.
In this configuration, DIMMs {A1, B1} and {D1, E1} operate as (primary copy, secondary copy)
pairs independent from each other. Therefore, the optimal number of DDR3 DIMMs for channel
mirroring is a multiple of four, arranged as mentioned above. The BIOS disables all non-identical
DDR3 DIMMs or pairs of DDR3 DIMMs across the channels to achieve symmetry and
balance between the channels.
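The population rules above can be checked mechanically. The sketch below is a hypothetical validation helper (not an Intel utility) that encodes only the rules stated in this section: adjacent channels A/B must carry identical DIMMs slot-for-slot, channels D/E must do the same when socket 2 is populated, and an entirely empty socket 2 is the one allowed exception.

```python
# Hypothetical check of the minimum channel-mirroring population rules above.
# population maps a DIMM slot name (for example "A1") to a DIMM descriptor;
# identical descriptors stand for DIMMs of identical technology, rank count, and size.

def mirroring_population_valid(population: dict[str, tuple]) -> bool:
    def slots(channel: str) -> dict:
        return {name[1:]: dimm for name, dimm in population.items()
                if name.upper().startswith(channel)}

    def channels_match(first: str, second: str) -> bool:
        a, b = slots(first), slots(second)
        return bool(a) and a == b            # non-empty and identical slot-for-slot

    socket2_empty = not (slots("D") or slots("E") or slots("F"))
    return channels_match("A", "B") and (socket2_empty or channels_match("D", "E"))

dimm = ("DDR3 RDIMM", 2, 4096)               # (technology, ranks, size in MB)
print(mirroring_population_valid({"A1": dimm, "B1": dimm}))                          # True
print(mirroring_population_valid({"A1": dimm, "B1": dimm, "D1": dimm}))              # False
print(mirroring_population_valid({"A1": dimm, "B1": dimm, "D1": dimm, "E1": dimm}))  # True
```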
3.2.4.3.2 Mirroring DIMM Population Rules Variance across Nodes
Memory mirroring in Intel
®
Xeon® Processor 5500 series and Intel® Xeon® Processor 5600
series processors based platforms is channel mirroring. Mirroring is not done across sockets, so
each socket may have a different memory configuration. Channel mirroring in socket 1 and
socket 2 are mutually independent. As a result, if channel A and channel B have identical DIMM
population, and if channel D and channel E have identical DIMM population, then mirroring
is possible.
For example, if the system is populated with six DIMMS {A1, B1, A2, B2, D1, E1}, channel
mirroring is possible. Both the populations shown in the following table are valid.
Table 3. Mirroring DIMM Population Rules Variance across Nodes
Populated DIMM Slots | Mirroring Possible?
A1, B1, D1, E1 | Yes
A1, A2, B1, B2, D1, E1 | Yes
3.2.5 Memory Upgrade Rules
Upgrading the system memory requires careful positioning of the DDR3 DIMMs based on the
following factors:
Current RAS mode of operation
Existing DDR3 DIMM population
DDR3 DIMM characteristics
Optimization techniques used by the Intel
Xeon
®
Processor 5600 series to maximize memory bandwidth
In the Channel Independent mode, all DDR3 channels operate independently. The Channel
Independent mode can also be used to support a single DIMM configuration in channel A and in
the single channel mode.
®
Xeon® Processor 5500 series and Intel®
The following general rules must be observed when selecting and configuring memory to obtain
the best performance from the system.
Mixing RDIMMs and UDIMMs is not supported.
Mixing memory type, size, speed, rank and/or vendors in the compute module is
not supported.
Non-ECC memory is not validated and is not supported in a server environment.
Use of identical DIMMs in the compute module is recommended.
If an installed DDR3 DIMM has faulty or incompatible SPD data, it is ignored during
memory initialization and is (essentially) disabled by the BIOS. If a DDR3 DIMM has no
or missing SPD information, the slot in which it is placed is treated as empty by
the BIOS.
When CPU Socket 1 is empty, any DIMM memory in Channel A through Channel C
is unavailable.
When CPU Socket 2 is empty, any DIMM memory in Channel D through Channel F
is unavailable.
If both processor sockets are populated but Channel A through Channel C is empty, the
platform can still function with remote memory in Channel D through Channel F.
However, platform performance suffers latency due to remote memory.
The memory operational mode is configurable at the channel level. Two modes are
supported: Independent Channel and Mirrored Channel.
The memory slots of each DDR3 channel from the Intel® Xeon® Processor 5500 series
and Intel® Xeon® Processor 5600 series are populated in a farthest-first fashion. This
holds true even for the Independent Channel mode. Therefore, if A1 is empty, A2 cannot
be populated or used.
The BIOS selects Independent Channel mode by default, which enables all installed
memory on all channels simultaneously.
Mirrored Channel mode is not available when only one processor is populated (CPU
Socket 1).
If both processor sockets are populated and the installed DIMMs are associated with
both processor sockets, then a given RAS mode is selected only if both the processor
sockets are populated to conform to that mode.
The minimum memory population possible is one DIMM in slot A1. In this configuration,
the system operates in the Independent Channel mode. RAS is not available.
If both processor sockets are populated, the next upgrade from the Single Channel
mode installs DIMM_D1. This configuration results in an optimal memory thermal
spread, as well as Non-Uniform Memory Architecture (NUMA) aware interleaving. The
BIOS selects the Independent Channel mode of operation.
If only one processor socket is populated, the next upgrade from the Single Channel
mode is installing DIMM_B1 to allow channel interleaving. The system operates in the
Independent Channel mode.
The DIMM parameter-matching requirements for memory RAS is local to a socket. For
example, while Channels A/B/C can have one match of timing, technology, and size,
Channels D/E/F can have a different set of parameters and RAS still functions.
For the Mirrored Channel mode, the memory in Channels A and B of Socket 1 must be
identical and Channel C should be empty. Similarly, the memory in Channels D and E of
Socket 2 must be identical and Channel F should be empty.
a. The minimum population upgrade for the Mirrored Channel mode is DIMM_A1,
DIMM_B1, DIMM_D1, and DIMM_E1 with both processor sockets populated.
DIMM_A1 and DIMM_B1 as a pair must be identical, and so must DIMM_D1 and
DIMM_E1. Failing to comply with these rules results in a switch back to the
Independent Channel mode.
b. If Mirrored Channel mode is selected and the third channel of each processor socket
is not empty, the BIOS disables the memory in the third channel of each processor
socket.
In the Mirrored Channel mode, both sockets must simultaneously satisfy the DIMM
matching rules on their respective adjacent channels. If the DDR3 DIMMs on adjacent
channels of a socket are not identical, the BIOS configures both of the processor
sockets to default to the Independent Channel mode. If DIMM_D1 and DIMM_E1 are not
identical, then the system switches to the Independent Channel Mode.
Note: Mixed memory size, type, speed, rank and/or vendor is not validated or supported
with the Intel® Compute Module MFS5520VI. Refer to section 3.2.1.1 for supported and
unsupported memory features and configuration information.
The primary I/O buses for the Intel® Compute Module MFS5520VI are PCI Express* Gen1 and
PCI Express* Gen2 with six independent PCI bus segments.
PCI Express* Gen1 and Gen2 are dual-simplex, point-to-point serial differential low-voltage
interconnects. A PCI Express* topology can contain a host bridge and several endpoints (I/O
devices). The signaling bit rate is 2.5 Gbit/s in one direction per lane for Gen1 and 5.0 Gbit/s in
one direction per lane for Gen2. Each port consists of a transmitter and receiver pair. A link between
the ports of two devices is a collection of lanes (x1, x2, x4, x8, x16, and so on). All lanes within
a port must transmit data using the same frequency.
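The per-segment speeds listed in Table 4 below follow directly from these per-lane signaling rates. The short sketch below reproduces that arithmetic; note it computes raw signaling throughput per direction, before the 8b/10b encoding overhead used by Gen1/Gen2 (a property of PCI Express, not stated in this document) is subtracted.

```python
# Raw per-direction link throughput for PCI Express Gen1/Gen2, as used in Table 4.
# Usable data bandwidth is lower because Gen1/Gen2 use 8b/10b encoding.

GEN_RATE_GBPS = {"Gen1": 2.5, "Gen2": 5.0}   # Gbit/s per lane, one direction

def link_throughput_gbps(generation: str, lanes: int) -> float:
    return GEN_RATE_GBPS[generation] * lanes

print(link_throughput_gbps("Gen1", 4))   # 10.0 -> matches the x4 Gen1 rows
print(link_throughput_gbps("Gen1", 1))   # 2.5  -> matches the x1 Gen1 row
print(link_throughput_gbps("Gen2", 8))   # 40.0 -> matches the x8 Gen2 rows
```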
The following table lists the characteristics of each PCI bus segment.
Table 4. Intel® Compute Module MFS5520VI PCI Bus Segment Characteristics
PCI Bus Segment | Voltage | Width | Speed | Type | PCI I/O Card Slots
ESI or DMI Port 0 (ICH10R) | 3.3 V | x4 | 10 Gb/s | PCI Express* Gen1 | x4 PCI Express* Gen1 throughput to the Intel® 5520 Chipset IOH
Port 5 (ICH10R) | 3.3 V | x1 | 2.5 Gb/s | PCI Express* Gen1 | x1 PCI Express* Gen1 throughput to the on-board Integrated BMC
PE1, PE2 (Intel® 5520 Chipset IOH PCI Express*) | 3.3 V | x4 | 10 Gb/s | PCI Express* Gen1 | x4 PCI Express* Gen1 throughput to the on-board NIC
PE3, PE4 (Intel® 5520 Chipset IOH PCI Express*) | 3.3 V | x8 | 40 Gb/s | PCI Express* Gen2 | x8 PCI Express* Gen2 throughput – Not used
PE5, PE6 (Intel® 5520 Chipset IOH PCI Express*) | 3.3 V | x8 | 40 Gb/s | PCI Express* Gen2 | Two x4 PCI Express* Gen2 throughput – Not used
PE7, PE8 (Intel® 5520 Chipset IOH PCI Express*) | 3.3 V | x8 | 40 Gb/s | PCI Express* Gen2 | x8 PCI Express* Gen2 throughput to the on-board LSI* 1064E
PE9, PE10 (Intel® 5520 Chipset IOH PCI Express*) | 3.3 V | x8 | 40 Gb/s | PCI Express* Gen2 | Two x4 PCI Express* Gen2 throughput to the I/O Module Mezzanine connectors
3.4.2 USB 2.0 Support
The USB controller functionality integrated into ICH10R provides the Compute Module with an
interface for up to ten USB 2.0 ports. All ports are high-speed, full-speed, and
low-speed capable.
Four external connectors are located on the front of the compute module.
One internal 2x5 header is provided, capable of supporting a low-profile USB solid
state drive.
Two ports are routed to the Integrated BMC to support rKVM.
3.5 Integrated Baseboard Management Controller
The ServerEngines* LLC Pilot II Integrated BMC has an embedded ARM9 controller and
associated peripheral functionality that is required for IPMI-based server management.
Firmware usage of these hardware features is platform dependent.
The following is a summary of the integrated BMC management hardware features found in the
ServerEngines* LLC Pilot II Integrated BMC:
• IPMI 2.0 Compliant
• Integrated 250 MHz 32-bit ARM9 processor
• Six I2C SMBus modules with Master-Slave support
• Two independent 10/100 Ethernet Controllers with RMII support
• Memory Management Unit (MMU)
• DDR2 16-bit memory interface, up to 667 MHz
• Dedicated real-time clock for Integrated BMC
• Up to 16 direct and 64 Serial GPIO ports
• Twelve 10-bit Analog to Digital Converters
• Eight Fan Tachometer Inputs
• Four Pulse Width Modulators (PWM)
• JTAG Master interface
• Watchdog timer
Additionally, the ServerEngines* Pilot II component integrates a super I/O module with the
following features:
• Keyboard Style/BT Interface
• 16C550-compatible serial ports
• Serial IRQ support
• 16 GPIO ports (shared with Integrated BMC)
• LPC to SPI Bridge for system BIOS support
• SMI and PME support
• ACPI compliant
• Wake-up control
The Pilot II contains an integrated KVMS subsystem and graphics controller with the
following features:
• USB 2.0 for keyboard, mouse, and storage devices
• Hardware Video Compression for text and graphics
The Compute Module does not support a floppy disk controller interface. However, the compute
module BIOS recognizes USB floppy devices.
3.5.2 Keyboard and Mouse Support
The Compute Module does not support PS/2 interface keyboards and mice. However, the
compute module BIOS recognizes USB specification-compliant keyboards and mice.
3.5.3 Wake-up Control
The super I/O contains functionality that allows various events to power on and power off
the system.
3.6 Video Support
The Compute Module includes a video controller in the on-board ServerEngines* Integrated
Baseboard Management Controller along with 64 MB of video DDR2 SDRAM. The SVGA
subsystem supports a variety of modes, up to 1600 x 1200 resolution in 8/16 bpp modes under
2D. It also supports both CRT and LCD monitors up to a 100 Hz vertical refresh rate.
The video is accessed using a standard 15-pin VGA connector found on the front panel of the
compute module.
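A quick calculation shows why the quoted 2D modes fit comfortably in the 64 MB of video memory. The sketch below computes the framebuffer size for the maximum mode mentioned above; it assumes a single framebuffer with no additional overhead, which is an illustrative simplification.

```python
# Framebuffer size for a given resolution and color depth, compared with the
# 64 MB of video DDR2 SDRAM described above (single buffer, no overhead).

def framebuffer_mb(width: int, height: int, bits_per_pixel: int) -> float:
    return width * height * bits_per_pixel / 8 / (1024 * 1024)

for bpp in (8, 16):
    size = framebuffer_mb(1600, 1200, bpp)
    print(f"1600x1200 @ {bpp} bpp -> {size:.2f} MB of the 64 MB video memory")
```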
3.6.1 Video Modes
The integrated video controller supports all standard IBM VGA modes. The following table
shows the 2D modes supported for both CRT and LCD.
Network interface support is provided from the on-board Intel® 82575EB NIC, which is a single,
compact component with two fully integrated GbE Media Access Control (MAC) and Physical
Layer (PHY) ports. The on-board Intel® 82575EB NIC provides the Compute Module with
support for dual LAN ports designed for 1000 Mbps operation.
The Intel® 82575EB device provides two standard IEEE 802.3 Ethernet interfaces through its
SERDES interfaces. Each network interface controller (NIC) drives two LEDs located on the
front panel. The LED indicates transmit/receive activity when blinking.
3.7.1 Direct Cache Access (DCA)
Direct Cache Access (DCA) is a component of Intel® I/O Acceleration Technology 2 (Intel®
I/OAT2). The DCA mechanism is a system-level protocol in a multi-processor system to improve
I/O network performance thereby providing higher system performance. The basic idea is to
minimize cache misses when a demand read is executed. This is accomplished by placing the
data from the I/O devices directly into the CPU cache through hints to the processor to perform
a data pre-fetch and install it in its local caches. The Intel® Xeon® Processor 5500 series and
Intel® Xeon® Processor 5600 series processors support Direct Cache Access (DCA). DCA can
be enabled or disabled in the BIOS processor setup menu.
3.8 Intel® Virtualization Technology for Directed I/O (Intel® VT-d)
The Intel® Virtualization Technology is designed to support multiple software environments
sharing the same hardware resources. Each software environment may consist of an OS and
applications. The Intel® Virtualization Technology can be enabled or disabled in the BIOS setup.
The default behavior is disabled.
Note: If the setup options are changed to enable or disable the Virtualization Technology setting
in the processor, the user must perform an AC power cycle for the changes to take effect.
With Intel® VT-d, DMA addresses are translated from a Guest Physical Address (GPA) to a Host
Physical Address (HPA). PCI devices are directly assigned to a virtual machine, leading to a
robust and efficient virtualization.
4. Connector/Header Locations and Pin-outs
4.1 Board Connector Information
The following section provides detailed information regarding all connectors, headers, and
jumpers on the compute module. The following table lists all connector types available on the
board and the corresponding reference designators printed on the silkscreen.
Table 6. Board Connector Matrix
Connector | Quantity | Reference Designators
Power Connector | 1 | J1K1
Midplane Signal Connector | 1 | J1H1
CPU | 2 | CPU1 (U2D2), CPU2 (U7C1)
Main Memory | 12 | J4A1, JFA2, J4B1, J4B2, J4B3, J4C1, J5E1, J5E2, J5E3, J5F2, J5F3, J5F4
I/O Mezzanine | 2 | J3K1, J1J1
Battery | 1 | BT9H1
USB | 2 | J9F1, J9G1
Serial Port A | 1 | J9J1
Video connector | 1 | J9E1
Mini USB connector | 1 | J9B7
4.2 Power Connectors
The power connection is obtained using a 2x2 FCI Airmax* power connector. The following
table defines the power connector pin-out.
Table 7. Power Connector Pin-out (J1K1)
Position Signal
1 +12 Vdc
2 GND
3 GND
4 +12 Vdc
4.3 I/O Connector Pin-out Definition
4.3.1 VGA Connector
The following table details the pin-out definition of the VGA connector (J6K1).
Table 8. VGA Connector Pin-out (J6A1)
Pin Signal Name Description
1 V_IO_R_CONN Red (analog color signal R)
2 V_IO_G_CONN Green (analog color signal G)
3 V_IO_B_CONN Blue (analog color signal B)
4 TP_VID_CONN_B4 No connection
5 GND Ground
6 GND Ground
7 GND Ground
8 GND Ground
9 P5V_VID_CONN_9 P5V
10 GND Ground
11 TP_VID_CONN_B11 No connection
12 V_IO_DDCDAT DDCDAT
13 V_IO_HSYNC_CONN HSYNC (horizontal sync)
14 V_IO_VSYNC_CONN VSYNC (vertical sync)
15 V_IO_DDCCLK DDCCLK
4.3.2 I/O Mezzanine Card Connector
The compute module provides an internal 120-pin Tyco dual-row receptacle (J3K1) and a Tyco
40-pin dual-row receptacle (J1J1) to accommodate high-speed I/O expansion modules, which
expands the I/O capabilities of the compute module. The following table details the pin-out of
the Intel® I/O expansion module connector.
Signal Name | Connector Location | Signal Name | Connector Location
TP | 1 | GND | 2
RMII_IBMC_IOMEZZ_CRS_DV | 3 | XE_B1_TXP | 4
GND | 5 | XE_B1_TXN | 6
XE_B1_RXP | 7 | GND | 8
XE_B1_RXN | 9 | GND | 10
GND | 11 | XE_B2_TXP | 12
GND | 13 | XE_B2_TXN | 14
XE_B2_RXP | 15 | GND | 16
XE_B2_RXN | 17 | GND | 18
GND | 19 | XE_D2_TXP | 20
GND | 21 | XE_D2_TXN | 22
XE_D1_RXP | 23 | GND | 24
XE_D1_RXN | 25 | GND | 26
GND | 27 | XE_D1_TXP | 28
GND | 29 | XE_D1_TXN | 30
XE_D2_RXP | 31 | GND | 32
XE_D2_RXN | 33 | RMII_IBMC_IOMEZZ_TX_EN | 34
GND | 35 | RMII_IBMC_IOMEZZ_TXD1 | 36
RMII_IBMC_IOMEZZ_RXD1 | 37 | RMII_IBMC_IOMEZZ_TXD0 | 38
RMII_IBMC_IOMEZZ_RXD0 | 39 | CLK_IOMEZZ_RMII | 40
4.3.3 Midplane Signal Connector
The compute module connects to the midplane through a 96-pin Airmax* connector (J1H1)
(power is J1K1) to connect the various I/O, management, and control signals of the system.
Table 12. 96-pin Midplane Signal Connector Pin-out
Pin Signal Name Pin Signal Name Pin Signal Name
A1 XE_P1_A_RXP E1 XE_P2_D_RXN I1 GND
A2 GND E2 XE_P2_D_TXP I2 SAS_P1_TXN
A3 XE_P1_B_RXP E3 SMB_SDA_B I3 GND
A4 GND E4 FM_BL_X_SP I4 XE_P2_C_TXN
A5 XE_P1_C_RXP E5 XE_P2_B_RXN I5 GND
A6 GND E6 XE_P2_B_TXP I6 SAS_P2_TXN
A7 XE_P1_D_RXP E7 XE_P2_A_RXN I7 GND
A8 GND E8 XE_P2_A_TXP I8 FM_BL_SLOT_ID5
B1 XE_P1_A_RXN F1 GND J1 SMB_SCL_A
B2 XE_P1_A_TXP F2 XE_P2_D_TXN J2 GND
B3 XE_P1_B_RXN F3 GND J3 FM_BL_SLOT_ID2
B4 XE_P1_B_TXP F4 12V (BL_PWR_ON) J4 GND
B5 XE_P1_C_RXN F5 GND J5 reserved
B6 XE_P1_C_TXP F6 XE_P2_B_TXN J6 GND
B7 XE_P1_D_RXN F7 GND J7 reserved
B8 XE_P1_D_TXP F8 XE_P2_A_TXN J8 GND
C1 GND G1 SAS_P1_RXP K1 SMB_SDA_A
C2 XE_P1_A_TXN G2 GND K2 FM_BL_SLOT_ID0
C3 GND G3 XE_P2_C_RXP K3 FM_BL_SLOT_ID3
C4 XE_P1_B_TXN G4 GND K4 FM_BL_SLOT_ID4
C5 GND G5 SAS_P2_RXP K5 reserved
C6 XE_P1_C_TXN G6 GND K6 reserved
C7 GND G7 spare K7 reserved
C8 XE_P1_D_TXN G8 GND K8 reserved
D1 XE_P2_D_RXP H1 SAS_P1_RXN L1 GND
D2 GND H2 SAS_P1_TXP L2 FM_BL_SLOT_ID1
D3 SMB_SCL_B H3 XE_P2_C_RXN L3 GND
D4 GND H4 XE_P2_C_TXP L4 FM_BL_PRES_N
D5 XE_P2_B_RXP H5 SAS_P2_RXN L5 GND
D6 GND H6 SAS_P2_TXP L6 reserved
D7 XE_P2_A_RXP H7 spare L7 GND
D8 GND H8 spare L8 reserved
4.3.4 Serial Port Connector
The compute module provides one internal 9-pin Serial port header (J9J1). The following table
defines the pin-out.
Table 13. Internal 9-pin Serial Header Pin-out (J9J1)
Pin Signal Name Description
1 SPA_DCD DCD (carrier detect)
2 SPA_DSR DSR (data set ready)
3 SPA_SIN_L RXD (receive data)
4 SPA_RTS RTS (request to send)
5 SPA_SOUT_N TXD (transmit data)
6 SPA_CTS CTS (clear to send)
7 SPA_DTR DTR (data terminal ready)
8 SPA_RI RI (ring Indicate)
9 GND Ground
4.3.5 USB 2.0 Connectors
The following table details the pin-out of the external USB connectors (J4K1, J4K2) found on the
front edge of the compute module.
Table 14. External USB Connector Pin-out
Pin Signal Name Description
1 +5V USB_PWR
2 USB_N Differential data line paired with DATAH0
3 USB_P Differential data line paired with DATAL0
4 GND Ground
One low-profile 2x5 connector (J9B7) on the compute module provides an option to support a
low-profile Intel® Z-U130 Value Solid State Drive. The pin-out of the connector is detailed in the
following table.
Table 15. Pin-out of Internal USB Connector for low-profile Solid State Drive (J9B7)
The server board has several 3-pin jumper blocks that can be used to configure, protect, or
recover specific features of the server board. Pin 1 on each jumper block is denoted by
an “*” or “▼”.
BMC Firmware Force Update (J9A5):
1-2: BMC Firmware Force Update Mode – Disabled (Default)
2-3: BMC Firmware Force Update Mode – Enabled

Password Clear (J9A3):
1-2: These pins should have a jumper in place for normal operation. (Default)
2-3: If these pins are jumpered, the administrator and user passwords are cleared immediately. These pins should not be jumpered for normal operation.

CMOS Clear (J9A4):
1-2: These pins should have a jumper in place for normal operation. (Default)
2-3: If these pins are jumpered, the CMOS settings are cleared on the next boot. These pins should not be jumpered for normal operation.

BIOS Select:
1-2: These pins should have a jumper in place for normal operation. (Default)
2-3: If these pins are jumpered, the compute module boots from the emergency BIOS image. These pins should not be jumpered for normal operation.
5.1.1 CMOS Clear and Password Clear Usage Procedure
The CMOS Clear (J9A4) and Password Clear (J9A3) recovery features are designed such that
the desired operation can be achieved with minimal system downtime. The usage procedure for
these two features has changed from previous generation Intel® server boards. The following
procedure outlines the new usage model.
1. Power down the compute module.
2. Remove the compute module from the modular server chassis.
3. Open the compute module.
4. Move jumper from the default operating position (pins 1-2) to the Clear position
(pins 2-3).
5. Wait 5 seconds.
6. Move jumper back to the default position (pins 1-2).
7. Close the compute module.
8. Reinstall the compute module in the modular server chassis.
9. Power up the compute module.
Password and/or CMOS are now cleared and can be reset by going into the BIOS setup.
5.1.2 Integrated BMC Force Update Procedure
When performing a standard Integrated BMC firmware update procedure, the update utility
places the Integrated BMC into an update mode, allowing the firmware to load safely onto the
flash device. In the unlikely event that the Integrated BMC firmware update process fails due to
the Integrated BMC not being in the proper update state, the server board provides a BMC
Force Update jumper (J9A5), which will force the Integrated BMC into the proper update state.
The following procedure should be followed in the event the standard Integrated BMC firmware
update process fails.
1. Power down the compute module.
2. Remove the compute module from the modular server chassis.
9. Remove the compute module from the server system.
10. Move jumper from the “Enabled” position (pins 2-3) to the “Disabled” position (pins 1-2).
11. Close the compute module.
12. Reinstall the compute module into the modular server chassis.
13. Power up the compute module.
Note: Normal Integrated BMC functionality (for example, KVM, monitoring, and remote media)
is disabled with the force BMC update jumper set to the “Enabled” position. The server should
never be run with the BMC force update jumper set in this position and should only be used
when the standard firmware update process fails. This jumper should remain in the default –
disabled position when the server is running normally.
5.1.3 Integrated BMC Initialization
When DC power is first applied to the compute module by installing it into a chassis and 5V
standby (5VSTBY) is present, the Integrated BMC on the compute module requires 15-30 seconds to
initialize. During this time, the power button functionality of the control panel is disabled,
preventing the compute module from powering up.
The Intel® Compute Module MFS5520VI is evaluated as part of the Intel® Modular Server
System MFSYS25/MFSYS25V2/MFSYS35, which requires meeting all applicable system
component regulatory requirements. Refer to the Intel® Modular Server System Technical
Product Specification for a complete listing of all system and component regulatory
requirements.
6.2 Product Regulatory Compliance and Safety Markings
No markings are required on the Intel® Compute Module MFS5520VI itself as it is evaluated as
part of the Intel® Modular Server System MFSYS25/MFSYS25V2/MFSYS35.
6.3 Product Environmental/Ecology Requirements
The Intel® Compute Module MFS5520VI is evaluated as part of the Intel® Modular Server
System MFSYS25/MFSYS25V2/MFSYS35, which requires meeting all applicable system
component environmental and ecology requirements. For a complete listing of all system and
component environment and ecology requirements and markings, refer to the Intel® Modular
Server System Technical Product Specification.
When two processors are installed, both must be of identical revision, core voltage, and
bus/core speed. Mixed processor steppings are supported as long as they are listed in
the processor specification updates published by Intel Corporation. However, the
stepping of one processor cannot be greater than one stepping back of the other.
Only Intel® Xeon® Processor 5500 series and Intel® Xeon® Processor 5600 series processors with
95 W and less Thermal Design Power (TDP) are supported on this compute module.
Previous generations of the Intel® Xeon® processors are not supported. Intel® Xeon®
Processor 5500 series and Intel® Xeon® Processor 5600 series processors with TDP higher than 95
W are also not supported.
Processors must be installed in order. CPU 1 is located near the edge of the compute
module and must be populated to operate the board.
Only registered DDR3 DIMMs (RDIMMs) and unbuffered DDR3 DIMMs (UDIMMs) are
supported on this compute module. Mixing of RDIMMs and UDIMMs is not supported.
Mixing memory type, size, speed, rank and/or memory vendors is not validated and is
not supported on this server board.
Non-ECC memory is not validated and is not supported in a server environment
For the best performance, the number of DDR3 DIMMs installed should be balanced
across both processor sockets and memory channels. For example, a two-DIMM
configuration performs better than a one-DIMM configuration. In a two-DIMM
configuration, DIMMs should be installed in DIMM sockets A1 and D1. A six-DIMM
configuration (DIMM sockets A1, B1, C1, D1, E1, and F1) performs better than a three-DIMM configuration (DIMM sockets A1, B1, and C1).
For a list of Intel supported operating systems, add-in cards, and peripherals for this
server board, see the Intel® MFS5000SI/MFS5520VI Tested Hardware and Operating System List.
Normal Integrated BMC functionality (for example, KVM, monitoring, and remote media)
is disabled with the force BMC update jumper set to the “enabled” position (pins 2-3).
The compute module should never be run with the BMC force update jumper set in this
position and should only be used when the standard firmware update process fails. This
jumper should remain in the default (disabled) position (pins 1-2) when the compute
module is running normally.
When performing the BIOS update procedure, the BIOS select jumper must be set to its
default position (pins 1-2).
This appendix lists the sensor identification numbers and information regarding the sensor type,
name, supported thresholds, and a brief description of the sensor purpose. See the Intelligent Platform Management Interface Specification, Version 2.0, for sensor and event/reading-type
table information.
Sensor Type
The Sensor Type references the values enumerated in the Sensor Type Codes table in the IPMI
Specification. It provides the context in which to interpret the sensor, such as the physical entity
or characteristic that is represented by this sensor.
Event/Reading Type
The Event/Reading Type references values from the Event/Reading Type Code Ranges and
Generic Event/Reading Type Codes tables in the IPMI Specification. Note that digital sensors
are specific type of discrete sensors, which have only two states.
Event Offset Triggers
This column defines what event offsets the sensor generates.
For Threshold (analog reading) type sensors, the Integrated BMC can generate events for the
following thresholds:
The abbreviation [U, L] is used to indicate that both Upper and Lower thresholds are supported.
A few sensors support only a subset of the standard four threshold triggers. Note that even if a
sensor does support all thresholds, the SDRs may not contain values for some thresholds.
For Digital and Discrete type sensor event triggers, the supported event generating offsets are
listed. The offsets can be found in the Generic Event/Reading Type Codes or Sensor Type Codes tables in the IPMI Specification, depending on whether the sensor event/reading type is
a generic or sensor-specific response.
All sensors generate both assertions and deassertions of the defined event triggers. The
assertions and deassertions may or may not generate events into the System Event Log (SEL),
depending on the sensor SDR settings.
This column indicates whether an assertion of an event lights the front panel fault LED. The
Integrated BMC aggregates all fault sources (including outside sources such as the BIOS) such
that the LED will be lit as long as any source indicates that a fault state exists. The Integrated
BMC extinguishes the fault LED when all sources indicate no faults are present.
Sensor Rearm
The rearm is a request for the event status for a sensor to be rechecked and updated upon a
transition between good and bad states. Rearming the sensors can be done manually or
automatically. The following abbreviations are used in the column:
‘A’: Auto rearm
‘M’: Manual rearm
Readable
Some sensors are used simply to generate events into the System Event Log. The Watchdog
timer sensor is one example. These sensors operate by asserting and then immediately deasserting an event. Typically the SDRs for such sensors are defined such that only the assertion
causes an event message to be deposited in the SEL. Reading such a sensor produces no
useful information and is marked as ‘No’ in this column. Note that some sensors may
actually be
unreadable in that they return an error code in response to the IPMI Get Sensor Reading
command. These sensors are represented by type 3 SDR records.
Standby
Some sensors operate on standby power. These sensors may be accessed and/or generate
events when the compute module payload power is off, but standby power is present.
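For context, sensors of this kind are typically inspected with a standard IPMI client. The sketch below shells out to the open-source ipmitool utility, which is an assumption on my part (it is not named in this document), to dump the SDR entries and their current readings over an in-band interface.

```python
import subprocess

# Dump sensor names, status, and readings through the open-source ipmitool
# utility (assumed to be installed, with in-band access to the Integrated BMC).
def list_sensor_readings() -> None:
    result = subprocess.run(["ipmitool", "sdr", "elist"],
                            capture_output=True, text=True, check=True)
    for line in result.stdout.splitlines():
        # Typical ipmitool output line (illustrative):
        # "<sensor name> | <sdr id> | <status> | <entity> | <reading>"
        fields = [field.strip() for field in line.split("|")]
        if len(fields) == 5:
            name, _sdr_id, status, _entity, reading = fields
            print(f"{name:<24} {status:<8} {reading}")

if __name__ == "__main__":
    list_sensor_readings()
```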
Appendix C: POST Error Messages and Handling
Whenever possible, the BIOS outputs the current boot progress codes on the video screen.
Progress codes are 32-bit quantities plus optional data. The 32-bit numbers include class,
subclass, and operation information. The class and subclass fields point to the type of hardware
that is being initialized. The operation field represents the specific initialization activity. Based on
the data bit availability to display progress codes, a progress code can be customized to fit the
data width. The higher the data bit, the higher the granularity of information that can be sent on
the progress port. The progress codes may be reported by the system BIOS or option ROMs.
The Response section in the following table is divided into three types:
Minor: The message is displayed on the screen or in the Error Manager screen. The
system will continue booting with a degraded state. The user may want to replace the
erroneous unit. The setup POST error Pause setting does not have any effect with this
error.
Major: The message is displayed on the Error Manager screen and an error is logged to
the SEL. The setup POST error Pause setting determines whether the system pauses to
the Error Manager for this type of error, where the user can take immediate corrective
action or choose to continue booting.
Fatal: The message is displayed on the Error Manager screen, an error is logged to the
SEL, and the system cannot boot unless the error is resolved. The user needs to replace
the faulty part and restart the system. The setup POST error Pause setting does not
have any effect with this error.
Table 18. POST Error Messages and Handling
Error Code Error Message Response
0012 CMOS date/time not set Major
0048 Password check failed Major
0108 Keyboard component encountered a locked error. Minor
0109 Keyboard component encountered a stuck key error. Minor
0113 Fixed Media. The SAS RAID firmware cannot run properly. The user should attempt to reflash the firmware. Major
0140 PCI component encountered a PERR error. Major
0195 Processor 0x Intel(R) QPI speed mismatch. Major
0196 Processor 0x model mismatch. Fatal
0197 Processor 0x speeds mismatched. Fatal
0198 Processor 0x family is not supported. Fatal
019F Processor and chipset stepping configuration is unsupported. Major
5220 CMOS/NVRAM Configuration Cleared Major
5221 Passwords cleared by jumper Major
5224 Password clear Jumper is Set. Major
8160 Processor 01 unable to apply microcode update Major
8161 Processor 02 unable to apply microcode update Major
8180 Processor 0x microcode update not found. Minor
8190 Watchdog timer failed on last boot Major
8198 OS boot watchdog timer failure. Major
8300 Baseboard management controller failed self-test Major
84F2 Baseboard management controller failed to respond Major
84F3 Baseboard management controller in update mode Major
84F4 Sensor data record empty Major
84FF System event log full Minor
8500 Memory component could not be configured in the selected RAS mode. Major
8520 DIMM_A1 failed Self Test (BIST). Major
8521 DIMM_A2 failed Self Test (BIST). Major
8522 DIMM_B1 failed Self Test (BIST). Major
8523 DIMM_B2 failed Self Test (BIST). Major
8524 DIMM_C1 failed Self Test (BIST). Major
8525 DIMM_C2 failed Self Test (BIST). Major
8526 DIMM_D1 failed Self Test (BIST). Major
8527 DIMM_D2 failed Self Test (BIST). Major
8528 DIMM_E1 failed Self Test (BIST). Major
8529 DIMM_E2 failed Self Test (BIST). Major
852A DIMM_F1 failed Self Test (BIST). Major
852B DIMM_F2 failed Self Test (BIST). Major
8540 DIMM_A1 Disabled. Major
8541 DIMM_A2 Disabled. Major
8542 DIMM_B1 Disabled. Major
8543 DIMM_B2 Disabled. Major
8544 DIMM_C1 Disabled. Major
8545 DIMM_C2 Disabled. Major
8546 DIMM_D1 Disabled. Major
8547 DIMM_D2 Disabled. Major
8548 DIMM_E1 Disabled. Major
8549 DIMM_E2 Disabled. Major
854A DIMM_F1 Disabled. Major
854B DIMM_F2 Disabled. Major
8560 DIMM_A1 Component encountered a Serial Presence Detection (SPD) fail error. Major
8561 DIMM_A2 Component encountered a Serial Presence Detection (SPD) fail error. Major
8562 DIMM_B1 Component encountered a Serial Presence Detection (SPD) fail error. Major
8563 DIMM_B2 Component encountered a Serial Presence Detection (SPD) fail error. Major
8564 DIMM_C1 Component encountered a Serial Presence Detection (SPD) fail error. Major
8565 DIMM_C2 Component encountered a Serial Presence Detection (SPD) fail error. Major
8566 DIMM_D1 Component encountered a Serial Presence Detection (SPD) fail error. Major
8567 DIMM_D2 Component encountered a Serial Presence Detection (SPD) fail error. Major
8568 DIMM_E1 Component encountered a Serial Presence Detection (SPD) fail error. Major
8569 DIMM_E2 Component encountered a Serial Presence Detection (SPD) fail error. Major
856A DIMM_F1 Component encountered a Serial Presence Detection (SPD) fail error. Major
856B DIMM_F2 Component encountered a Serial Presence Detection (SPD) fail error. Major
85A0 DIMM_A1 Uncorrectable ECC error encountered. Major
85A1 DIMM_A2 Uncorrectable ECC error encountered. Major
85A2 DIMM_B1 Uncorrectable ECC error encountered. Major
85A3 DIMM_B2 Uncorrectable ECC error encountered. Major
85A4 DIMM_C1 Uncorrectable ECC error encountered. Major
85A5 DIMM_C2 Uncorrectable ECC error encountered. Major
85A6 DIMM_D1 Uncorrectable ECC error encountered. Major
85A7 DIMM_D2 Uncorrectable ECC error encountered. Major
85A8 DIMM_E1 Uncorrectable ECC error encountered. Major
85A9 DIMM_E2 Uncorrectable ECC error encountered. Major
85AA DIMM_F1 Uncorrectable ECC error encountered. Major
85AB DIMM_F2 Uncorrectable ECC error encountered. Major
8604 Chipset Reclaim of non critical variables complete. Minor
9000 Unspecified processor component has encountered a non specific error. Major
9223 Keyboard component was not detected. Minor
9226 Keyboard component encountered a controller error. Minor
9243 Mouse component was not detected. Minor
9246 Mouse component encountered a controller error. Minor
9266 Local Console component encountered a controller error. Minor
9268 Local Console component encountered an output error. Minor
9269 Local Console component encountered a resource conflict error. Minor
9286 Remote Console component encountered a controller error. Minor
9287 Remote Console component encountered an input error. Minor
9288 Remote Console component encountered an output error. Minor
92A3 Serial port component was not detected Major
92A9 Serial port component encountered a resource conflict error Major
92C6 Serial Port controller error Minor
92C7 Serial Port component encountered an input error. Minor
92C8 Serial Port component encountered an output error. Minor
94C6 LPC component encountered a controller error. Minor
94C9 LPC component encountered a resource conflict error. Major
9506 ATA/ATAPI component encountered a controller error. Minor
95A6 PCI component encountered a controller error. Minor
95A7 PCI component encountered a read error. Minor
95A8 PCI component encountered a write error. Minor
9609 Unspecified software component encountered a start error. Minor
9641 PEI Core component encountered a load error. Minor
9667 PEI module component encountered an illegal software state error. Fatal
9687 DXE core component encountered an illegal software state error. Fatal
96A7 DXE boot services driver component encountered an illegal software state error. Fatal
96AB DXE boot services driver component encountered invalid configuration. Minor
96E7 SMM driver component encountered an illegal software state error. Fatal
0xA000 TPM device not detected. Minor
0xA001 TPM device missing or not responding. Minor
0xA002 TPM device failure. Minor
0xA003 TPM device failed self test. Minor
0xA022 Processor component encountered a mismatch error. Major
0xA027 Processor component encountered a low voltage error. Minor
0xA028 Processor component encountered a high voltage error. Minor
0xA421 PCI component encountered a SERR error. Fatal
0xA500 ATA/ATAPI ATA bus SMART not supported. Minor
0xA501 ATA/ATAPI ATA SMART is disabled. Minor
0xA5A0 PCI Express component encountered a PERR error. Minor
0xA5A1 PCI Express component encountered a SERR error. Fatal
0xA5A4 PCI Express IBIST error. Major
0xA6A0 DXE boot services driver Not enough memory available to shadow a legacy option ROM. Minor
0xB6A3 DXE boot services driver Unrecognized. Major
POST Error Pause Option
In case of POST error(s) that are listed as Major, the BIOS enters the Error Manager and waits
for the user to press an appropriate key before booting the operating system or entering the
BIOS Setup.
The user can override this option by setting POST Error Pause to Disabled in the BIOS Setup
main menu page. If the POST Error Pause option is set to Disabled, the compute module
boots the operating system without user intervention. The default value is set to Disabled.
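The behaviour described above, together with the Minor/Major/Fatal definitions earlier in this appendix, can be summarised in a small decision sketch. This is an illustrative model only; the error codes come from Table 18, while the function itself is hypothetical and not part of the BIOS.

```python
# Illustrative model of how POST error severity and the "POST Error Pause"
# setup option interact, per the descriptions in this appendix.

SEVERITY = {          # a few entries from Table 18
    "0012": "Major",  # CMOS date/time not set
    "8180": "Minor",  # Processor 0x microcode update not found
    "0196": "Fatal",  # Processor 0x model mismatch
}

def post_error_action(error_code: str, post_error_pause_enabled: bool) -> str:
    severity = SEVERITY.get(error_code, "Major")
    if severity == "Fatal":
        return "log to SEL, display in Error Manager, halt boot until resolved"
    if severity == "Major" and post_error_pause_enabled:
        return "log to SEL, pause in Error Manager for user action"
    return "log the error and continue booting"

print(post_error_action("0196", post_error_pause_enabled=False))
print(post_error_action("0012", post_error_pause_enabled=True))
print(post_error_action("8180", post_error_pause_enabled=True))
```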
Appendix D: Supported Intel® Modular Server System
The Intel® Compute Module MFS5520VI is supported in the following chassis:
• Intel® Modular Server System MFSYS25
• Intel® Modular Server System MFSYS25V2
• Intel® Modular Server System MFSYS35
This section provides a high-level pictorial overview of the Intel® Modular Server System
MFSYS25. For more details, refer to the Intel® Modular Server System Technical Product
Specification (TPS).
A Shared hard drive storage bay
B I/O cooling fans
C Empty compute module bay
D Compute module cooling fans
E Compute module midplane connectors
Figure 10. Intel® Modular Server System MFSYS25
Glossary
This appendix contains important terms used in the preceding chapters. For ease of use,
numeric entries are listed first (for example, “82460GX”) followed by alpha entries (for example,
“AGP 4x”). Acronyms are followed by non-acronyms.
Term Definition
ACPI Advanced Configuration and Power Interface
AP Application Processor
APIC Advanced Programmable Interrupt Controller
ASIC Application Specific Integrated Circuit
ASMI Advanced Server Management Interface
BIOS Basic Input/Output System
BIST Built-In Self Test
BMC Baseboard Management Controller
Bridge Circuitry connecting one computer bus to another, allowing an agent on one to access the other
BSP Bootstrap Processor
byte 8-bit quantity.
CBC Chassis Bridge Controller (a microcontroller connected to one or more other CBCs; together they
bridge the IPMB buses of multiple chassis).
CEK Common Enabling Kit
CHAP Challenge Handshake Authentication Protocol
CMOS In terms of this specification, this describes the PC-AT compatible region of battery-backed 128 bytes
of memory, which normally resides on the server board.