INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE,
EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS
GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR
SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR
IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR
WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR
INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
A "Mission Critical Application" is any application in which failure of the Intel Product could result, directly or indirectly,
in personal injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION
CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES,
SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH,
HARMLESS AGAINST ALL CLAIMS, COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS'
FEES ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL
INJURY, OR DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR
NOT INTEL OR ITS SUBCONTRACTOR WAS NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF
THE INTEL PRODUCT OR ANY OF ITS PARTS.
Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not
rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves
these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from
future changes to them. The information here is subject to change without notice. Do not finalize a design with this
information.
The products described in this document may contain design defects or errors known as errata which may cause the
product to deviate from published specifications. Current characterized errata are available on request.
Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your
product order.
Copies of documents which have an order number and are referenced in this document, or other Intel literature, may
be obtained by calling 1-800-548-4725, or go to: http://www.intel.com/design/literature
1. Introduction

This Technical Product Specification (TPS) provides board-specific information detailing the features, functionality, and high-level architecture of the Intel® Server Board S2400SC.

In addition, you can obtain design-level information for specific subsystems by ordering the External Product Specifications (EPS) or External Design Specifications (EDS) for a given subsystem. EPS and EDS documents are not publicly available and must be ordered through your Intel representative.

1.1 Chapter Outline

This document is divided into the following chapters:

Chapter 1 – Introduction
Chapter 2 – Overview
Chapter 3 – Functional Architecture
Chapter 4 – System Security
Chapter 5 – Technology Support
Chapter 6 – Platform Management Functional Overview
Chapter 7 – Advanced Management Feature Support (RMM4)
Chapter 8 – On-board Connector/Header Overview
Chapter 9 – Jumper Blocks
Chapter 10 – Intel® Light Guided Diagnostics
Chapter 11 – Environmental Limits Specifications
Appendix A – Integration and Usage Tips
Appendix B – Integrated BMC Sensor Tables
Appendix C – POST Code Diagnostic LED Decoder
Appendix D – POST Code Errors
Appendix E – Supported Intel® Server Chassis
Glossary
Reference Documents

1.2 Server Board Use Disclaimer

Intel® Server Boards contain a number of high-density VLSI (very-large-scale integration) and power delivery components that require adequate airflow for cooling. Intel ensures through its own chassis development and testing that when Intel® server building blocks are used together, the fully integrated system meets the intended thermal requirements of these components. It is the responsibility of the system integrator who chooses not to use Intel-developed server building blocks to consult vendor datasheets and operating parameters to determine the amount of airflow required for their specific application and environmental conditions. Intel Corporation cannot be held responsible if components fail or the server board does not operate correctly when used outside any of the published operating or non-operating limits.
2. Overview
2.1 Intel® Server Boards S2400SC Feature Set

The Intel® Server Board S2400SC is a monolithic printed circuit board (PCB) with features designed to support the pedestal and rack server markets.

Table 1. Intel® Server Board S2400SC Feature Set

Processors:
- Support for one or two Intel® Xeon® E5-2400 or Intel® Xeon® E5-2400 v2 processors in an FC-LGA 1356 Socket B2 package with Thermal Design Power up to 95W.
- 6.4 GT/s, 7.2 GT/s and 8.0 GT/s Intel® QuickPath Interconnect (Intel® QPI).
- EVRD (Enterprise Voltage Regulator-Down) 12.

Memory:
- Six memory channels, eight memory DIMMs (three channels per processor socket; 2:1:1 layout).
- Support for 1066/1333 MT/s Unbuffered (UDIMM) LVDDR3 or DDR3 memory.
- Support for 800/1066/1333/1600 MT/s ECC Registered (RDIMM) DDR3 memory.
- Support for 800/1066/1333 ECC Registered (RDIMM) LVDDR3 memory.
- No support for mixing of RDIMMs and UDIMMs.
- No support for LRDIMMs.
- No support for Quad Rank DIMMs.

Chipset:
- Intel® C602 (-A) chipset with support for storage option upgrade keys.

Cooling Fan Support:
- Two processor fans (4-pin headers).
- Six front system fans (6-pin headers).
- One rear system fan (4-pin header).
- 3-pin fans are compatible with all fan headers.

Add-in Card Slots – Five expansion slots:
- Slot 6: PCI Express* Gen3 x16 electrical with x16 physical connector, from first processor.
- Slot 5: PCI Express* Gen3 x8 electrical with x8 open-ended physical connector, from second processor. Note: Slot 5 is a blue color slot.
- Slot 4: PCI Express* Gen3 x4 electrical with x8 physical connector, from first processor.
- Slot 3: PCI Express* Gen2 x4 electrical with x8 physical connector, from PCH. Note: Slot 3 does not support the Intel® Integrated RAID module.
- Slot 2: 32-bit/33 MHz PCI slot, from PCH.

Hard Drive and Optical Drive Support:
- Two SATA connectors at 6Gbps and four SATA connectors at 3Gbps.
- Up to eight SAS connectors at 3Gbps with optional Intel® C600 RAID Upgrade Keys.
- Optical devices are supported.

RAID Support:
- Intel® RSTe SW RAID 0/1/10/5.
- LSI* SW RAID 0/1/10/5.

I/O Control Support:
- External connections:
  o DB9 serial port A connection.
  o Two RJ-45 NIC connectors for 10/100/1000 Mb connections: Dual GbE through the Intel® 82574L Network Connection.
  o One DB-15 video connector.
  o Four USB 2.0 ports at the back of the board.
- Internal connections:
  o One 2x5 pin USB header, providing front panel support for two USB ports.
  o One internal Type-A USB 2.0 port.
  o One 9-pin USB header for eUSB SSD.
  o One DH10 serial port B header.
  o One SSI-compliant 24-pin front control panel header.
  o One 1x7 pin header for optional Intel® Local Control Panel support.

Video Support:
- Integrated 2D video controller.
- 16 MB DDR3 memory.
- Dual monitor video mode is supported.

LAN:
- Two Gigabit Ethernet ports through Intel® 82574L PHYs.

Security:
- Intel® TPM module – AXXTPME5 (Accessory Option).

Server Management:
- Onboard ServerEngines* LLC Pilot III* Controller.
- Support for Intel® Remote Management Module 4 solutions (optional).
- Support for Intel® Remote Management Module 4 Lite solutions (optional).
- Intel® Light-Guided Diagnostics on field replaceable units.
- Support for Intel® System Management Software.
- Support for Intel® Intelligent Power Node Manager (needs PMBus*-compliant power supply).

BIOS Flash:
- Winbond* 64MB Flash.

Form Factor:
- SSI CEB 12"x10.5" compliant form factor.

Compatible Intel® Server Chassis:
- Intel® Server Chassis P4000S for S2400SC.
- Intel® Server Chassis P4000M.
2.2 Server Board Layout
2.2.1 Server Board Connector and Component Layout
The following figure shows the layout of the server board. Each connector and major component is identified by a number or letter; a description of each is given in Table 2.

Figure 1. Intel® Server Board S2400SC Layout
Figure 2. Intel® Server Board S2400SC Layout

Table 2. Intel® Server Board S2400SC Component Layout

A – Slot 2, 32-bit/33 MHz PCI
B – Slot 3, PCI Express* Gen2 x4 (x8 connector)
C – Slot 4, PCI Express* Gen3 x8
D – RMM4 Lite header
E – Slot 5, PCI Express* Gen3 x8 (open ended connector, from second processor)
F – Slot 6, PCI Express* Gen3 x16, supports riser card
G – DIMM sockets from Processor 2 socket (Channel D, E, F)
H – Diagnostic and identify LEDs
I – Processor 2 fan header
J – RJ45/USB stack connectors
K – VGA
L – Serial Port A
M – Processor 2 power
N – System fan 7 header
O – DIMM sockets from Processor 1 socket (Channel A, B, C)
P – Processor 1 power
Q – Processor 1 fan header
R – BIOS default jumper
S – CPLD update jumper (not used)
T – TPM header
U – LCP header
V – System fan 6 header
W – HSBP I2C header
X – System fan 5 header
Y – System fan 4 header
Z – ME force update jumper
AA – System fan 3 header
AB – System fan 2 header
AC – SATA SGPIO header
AD – System fan 1 header
AE – PMBus* header
AF – HDD LED header
AG – Type A USB header
AH – IPMB header
AI – Main power
AJ – SATA port 2
AK – SATA port 3
AL – SATA port 4
AM – SATA port 5
AN – SATA port 0
AO – SATA port 1
AP – SCU 1
AQ – eUSB SSD header
AR – Storage upgrade key
AS – SCU 0
AT – Password clear jumper
AU – BIOS recovery jumper
AV – BMC force update jumper
AW – RMM4 header
AX – Front panel header
AY – USB header
AZ – Serial B header
2.2.2 Server Board Mechanical Drawings

Figure 3. Intel® Server Board S2400SC – Mounting Hole Locations (1 of 2)
Figure 4. Intel® Server Board S2400SC – Mounting Hole Locations (2 of 2)
Figure 5. Intel® Server Boards S2400SC – Major Connector Pin-1 Locations (1 of 2)
Figure 6. Intel® Server Boards S2400SC – Major Connector Pin-1 Locations (2 of 2)
Figure 7. Intel® Server Boards S2400SC – Primary Side Keepout Zone
Figure 8. Intel® Server Boards S2400SC – Primary Side Card Side Keepout Zone
Figure 9. Intel® Server Boards S2400SC – Primary Side Air Duct Keepout Zone
Figure 10. Intel® Server Boards S2400SC – Second Side Keepout Zone

2.2.3 Server Board Rear I/O Layout

The following drawing shows the layout of the rear I/O components for the server boards.
A – Serial Port A
B – Video
C – NIC Port 1 (1 Gb)_USB_0-1
D – NIC Port 2 (1 Gb)_USB_2-3
E – Diagnostic LEDs
F – ID LED
G – System Status LED

Figure 11. Intel® Server Boards S2400SC Rear I/O Layout
3. Functional Architecture

This chapter provides a high-level description of the functionality associated with each chipset component and the architectural blocks that make up the server boards.

3.1 Processor Support

The architecture and design of the Intel® Server Board S2400SC is based on the Intel® C600 chipset. The chipset is designed for systems based on the Intel® Xeon® processor in an FC-LGA 1356 Socket B2 package with Intel® QuickPath Interconnect (Intel® QPI).

Figure 12. Intel® Server Board S2400SC Functional Block Diagram

The Intel® Server Board S2400SC includes two Socket-B2 (LGA-1356) processor sockets and can support the following processors:
- Intel® Xeon® processor E5-2400 product family, with a Thermal Design Power (TDP) of up to 95W.
- Intel® Xeon® processor E5-2400 v2 product family, with a Thermal Design Power (TDP) of up to 95W.

Note: Previous generation Intel® Xeon® processors are not supported on the Intel® server board described in this document.

Visit the Intel web site for a complete list of supported processors.

3.1.1 Processor Socket Assembly

Each processor socket of the server board is pre-assembled with an Independent Latching Mechanism (ILM) and back plate which allow for secure placement of the processor and processor heat sink to the server board.

The illustration below identifies each sub-assembly component.

Figure 13. Processor Socket Assembly

3.1.2 Processor Population Rules

Note: Although the server board does support dual-processor configurations consisting of different processors that meet the defined criteria below, Intel does not perform validation testing of this configuration. For optimal system performance in dual-processor configurations, Intel recommends that identical processors be installed.
When using a single processor configuration, the processor must be installed into the processor socket labeled "CPU_1".

When two processors are installed, the following population rules apply:
- Both processors must be of the same processor family.
- Both processors must have the same number of cores.
- Both processors must have the same cache size for all levels of processor cache memory.
- Processors with different speeds can be mixed in a system, given the prior rules are met. If this condition is detected, all processor speeds are set to the lowest common denominator (highest common speed) and an error is reported.
- Processors which have different Intel® QuickPath Interconnect (QPI) link frequencies may operate together if they are otherwise compatible and if a common link frequency can be selected. The common link frequency would be the highest link frequency that all installed processors can achieve.
- Processor steppings within a common processor family can be mixed as long as they are listed in the processor specification updates published by Intel Corporation.

The following table describes mixed processor conditions and recommended actions for all Intel® server boards and Intel® server systems designed around the Intel® Xeon® processor E5-2400 product family and Intel® C600 chipset product family architecture. The errors fall into one of the following categories:

Fatal: If the system can boot, it pauses at a blank screen with the text "Unrecoverable fatal error found. System will not boot until the error is resolved" and "Press <F2> to enter setup", regardless of whether the "POST Error Pause" setup option is enabled or disabled.

Major: If the "POST Error Pause" option in BIOS Setup is disabled, the system will log the error to the BIOS Setup Utility Error Manager and then continue to boot. No POST error message is given. If the "POST Error Pause" option in BIOS Setup is enabled, the error is logged and the system goes directly to the Error Manager in BIOS Setup.

Minor: The message is displayed on the screen or on the Error Manager screen, and the POST Error code is logged to the SEL. The system continues booting in a degraded state. The user may want to replace the erroneous unit. The "POST Error Pause" option setting in the BIOS does not have any effect on this error.

Table 3. Mixed Processor Configurations

Error: Processor family not identical
Severity: Fatal
System Action: The BIOS detects the error condition and responds as follows:
- Logs the POST Error Code into the System Event Log (SEL).
- Alerts the BMC to set the System Status LED to steady Amber.
- Displays "0194: Processor family mismatch detected" message in the Error Manager.
- Takes Fatal Error action (see above) and will not boot until the fault condition is remedied.
Error: Processor model not identical
Severity: Fatal
System Action: The BIOS detects the error condition and responds as follows:
- Logs the POST Error Code into the System Event Log (SEL).
- Alerts the BMC to set the System Status LED to steady Amber.
- Displays "0196: Processor model mismatch detected" message in the Error Manager.
- Takes Fatal Error action (see above) and will not boot until the fault condition is remedied.

Error: Processor cores/threads not identical
Severity: Fatal
System Action: The BIOS detects the error condition and responds as follows:
- Logs the POST Error Code into the SEL.
- Alerts the BMC to set the System Status LED to steady Amber.
- Displays "0191: Processor core/thread count mismatch detected" message in the Error Manager.
- Takes Fatal Error action (see above) and will not boot until the fault condition is remedied.

Error: Processor cache not identical
Severity: Fatal
System Action: The BIOS detects the error condition and responds as follows:
- Logs the POST Error Code into the SEL.
- Alerts the BMC to set the System Status LED to steady Amber.
- Displays "0192: Processor cache size mismatch detected" message in the Error Manager.
- Takes Fatal Error action (see above) and will not boot until the fault condition is remedied.

Error: Processor frequency (speed) not identical
Severity: Fatal
System Action: The BIOS detects the processor frequency difference and responds as follows:
- Adjusts all processor frequencies to the highest common frequency.
- No error is generated – this is not an error condition.
- Continues to boot the system successfully.
If the frequencies for all processors cannot be adjusted to be the same, then this is an error, and the BIOS responds as follows:
- Logs the POST Error Code into the SEL.
- Alerts the BMC to set the System Status LED to steady Amber.
- Does not disable the processor.
- Displays "0197: Processor speeds unable to synchronize" message in the Error Manager.
- Takes Fatal Error action (see above) and will not boot until the fault condition is remedied.
Error: Processor Intel® QuickPath Interconnect link frequencies not identical
Severity: Fatal
System Action: The BIOS detects the QPI link frequencies and responds as follows:
- Adjusts all QPI interconnect link frequencies to the highest common frequency.
- No error is generated – this is not an error condition.
- Continues to boot the system successfully.
If the link frequencies for all QPI links cannot be adjusted to be the same, then this is an error, and the BIOS responds as follows:
- Logs the POST Error Code into the SEL.
- Alerts the BMC to set the System Status LED to steady Amber.
- Displays "0195: Processor Intel® QPI link frequencies unable to synchronize" message in the Error Manager.
- Does not disable the processor.
- Takes Fatal Error action (see above) and will not boot until the fault condition is remedied.

Error: Processor microcode update missing
Severity: Minor
System Action: The BIOS detects the error condition and responds as follows:
- Logs the POST Error Code into the SEL.
- Displays "818x: Processor 0x microcode update not found" message in the Error Manager or on the screen.
- The system continues to boot in a degraded state, regardless of the setting of POST Error Pause in Setup.

Error: Processor microcode update failed
Severity: Major
System Action: The BIOS detects the error condition and responds as follows:
- Logs the POST Error Code into the SEL.
- Displays "816x: Processor 0x unable to apply microcode update" message in the Error Manager or on the screen.
- Takes Major Error action. The system may continue to boot in a degraded state, depending on the setting of POST Error Pause in Setup, or may halt with the POST Error Code in the Error Manager waiting for operator intervention.
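The reconciliation policy in Table 3 can be summarized programmatically. The following Python fragment is an illustrative sketch only (the data model, function name, and exception type are assumptions, not actual BIOS code); it applies the documented rules: identical family/core/cache requirements, and "highest common frequency" selection for both core speed and QPI link frequency, with a fatal error when no common value exists.

    class FatalError(Exception):
        """Stands in for the fatal POST path: log to SEL, set the System
        Status LED to steady Amber, display the message, and halt."""

    def reconcile_processors(installed):
        """installed: list of dicts with 'family', 'cores', 'cache',
        'speeds' (supported core frequencies) and 'qpi' (link GT/s)."""
        first = installed[0]
        if any(p["family"] != first["family"] for p in installed):
            raise FatalError("0194: Processor family mismatch detected")
        if any(p["cores"] != first["cores"] for p in installed):
            raise FatalError("0191: Processor core/thread count mismatch detected")
        if any(p["cache"] != first["cache"] for p in installed):
            raise FatalError("0192: Processor cache size mismatch detected")
        common_speeds = set.intersection(*(set(p["speeds"]) for p in installed))
        if not common_speeds:
            raise FatalError("0197: Processor speeds unable to synchronize")
        common_qpi = set.intersection(*(set(p["qpi"]) for p in installed))
        if not common_qpi:
            raise FatalError("0195: QPI link frequencies unable to synchronize")
        # "Highest common" rule: run every processor at the fastest core and
        # link frequencies that all installed parts support.
        return max(common_speeds), max(common_qpi)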
3.2 Processor Function Overview

With the release of the Intel® Xeon® processor E5-2400 product family, several key system components, including the CPU, Integrated Memory Controller (IMC), and Integrated I/O Module (IIO), have been combined into a single processor package. Each socket features one Intel® QuickPath Interconnect point-to-point link capable of up to 8.0 GT/s, up to 24 lanes of Gen 3 PCI Express* links capable of 8.0 GT/s, and 4 lanes of DMI2/PCI Express* Gen 1 interface with a peak transfer rate of 2.5 GT/s. The processor supports up to 46 bits of physical address space and 48 bits of virtual address space.

The following sections will provide an overview of the key processor features and functions that help to define the performance and architecture of the server board. For more comprehensive processor-specific information, refer to the Intel® Xeon® processor E5-2400 product family documents listed in the Reference Document list.

Processor Feature Details:
- Up to 8 execution cores (Intel® Xeon® processor E5-2400 product family)
- Up to 10 execution cores (Intel® Xeon® processor E5-2400 v2 product family)
- Each core supports two threads (Intel® Hyper-Threading Technology)
- 46-bit physical addressing and 48-bit virtual addressing
- 1 GB large page support for server applications
- A 32-KB instruction and 32-KB data first-level cache (L1) for each core
- A 256-KB shared instruction/data mid-level (L2) cache for each core
- Up to 20 MB last level cache (LLC): up to 2.5 MB per core instruction/data last level cache
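As a quick sanity check on the addressing figures above (a worked example, not text from the TPS): 46 bits of physical address space is 64 TiB and 48 bits of virtual address space is 256 TiB.

    phys_bits, virt_bits = 46, 48
    print(f"physical: 2**{phys_bits} bytes = {2**phys_bits >> 40} TiB")   # 64 TiB
    print(f"virtual:  2**{virt_bits} bytes = {2**virt_bits >> 40} TiB")   # 256 TiB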
3.2.1 Intel® QuickPath Interconnect

The Intel® QuickPath Interconnect is a high speed, packetized, point-to-point interconnect used in the processor. The narrow high-speed links stitch together processors in a distributed shared memory and integrated I/O platform architecture. It offers much higher bandwidth with low latency. The Intel® QuickPath Interconnect has an efficient architecture allowing more interconnect performance to be achieved in real systems. It has a snoop protocol optimized for low latency and high scalability, as well as packet and lane structures enabling quick completions of transactions. Reliability, availability, and serviceability (RAS) features are built into the architecture.

The physical connectivity of each interconnect link is made up of twenty differential signal pairs plus a differential forwarded clock. Each port supports a link pair consisting of two uni-directional links to complete the connection between two components. This supports traffic in both directions simultaneously. To facilitate flexibility and longevity, the interconnect is defined as having five layers: Physical, Link, Routing, Transport, and Protocol.

The Intel® QuickPath Interconnect includes a cache coherency protocol to keep the distributed memory and caching structures coherent during system operation. It supports both low-latency source snooping and a scalable home snoop behavior. The coherency protocol provides for direct cache-to-cache transfers for optimal latency.
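For a sense of scale, the peak bandwidth of one QPI link follows directly from the figures above. The arithmetic below assumes the commonly cited QPI payload of 16 data bits (2 bytes) per transfer per direction out of the 20 lanes; it is a worked example, not a statement from this specification.

    def qpi_peak_gbs(transfer_rate_gts):
        """Peak GB/s per direction for one QPI link at the given GT/s."""
        bytes_per_transfer = 2          # 16 of the 20 lanes carry data bits
        return transfer_rate_gts * bytes_per_transfer

    for gts in (6.4, 7.2, 8.0):
        per_dir = qpi_peak_gbs(gts)
        print(f"{gts} GT/s -> {per_dir:.1f} GB/s per direction "
              f"({2 * per_dir:.1f} GB/s total)")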
Integrated into the processor is a memory controller. Each processor provides three DDR3 channels that support the following:
- Unbuffered DDR3 and registered DDR3 DIMMs
- Independent channel mode or lockstep mode
- Data burst length of eight cycles for all memory organization modes
- Memory DDR3 data transfer rates of 800, 1066, 1333, and 1600 MT/s
- 64-bit wide channels plus 8 bits of ECC support for each channel
- DDR3 standard I/O voltage of 1.5 V and DDR3 Low Voltage of 1.35 V
- 1-Gb, 2-Gb, and 4-Gb DDR3 DRAM technologies supported for these devices:
  o UDIMM DDR3 – SR x8 and x16 data widths, DR – x8 data width
  o RDIMM DDR3 – SR and DR – x4 and x8 data widths
- Up to 4 ranks supported per memory channel, 1 or 2 ranks per DIMM
- Open with adaptive idle page close timer or closed page policy
- Per channel memory test and initialization engine can initialize DRAM to all logical zeros with valid ECC (with or without data scrambler) or a predefined test pattern
- Isochronous access support for Quality of Service (QoS)
- Minimum memory configuration: independent channel support with 1 DIMM populated
- Integrated dual SMBus* master controllers
- Command launch modes of 1n/2n
- RAS Support:
  o Rank Level Sparing and Device Tagging
  o Demand and Patrol Scrubbing
  o DRAM Single Device Data Correction (SDDC) for any single x4 or x8 DRAM
Table 4. UDIMM Support Guidelines

Ranks Per DIMM & Data Width | Speed (MT/s) and Voltage Validated by Slot per Channel (SPC) and DIMM Per Channel (DPC) (Notes 2, 3)
SRx8 ECC | 1066, 1333 at 1.35V and 1.5V, for 1DPC and 2DPC (one or two slots per channel)
DRx8 ECC | 1066, 1333 at 1.35V and 1.5V, for 1DPC and 2DPC (one or two slots per channel)

Table 5. RDIMM Support Guidelines

Ranks Per DIMM & Data Width | Memory Capacity Per DIMM (Note 1) at 1Gb / 2Gb / 4Gb DRAM | Speed (MT/s) and Voltage Validated (Notes 2, 3)
SRx8 | 1GB / 2GB / 4GB | 1.35V: 1066, 1333; 1.5V: 1066, 1333, 1600
DRx8 | 2GB / 4GB / 8GB | 1.35V: 1066, 1333; 1.5V: 1066, 1333, 1600
SRx4 | 2GB / 4GB / 8GB | 1.35V: 1066, 1333; 1.5V: 1066, 1333, 1600
DRx4 | 4GB / 8GB / 16GB | 1.35V: 1066, 1333; 1.5V: 1066, 1333, 1600

The listed speeds apply for 1DPC and 2DPC in both one- and two-slot-per-channel configurations.

Notes:
1. Supported DRAM densities are 1Gb, 2Gb, and 4Gb. Only 2Gb and 4Gb are validated by Intel.
2. Command Address Timing is 1N.
3. For memory population rules, refer to the Romley Platform Design Guide.
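Combining Table 5 with the board's eight DIMM slots gives the maximum memory configuration. A worked example, assuming every slot is populated with the largest RDIMM listed (16GB DRx4 on 4Gb DRAM):

    slots = ["A1", "B1", "C1", "C2", "D1", "E1", "F1", "F2"]   # eight DIMM slots
    largest_rdimm_gb = 16                                       # DRx4, 4Gb DRAM
    print(f"maximum memory: {len(slots)} x {largest_rdimm_gb}GB = "
          f"{len(slots) * largest_rdimm_gb}GB")                 # 128GB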
3.2.2.2 Memory Population Rules

Note: Although mixed DIMM configurations may be functional, Intel only performs platform validation on systems that are configured with identical DIMMs installed.

Each processor provides three channels of memory, each capable of supporting up to two DIMMs.

DIMMs are organized into physical slots on DDR3 memory channels that belong to processor sockets.
- The memory channels from processor socket 1 are identified as Channels A, B, and C.
- The memory channels from processor socket 2 are identified as Channels D, E, and F.
- The silk-screened DIMM slot identifiers on the board provide information about the channel, and therefore the processor, to which they belong. For example, DIMM_A1 is the first slot on Channel A on processor 1; DIMM_D1 is the first DIMM socket on Channel D on processor 2.
- The memory slots associated with a given processor are unavailable if the corresponding processor socket is not populated.
- A processor may be installed without populating the associated memory slots, provided a second processor is installed with associated memory. In this case, the memory is shared by the processors. However, the platform suffers performance degradation and latency due to the remote memory.
- Processor sockets are self-contained and autonomous. However, all memory subsystem support (such as memory RAS and error management) in the BIOS setup is applied commonly across processor sockets.

On the Intel® Server Board S2400SC, a total of 8 DIMM slots is provided (two CPUs, three channels per CPU). The nomenclature for the DIMM sockets is detailed in the following table:

Table 6. Intel® Server Board S2400SC DIMM Nomenclature

Processor Socket 1: Channel A (0) – A1; Channel B (1) – B1; Channel C (2) – C1, C2
Processor Socket 2: Channel D (0) – D1; Channel E (1) – E1; Channel F (2) – F1, F2

Figure 14. Intel® Server Board S2400SC DIMM Slot Layout
The following are generic DIMM population requirements that generally apply to the Intel®
Server Board S2400SC.
All DIMMs must be DDR3 DIMMs
Registered DIMMs must be ECC only. Unbuffered DIMMs can be ECC or non-ECC.
However, Intel only validates and supports ECC memory for its server products.
Mixing of Registered and Unbuffered DIMMs is not allowed per platform.
Mixing of DDR3 voltages is not validated within a socket or across sockets by Intel. If
1.35V (DDR3L) and 1.50V (DDR3) DIMMs are mixed, the DIMMs will run at 1.50V.
Mixing of DDR3 operating frequencies is not validated within a socket or across sockets
by Intel. If DIMMs with different frequencies are mixed, all DIMMs will run at the common
lowest frequency.
Quad rank DIMMs are NOT supported.
LR (Load Reduced) DIMMs are NOT supported.
A maximum of 4 logical ranks (ranks seen by the host) per channel is allowed.
Mixing of ECC and non-ECC DIMMs is not allowed per platform.
DIMMs with different timing parameters can be installed on different slots within the
same channel, but only timings that support the slowest DIMM will be applied to all. As a
consequence, faster DIMMs will be operated at timings supported by the slowest DIMM
populated.
When one DIMM is used, it must be populated in the BLUE DIMM slot (farthest away
from the CPU) of a given channel.
When single and dual rank DIMMs are populated for 2DPC, always populate the higher
number rank DIMM first (starting from the farthest slot), for example, first dual rank, and
then single rank DIMM.
DIMM population rules require that DIMMs within a channel be populated starting with the BLUE
DIMM slot or DIMM farthest from the processor in a “fill-farthest” approach. Intel MRC will check
for correct DIMM placement.
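Several of these rules are mechanical enough to express as a configuration check. The sketch below encodes a subset (RDIMM/UDIMM and ECC mixing, rank limits, and the fill-farthest ordering); the data model and channel layout encoding are illustrative assumptions, not the Intel MRC implementation.

    CHANNEL_SLOTS = {"A": ["A1"], "B": ["B1"], "C": ["C1", "C2"],
                     "D": ["D1"], "E": ["E1"], "F": ["F1", "F2"]}
    MAX_RANKS_PER_CHANNEL = 4

    def check_population(dimms):
        """dimms: dict slot -> {'type': 'RDIMM'|'UDIMM', 'ecc': bool, 'ranks': int}."""
        errors = []
        if len({d["type"] for d in dimms.values()}) > 1:
            errors.append("mixing RDIMMs and UDIMMs is not allowed")
        if len({d["ecc"] for d in dimms.values()}) > 1:
            errors.append("mixing ECC and non-ECC DIMMs is not allowed")
        for ch, slots in CHANNEL_SLOTS.items():
            pop = {s: dimms[s] for s in slots if s in dimms}
            if any(d["ranks"] > 2 for d in pop.values()):
                errors.append(f"channel {ch}: quad-rank DIMMs are not supported")
            if sum(d["ranks"] for d in pop.values()) > MAX_RANKS_PER_CHANNEL:
                errors.append(f"channel {ch}: more than 4 ranks populated")
            if len(slots) == 2:
                far, near = slots[0], slots[1]   # assume slot x1 is the blue, farthest slot
                if near in pop and far not in pop:
                    errors.append(f"channel {ch}: populate the blue slot first")
                elif near in pop and pop[far]["ranks"] < pop[near]["ranks"]:
                    errors.append(f"channel {ch}: higher-rank DIMM goes farthest")
        return errors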
3.2.2.3 Publishing System Memory
The BIOS displays the “Total Memory” of the system during POST if Display Logo is
disabled in the BIOS setup. This is the total size of memory discovered by the BIOS
during POST, and is the sum of the individual sizes of installed DDR3 DIMMs in the
system.
The BIOS displays the “Effective Memory” of the system in the BIOS setup. The term
Effective Memory refers to the total size of all DDR3 DIMMs that are active (not
disabled) and not used as redundant units.
The BIOS provides the total memory of the system in the main page of the BIOS setup.
This total is the same as the amount described by the first bullet above.
If Display Logo is disabled, the BIOS displays the total system memory on the diagnostic
screen at the end of POST. This total is the same as the amount described by the first
bullet above.
Note: Some server operating systems do not display the total physical memory installed. What
is displayed is the amount of physical memory minus the approximate memory space used by
system BIOS components. These BIOS components include, but are not limited to:
1. ACPI (may vary depending on the number of PCI devices detected in the system)
2. ACPI NVS table
3. Processor microcode
4. Memory Mapped I/O (MMIO)
5. Manageability Engine (ME)
6. BIOS flash
3.2.2.3.1 RAS Features
The server board supports the following memory RAS modes:
Independent Channel Mode
Rank Sparing Mode
Mirrored Channel Mode
Lockstep Channel Mode
Single Device Data Correction (SDDC)
Error Correction Code (ECC) Memory
Demand Scrubbing for ECC Memory
Patrol Scrubbing for ECC Memory
Regardless of RAS mode, the requirements for populating within a channel given in section 3.2.2.2 must be met at all times. Note that RAS modes that require matching DIMM population between channels (Mirrored and Lockstep) require that ECC DIMMs be populated. Independent Channel Mode is the only mode that supports non-ECC DIMMs in addition to ECC DIMMs.
For RAS modes that require matching populations, the same slot positions across channels
must hold the same DIMM type with regards to size and organization. DIMM timings do not
have to match but timings will be set to support all DIMMs populated (that is, DIMMs with slower
timings will force faster DIMMs to the slower common timing modes).
3.2.2.3.2 Independent Channel Mode
In non-ECC and x4 SDDC configurations, each channel runs independently (non-lockstep); that is, each cache-line from memory is provided by a single channel. To deliver the 64-byte cache-line of data, each channel bursts eight 8-byte chunks. Data transfers in the same direction and within the same rank can be sent back-to-back without any dead cycle. Independent channel mode is the recommended method to deliver the most efficient power and bandwidth, as long as x8 SDDC is not required.
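The cache-line arithmetic above is easy to verify; a worked example contrasting independent delivery with the lockstep split described in section 3.2.2.3.5:

    # Independent channel mode: one channel bursts eight 8-byte chunks.
    chunks, chunk_bytes = 8, 8
    assert chunks * chunk_bytes == 64        # one 64-byte cache-line per channel

    # Lockstep mode: the same 64-byte line is split across two channels,
    # each providing 32 bytes, so per-rank read bandwidth is half of peak.
    per_channel_bytes = 32
    assert 2 * per_channel_bytes == 64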
3.2.2.3.3 Rank Sparing Mode
In Rank Sparing Mode, one rank is a spare of the other ranks on the same channel. The spare
rank is held in reserve and is not available as system memory. The spare rank must have
identical or larger memory capacity than all the other ranks (sparing source ranks) on the same
channel. After sparing, the sparing source rank will be lost.
Rank Sparing Mode enhances the system’s RAS capability by “swapping out” failing ranks of
DIMMs. Rank Sparing is strictly channel and rank oriented. Each memory channel is a Sparing
Domain.
For Rank Sparing to be available as a RAS option, there must be 2 or more single rank or dual
rank DIMMs, or at least one quad rank DIMM installed on each memory channel.
Rank Sparing Mode is enabled or disabled in the Memory RAS and Performance Configuration screen in the <F2> BIOS Setup Utility.
When Sparing Mode is operational, for each channel, the largest size memory rank is reserved
as a “spare” and is not used during normal operations. The impact on Effective Memory Size is
to subtract the sum of the reserved ranks from the total amount of installed memory.
Hardware registers count the number of Correctable ECC Errors for each rank of memory on
each channel during operations and compare the count against a Correctable Error Threshold.
When the correctable error count for a given rank hits the threshold value, that rank is deemed
to be “failing”, and it triggers a Sparing Fail Over (SFO) event for the channel in which that rank
resides. The data in the failing rank is copied to the Spare Rank for that channel, and the Spare
Rank replaces the failing rank in the IMC’s address translation registers.
An SFO Event is logged to the BMC SEL. The failing rank is then disabled, and any further
Correctable Errors on that now non-redundant channel will be disregarded.
The correctable error that triggered the SFO may be logged to the BMC SEL, if it was the first
one to occur in the system. That first correctable error event will be the only one logged for the
system. However, since each channel is a Sparing Domain, the correctable error counting
continues for other channels which are still in a redundant state. There can be as many SFO
Events as there are memory channels with DIMMs installed.
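The effective-size rule above (the largest rank on each channel is held in reserve) can be sketched as follows. The rank sizes are hypothetical and the function is an illustration of the rule, not BIOS or BMC code.

    def effective_size_with_sparing(channels):
        """channels: dict channel -> list of rank sizes in GB.
        Effective size = installed total minus one spare (largest) rank
        per populated channel, since each channel is its own Sparing Domain."""
        total = sum(sum(ranks) for ranks in channels.values())
        reserved = sum(max(ranks) for ranks in channels.values() if ranks)
        return total - reserved

    # Example: one dual-rank DIMM (2GB per rank) on each of channels A, B, C.
    print(effective_size_with_sparing({"A": [2, 2], "B": [2, 2], "C": [2, 2]}))  # 6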
3.2.2.3.4 Mirrored Channel Mode
Channel Mirroring Mode gives the best memory RAS capability by maintaining two copies of the
data in main memory. If there is an Uncorrectable ECC Error, the channel with the error is
disabled and the system continues with the “good” channel, but in a non-redundant
configuration.
For Mirroring Mode to be available as a RAS option, the DIMM population must be
identical between each pair of memory channels that participate. Not all channel pairs need to
have memory installed, but for each pair, the configuration must match. If the configuration is
not matched up properly, the memory operating mode falls back to Independent Channel Mode.
Mirroring Mode is enabled/disabled in the Memory RAS and Performance Configuration screen
in the <F2> BIOS Setup Utility.
When Mirroring Mode is operational, each channel in a pair is “mirrored” by the other channel.
The impact on Effective Memory size is to reduce by half the total amount of installed memory
available for use.
When Mirroring Mode is operational, the system treats Correctable Errors the same way as it
would in Independent channel mode. There is a correctable error threshold. Correctable error
counts accumulate by rank, and the first event is logged.
What Mirroring primarily protects against is the possibility of an Uncorrectable ECC Error
occurring with critical data “in process”. Without Mirroring, the system would be expected to
“Blue Screen” and halt, possibly with serious impact to operations. But with Mirroring Mode in
operation, an Uncorrectable ECC Error from one channel becomes a Mirroring Fail Over (MFO)
event instead, in which the IMC retrieves the correct data from the “mirror image” channel and
disables the failed channel. Since the ECC Error was corrected in the process of the MFO
Event, the ECC Error is demoted to a Correctable ECC Error. The channel pair becomes a
single non-redundant channel, but without impacting operations, and the Mirroring Fail Over
Event is logged to SEL to alert the user that there is memory hardware that has failed and
needs to be replaced.
In Mirrored Channel Mode, the memory contents are mirrored between Channel B and Channel
C and also between Channel E and Channel F. As a result of the mirroring, the total physical
memory available to the system is half of what is populated. Mirrored Channel Mode requires
that Channel B and Channel C, and Channel E and Channel F must be populated identically with
regards to size and organization. DIMM slot populations within a channel do not have to be
identical but the same DIMM slot location across Channel B and Channel C and across Channel
E and Channel F must be populated the same.
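The pairing rule above (Channel B mirrors Channel C, Channel E mirrors Channel F) and the halving of usable memory can be sketched as follows; the configuration encoding is an assumption for illustration.

    MIRROR_PAIRS = [("B", "C"), ("E", "F")]

    def mirrored_usable_gb(channels):
        """channels: dict channel -> tuple of (size_gb, organization) per slot.
        Returns usable GB under mirroring, or None when a pair's population
        does not match (the system falls back to Independent Channel Mode)."""
        for a, b in MIRROR_PAIRS:
            if channels.get(a, ()) != channels.get(b, ()):
                return None
        total = sum(size for slots in channels.values() for size, _ in slots)
        return total // 2        # each pair keeps two copies of the same data

    # Example: one 8GB DRx4 RDIMM on each mirrored channel -> 16GB usable of 32GB.
    cfg = {"B": ((8, "DRx4"),), "C": ((8, "DRx4"),),
           "E": ((8, "DRx4"),), "F": ((8, "DRx4"),)}
    print(mirrored_usable_gb(cfg))   # 16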
3.2.2.3.5 Lockstep Channel Mode
In lockstep channel mode the cache-line is split across channels. This is done to support Single Device Data Correction (SDDC) for DRAM devices with 8-bit wide data ports. Also, the same address is used on both channels, such that an address error on any channel is detectable by bad ECC. The iMC module always accumulates 32 bytes before forwarding data so there is no latency benefit for disabling ECC.
Lockstep channels must be populated identically. That is, each DIMM in one channel must have
a corresponding DIMM of identical organization (number ranks, number banks, number rows,
number columns). DIMMs may be of different speed grades, but the iMC module will be configured by the Memory Reference Code (MRC) to operate all DIMMs according to the slowest parameters present.
Performance in lockstep mode cannot be as high as with independent channels. The burst length for DDR3 DIMMs is eight, which is shared between the two channels that are in lockstep mode. Each channel of the pair provides 32 bytes to produce the 64-byte cache-line. DRAMs on independent channels are configured to deliver a burst length of eight. The maximum read bandwidth for a given rank is half of peak. Another drawback of lockstep mode is higher power consumption, since the total activation power is about twice that of independent channel operation when comparing the same type of DIMMs.
In Lockstep Channel Mode, each memory access is a 128-bit data access that spans Channel B
and Channel C, and Channel E and Channel F. Lockstep Channel mode is the only RAS mode
that allows SDDC for x8 devices. Lockstep Channel Mode requires that Channel B and Channel
C, and Channel E and Channel F must be populated identically with regards to size and
organization. DIMM slot populations within a channel do not have to be identical but the same
Page 42
Functional Architecture Intel® Server Board S2400SC TPS
30 Intel order number G36516-002 Revision 2.0
DIMM slot location across Channel B and Channel C and across Channel E and Channel F must
be populated the same.
3.2.2.4 Single Device Data Correction (SDDC)
SDDC (Single Device Data Correction) is a technique by which data from an entire failing x4 DRAM device can be reconstructed by the IMC, using a combination of CRC plus parity. This is an automatic, IMC-driven hardware mechanism. It can be extended to x8 DRAM technology by placing the system in Channel Lockstep Mode.
3.2.2.5 Error Correction Code (ECC) Memory
ECC uses “extra bits” – 64-bit data in a 72-bit DRAM array – to add an 8-bit calculated
“Hamming Code” to each 64 bits of data. This additional encoding enables the memory
controller to detect and report single or multiple bit errors when data is read, and to correct
single-bit errors.
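To make the "64 data bits plus 8 check bits" arrangement concrete, below is a minimal SECDED (single-error-correct, double-error-detect) encoder/decoder for a 72-bit word: seven Hamming check bits at power-of-two positions plus one overall parity bit. This is the textbook construction, shown for illustration; it is not the specific code matrix the IMC implements.

    PARITY_POSITIONS = (1, 2, 4, 8, 16, 32, 64)

    def secded_encode(data):
        """Pack 64 data bits into a 72-bit SECDED codeword (bit 0 = overall parity)."""
        assert 0 <= data < 1 << 64
        code, j = [0] * 72, 0
        for pos in range(1, 72):
            if pos not in PARITY_POSITIONS:      # the 64 data-bit positions
                code[pos] = (data >> j) & 1
                j += 1
        for p in PARITY_POSITIONS:               # seven Hamming check bits
            for pos in range(1, 72):
                if pos & p and pos != p:
                    code[p] ^= code[pos]
        for bit in code[1:]:                     # overall parity enables DED
            code[0] ^= bit
        return sum(b << i for i, b in enumerate(code))

    def secded_decode(word):
        """Return (data, status); status is 'ok', 'corrected', or 'uncorrectable'."""
        code = [(word >> i) & 1 for i in range(72)]
        syndrome = 0
        for p in PARITY_POSITIONS:
            parity = 0
            for pos in range(1, 72):
                if pos & p:
                    parity ^= code[pos]
            if parity:
                syndrome |= p
        overall = 0
        for bit in code:
            overall ^= bit
        if (syndrome and not overall) or syndrome >= 72:
            return None, "uncorrectable"         # multi-bit error detected
        status = "ok"
        if overall:                              # exactly one bit flipped
            code[syndrome] ^= 1                  # syndrome 0 = overall parity bit
            status = "corrected"
        data, j = 0, 0
        for pos in range(1, 72):
            if pos not in PARITY_POSITIONS:
                data |= code[pos] << j
                j += 1
        return data, status

    word = secded_encode(0x0123456789ABCDEF)
    assert secded_decode(word ^ (1 << 37)) == (0x0123456789ABCDEF, "corrected")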
3.2.2.5.1 Correctable Memory ECC Error Handling
A “Correctable ECC Error” is one in which a single-bit error in memory contents is detected and
corrected by use of the ECC Hamming Code included in the memory data. For a correctable
error, data integrity is preserved, but it may be a warning sign of a true failure to come. Note that
some correctable errors are expected to occur.
The system BIOS has logic to cope with the random factor in correctable ECC errors. Rather
than reporting every correctable error that occurs, the BIOS has a threshold and only logs a
correctable error when a threshold value is reached. Additional correctable errors that occur
after the threshold has been reached are disregarded. In addition, on the expectation the server
system may have extremely long operational runs without being rebooted, there is a “Leaky
Bucket” algorithm incorporated into the correctable error counting and comparing mechanism.
The “Leaky Bucket” algorithm reduces the correctable error count as a function of time – as the
system remains running for a certain amount of time, the correctable error count will “leak out”
of the counting registers. This prevents correctable error counts from building up over an extended runtime.
The correctable memory error threshold value is a configurable option in the <F2> BIOS Setup Utility, where you can configure it to 20, 10, 5, All, or None.
Once a correctable memory error threshold is reached, the event is logged to the System Event
Log (SEL) and the appropriate memory slot fault LED is lit to indicate on which DIMM the
correctable error threshold crossing occurred.
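The threshold-plus-leaky-bucket behavior described above can be sketched as follows. The leak interval and default threshold are placeholders (the Setup option offers 20, 10, 5, All, or None); this illustrates the mechanism, not the actual counting registers.

    import time

    class CorrectableErrorCounter:
        """Per-rank correctable-error counter with a 'leaky bucket': the count
        drains over time so only a sustained error rate reaches the threshold."""
        def __init__(self, threshold=20, leak_interval_s=3600.0):
            self.threshold = threshold
            self.leak_interval_s = leak_interval_s   # assumed drain rate
            self.count = 0
            self.last_leak = time.monotonic()
            self.logged = False

        def _leak(self):
            elapsed = time.monotonic() - self.last_leak
            drained = int(elapsed / self.leak_interval_s)
            if drained:
                self.count = max(0, self.count - drained)
                self.last_leak += drained * self.leak_interval_s

        def record_error(self):
            """True the first time the threshold is crossed (log to SEL and
            light the DIMM fault LED); later errors are disregarded."""
            self._leak()
            if self.logged:
                return False
            self.count += 1
            if self.count >= self.threshold:
                self.logged = True
                return True
            return False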
3.2.2.5.2 Uncorrectable Memory ECC Error Handling
All multi-bit "detectable but not correctable" memory errors are classified as Uncorrectable Memory ECC Errors. This is generally a fatal error.
However, before returning control to the OS drivers through Machine Check Exception (MCE) or
Non-Maskable Interrupt (NMI), the Uncorrectable Memory ECC Error is logged to the SEL, the
appropriate memory slot fault LED is lit, and the System Status LED state is changed to solid
Amber.
3.2.2.6 Demand Scrubbing for ECC Memory
Demand scrubbing is the ability to write corrected data back to the memory once a correctable error is detected on a read transaction. This allows for correction of data in memory at the point of detection, and decreases the chance of a second error on the same address accumulating to cause a multi-bit error (MBE) condition.

Demand Scrubbing is enabled/disabled (default is enabled) in the Memory Configuration screen in Setup.

3.2.2.7 Patrol Scrubbing for ECC Memory
Patrol scrubs are intended to ensure that data with a correctable error does not remain in DRAM long enough to stand a significant chance of further corruption to an uncorrectable stage.

3.2.3 Processor Integrated I/O Module (IIO)
The processor’s integrated I/O module provides features traditionally supported through chipset
components. The integrated I/O module provides the following features:
PCI Express Interfaces:
The integrated I/O module incorporates the PCI Express interface and supports up to 24
lanes of PCI Express. Following are key attributes of the PCI Express interface:
o Gen3 speeds at 8 GT/s (no 8b/10b encoding)
The Intel® Server Board S2400SC supports PCIe slots from two processors:
o From the first processor:
  Slot 4: PCIe Gen3 x4 electrical with x8 physical connector
  Slot 6: PCIe Gen3 x16 electrical with x16 physical connector
DMI2 Interface to the PCH: The platform requires an interface to the legacy
Southbridge (PCH) which provides basic, legacy functions required for the server
platform and operating systems. Since only one PCH is required and allowed for the
system, any sockets which do not connect to PCH would use this port as a standard x4
PCI Express 2.0 interface.
Integrated IOAPIC: Provides support for PCI Express devices implementing legacy
interrupt messages without interrupt sharing.
Non Transparent Bridge: PCI Express non-transparent bridge (NTB) acts as a gateway
that enables high performance, low overhead communication between two intelligent
subsystems; the local and the remote subsystems. The NTB allows a local processor to
independently configure and control the local subsystem, provides isolation of the local
host memory domain from the remote host memory domain while enabling status and
data exchange between the two domains.
Intel® QuickData Technology: Used for efficient, high bandwidth data movement between two locations in memory or from memory to I/O.
Figure 15. Functional Block Diagram of Processor IIO Sub-system
The following sub-sections will describe the server board features that are directly supported by the processor IIO module. These include the Riser Card Slots, Network Interface, and connectors for the optional I/O modules and SAS Module. Features and functions of the Intel® C600 Series chipset will be described in its own dedicated section.
3.2.3.1 Network Interface
Network connectivity is provided by means of two onboard Intel® Ethernet Controller 82574L devices, providing up to two 10/100/1000 Mb Ethernet ports. The NIC chip is supported by implementing x1 PCIe Gen1 signals from the Intel® C600 PCH.

On the Intel® Server Board S2400SC, two external 10/100/1000 Mb RJ45 Ethernet ports are provided. Each Ethernet port drives two LEDs located on each network interface connector. The LED at the right of the connector is the link/activity LED and indicates network connection when on, and transmit/receive activity when blinking. The LED at the left of the connector indicates link speed as defined in the following table:
Table 7. External RJ45 NIC Port LED Definition

LED Color: Green/Amber (Right)
- Off: 10 Mbps
- Amber: 100 Mbps
- Green: 1000 Mbps
LED Color: Green (Left)
- On: Active Connection
- Blinking: Transmit/Receive activity

3.3 Intel® C602-A Chipset Functional Overview
The following sub-sections will provide an overview of the key features and functions of the Intel® C602-A chipset used on the server board. For more comprehensive chipset-specific information, refer to the Intel® C600 Series chipset documents listed in the Reference Document list in Chapter 1.

Figure 16. Functional Block Diagram – Chipset Supported Features and Functions
On the Intel® Server Boards S2400SC, the chipset provides support for the following on-board functions:
- Digital Media Interface (DMI)
- PCI Express* Interface
- Serial ATA (SATA) Controller
- Serial Attached SCSI (SAS)/SATA Controller
- AHCI
- Rapid Storage Technology
- PCI Interface
- Low Pin Count (LPC) interface
- Serial Peripheral Interface (SPI)
- Compatibility Modules (DMA Controller, Timer/Counters, Interrupt Controller)
- Advanced Programmable Interrupt Controller (APIC)
- Universal Serial Bus (USB) Controller
- Gigabit Ethernet Controller
- RTC
- GPIO
- Enhanced Power Management
- Manageability
- System Management Bus (SMBus* 2.0)
- Intel® Active Management Technology (Intel® AMT)
- Integrated NVSRAM controller
- Intel® Virtualization Technology for Direct I/O (Intel® VT-d)
- JTAG Boundary-Scan
- KVM/Serial Over LAN (SOL) Function

3.3.1 Digital Media Interface (DMI)
Digital Media Interface (DMI) is the chip-to-chip connection between the processor and C600
chipset. This high-speed interface integrates advanced priority-based servicing allowing for
concurrent traffic and true isochronous transfer capabilities. Base functionality is completely
software-transparent, permitting current and legacy software to operate normally.
3.3.2 PCI Express* Interface

The C600 chipset provides up to 8 PCI Express* Root Ports, supporting the PCI Express* Base Specification, Revision 2.0. Each Root Port x1 lane supports up to 5 Gb/s bandwidth in each
direction (10 Gb/s concurrent). PCI Express* Root Ports 1-4 or Ports 5-8 can independently be
configured to support four x1s, two x2s, one x2 and two x1s, or one x4 port widths.
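The allowed width combinations for each bank of four root ports, and the resulting peak bandwidth, can be enumerated directly from the description above (a worked example; the list encoding is an assumption):

    # Valid lane-width configurations for Root Ports 1-4 (and, independently,
    # for Ports 5-8); every combination consumes the bank's four lanes.
    PORT_WIDTH_CONFIGS = [
        (1, 1, 1, 1),   # four x1 ports
        (2, 2),         # two x2 ports
        (2, 1, 1),      # one x2 and two x1 ports
        (4,),           # one x4 port
    ]
    assert all(sum(cfg) == 4 for cfg in PORT_WIDTH_CONFIGS)

    # Each x1 lane is 5 Gb/s per direction, so an x4 port peaks at
    # 20 Gb/s per direction (40 Gb/s concurrent) before encoding overhead.
    print(4 * 5, "Gb/s per direction for the x4 configuration")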
3.3.3 Serial ATA (SATA) Controller

The C600 chipset has two integrated SATA host controllers that support independent DMA
operation on up to six ports and supports data transfer rates of up to 6.0 Gb/s (600 MB/s) on up
to two ports (Port 0 and 1 Only) while all ports support rates up to 3.0 Gb/s (300 MB/s) and up to
1.5 Gb/s (150 MB/s). The SATA controller contains two modes of operation – a legacy mode
using I/O space, and an AHCI mode using memory space. Software that uses legacy mode will
not have AHCI capabilities. The C600 chipset supports the Serial ATA Specification, Revision
3.0. The C600 also supports several optional sections of the Serial ATA II: Extensions to Serial ATA 1.0 Specification, Revision 1.0 (AHCI support is required for some elements).
3.3.4 AHCI

The C600 chipset provides hardware support for Advanced Host Controller Interface (AHCI), a
standardized programming interface for SATA host controllers. Platforms supporting AHCI may
take advantage of performance features such as no master/slave designation for SATA
devices—each device is treated as a master—and hardware assisted native command queuing.
AHCI also provides usability enhancements such as Hot-Plug. AHCI requires appropriate
software support (for example, an AHCI driver) and for some features, hardware support in the
SATA device or additional platform hardware.
3.3.5 Intel® Rapid Storage Technology

The C600 chipset provides support for Intel® Rapid Storage Technology, providing both AHCI
(see above for details on AHCI) and integrated RAID functionality. The industry-leading RAID
capability provides high-performance RAID 0, 1, 5, and 10 functionality on up to 6 SATA ports of
the C600 chipset. Matrix RAID support is provided to allow multiple RAID levels to be combined
on a single set of hard drives, such as RAID 0 and RAID 1 on two disks. Other RAID features
include hot-spare support, SMART alerting, and RAID 0 auto replace. Software components
include an Option ROM for pre-boot configuration and boot functionality, a Microsoft Windows*
compatible driver, and a user interface for configuration and management of the RAID capability
of the C600 chipset.
3.3.6 PCI Interface

The C600 chipset PCI interface provides a 33 MHz, Revision 2.3 implementation. The C600
chipset integrates a PCI arbiter that supports up to four external PCI bus masters in addition to
the internal C600 chipset requests. This allows for combinations of up to four PCI down devices
and PCI slots.
3.3.7 Low Pin Count (LPC) Interface

The C600 chipset implements an LPC Interface as described in the LPC 1.1 Specification. The
Low Pin Count (LPC) bridge function of the C600 resides in PCI Device 31: Function 0. In
addition to the LPC bridge interface function, D31:F0 contains other functional units including
DMA, interrupt controllers, timers, power management, system management, GPIO, and RTC.
3.3.8 Serial Peripheral Interface (SPI)

The C600 chipset implements an SPI Interface as an alternative interface for the BIOS flash device. An SPI flash device can be used as a replacement for the FWH, and is required to support Gigabit Ethernet and Intel® Active Management Technology. The C600 chipset supports up to two SPI flash devices with speeds up to 50 MHz, utilizing two chip select pins.

3.3.9 Compatibility Modules (DMA Controller, Timer/Counters, Interrupt Controller)

The DMA controller incorporates the logic of two 82C37 DMA controllers, with seven independently programmable channels. Channels 0–3 are hardwired to 8-bit, count-by-byte transfers, and channels 5–7 are hardwired to 16-bit, count-by-word transfers. Any two of the seven DMA channels can be programmed to support fast Type-F transfers. Channel 4 is reserved as a generic bus master request.
The C600 chipset supports LPC DMA, which is similar to ISA DMA, through the C600 chipset's DMA controller. LPC DMA is handled through the use of the LDRQ# lines from peripherals and special encoding on LAD[3:0] from the host. Single, Demand, Verify, and Increment modes are supported on the LPC interface.

The timer/counter block contains three counters that are equivalent in function to those found in one 82C54 programmable interval timer. These three counters are combined to provide the system timer function and speaker tone. The 14.31818 MHz oscillator input provides the clock source for these three counters.

The C600 chipset provides an ISA-compatible Programmable Interrupt Controller (PIC) that incorporates the functionality of two 82C59 interrupt controllers. The two interrupt controllers are cascaded so that 14 external and two internal interrupts are possible. In addition, the C600 chipset supports a serial interrupt scheme.

3.3.10 Advanced Programmable Interrupt Controller (APIC)

In addition to the standard ISA-compatible Programmable Interrupt Controller (PIC) described in the previous section, the C600 incorporates the Advanced Programmable Interrupt Controller (APIC).

3.3.11 Universal Serial Bus (USB) Controller

The C600 chipset has up to two Enhanced Host Controller Interface (EHCI) host controllers that support USB high-speed signaling. High-speed USB 2.0 allows data transfers up to 480 Mb/s, which is 40 times faster than full-speed USB. The C600 chipset supports up to fourteen USB 2.0 ports. All fourteen ports are high-speed, full-speed, and low-speed capable.
3.3.12 Gigabit Ethernet Controller

The Gigabit Ethernet Controller provides a system interface using a PCI function. The controller
provides a full memory-mapped or IO mapped interface along with a 64 bit address master
support for systems using more than 4 GB of physical memory and DMA (Direct Memory
Addressing) mechanisms for high performance data transfers. Its bus master capabilities enable
the component to process high-level commands and perform multiple operations; this lowers
processor utilization by off-loading communication tasks from the processor. Two large
configurable transmit and receive FIFOs (up to 20 KB each) help prevent data underruns and
overruns while waiting for bus accesses. This enables the integrated LAN controller to transmit
data with minimum interframe spacing (IFS).
The LAN controller can operate at multiple speeds (10/100/1000 Mb/s) and in either full duplex or half duplex mode. In full duplex mode the LAN controller adheres to the IEEE 802.3x Flow Control Specification. Half duplex performance is enhanced by a proprietary collision reduction mechanism.
3.3.13
RTC
3.3.14
GPIO
3.3.15
Enhanced Power Management
3.3.16
Manageability
The C600 chipset contains a Motorola MC146818B-compatible real-time clock with 256 bytes of
battery-backed RAM. The real-time clock performs two key functions: keeping track of the time
of day and storing system data, even when the system is powered down. The RTC operates on
a 32.768 kHz crystal and a 3 V battery. The RTC also supports two lockable memory ranges.
By setting bits in the configuration space, two 8-byte ranges can be locked to read and write
accesses. This prevents unauthorized reading of passwords or other system security
information. The RTC also supports a date alarm that allows for scheduling a wake up event up
to 30 days in advance, rather than just 24 hours in advance.
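For readers who want to see the legacy register interface at work, the following minimal sketch reads the time-of-day registers of an MC146818-compatible RTC through the CMOS index/data ports (0x70/0x71) on a Linux x86 host. It is illustrative only, assumes root privileges for ioperm(), and is not board firmware.

```c
#include <stdio.h>
#include <sys/io.h>

static unsigned char cmos_read(unsigned char reg)
{
    outb(reg, 0x70);          /* select CMOS register */
    return inb(0x71);         /* read its value */
}

int main(void)
{
    if (ioperm(0x70, 2, 1)) { perror("ioperm"); return 1; }

    /* Wait until the RTC is not mid-update (status register A, bit 7). */
    while (cmos_read(0x0A) & 0x80)
        ;

    unsigned char sec  = cmos_read(0x00);
    unsigned char min  = cmos_read(0x02);
    unsigned char hour = cmos_read(0x04);

    /* Values are BCD unless status register B bit 2 is set. */
    printf("RTC time (raw registers): %02x:%02x:%02x\n", hour, min, sec);
    return 0;
}
```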
3.3.14 GPIO

Various general purpose inputs and outputs are provided for custom system design. The
number of inputs and outputs varies depending on the C600 chipset configuration.
3.3.15 Enhanced Power Management

The C600 chipset's power management functions include enhanced clock control and various low-power (suspend) states (for example, Suspend-to-RAM and Suspend-to-Disk). A hardware-based thermal management circuit permits software-independent entrance to low-power states. The C600 chipset contains full support for the Advanced Configuration and Power Interface (ACPI) Specification, Revision 4.0a.
3.3.16 Manageability

The chipset integrates several functions designed to manage the system and lower the total
cost of ownership (TCO) of the system. These system management functions are designed to
report errors, diagnose the system, and recover from system lockups without the aid of an
external microcontroller.
TCO Timer. The chipset’s integrated programmable TCO timer is used to detect system
locks. The first expiration of the timer generates an SMI# that the system can use to
recover from a software lock. The second expiration of the timer causes a system reset
to recover from a hardware lock.
Processor Present Indicator. The chipset looks for the processor to fetch the first
instruction after reset. If the processor does not fetch the first instruction, the chipset will
reboot the system.
ECC Error Reporting. When detecting an ECC error, the host controller has the ability
to send one of several messages to the chipset. The host controller can instruct the
chipset to generate SMI#, NMI, SERR#, or TCO interrupt.
Function Disable. The chipset provides the ability to disable the following integrated
functions: LAN, USB, LPC, SATA, PCI Express* or SMBus*. Once disabled, these
functions no longer decode I/O, memory, or PCI configuration space. Also, no interrupts
or power management events are generated from the disabled functions.
Intruder Detect. The chipset provides an input signal (INTRUDER#) that can be attached
to a switch that is activated by the system case being opened. The chipset can be
programmed to generate an SMI# or TCO interrupt due to an active INTRUDER# signal.
3.3.17 System Management Bus (SMBus* 2.0)
The C600 chipset contains a SMBus* Host interface that allows the processor to communicate
with SMBus* slaves. This interface is compatible with most I2C devices. Special I2C commands
are implemented. The C600 chipset’s SMBus* host controller provides a mechanism for the
processor to initiate communications with SMBus* peripherals (slaves). Also, the C600 chipset
supports slave functionality, including the Host Notify protocol. Hence, the host controller
supports eight command protocols of the SMBus* interface (see System Management Bus (SMBus*) Specification, Version 2.0): Quick Command, Send Byte, Receive Byte, Write
Byte/Word, Read Byte/Word, Process Call, Block Read/Write, and Host Notify.
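As an illustration of one of these command protocols, the sketch below issues an SMBus Read Byte transaction from a Linux host through the kernel's i2c-dev interface. The bus number and slave address (0x48) are placeholder assumptions; on a real server the chipset's SMBus host controller is typically exposed through the i2c-i801 driver.

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>

int main(void)
{
    int fd = open("/dev/i2c-0", O_RDWR);         /* SMBus adapter (assumed) */
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, I2C_SLAVE, 0x48) < 0) {        /* select slave 0x48 */
        perror("I2C_SLAVE"); close(fd); return 1;
    }

    /* SMBus Read Byte protocol: write a command code, then read one byte. */
    union i2c_smbus_data data;
    struct i2c_smbus_ioctl_data args = {
        .read_write = I2C_SMBUS_READ,
        .command    = 0x00,                      /* command/register 0 */
        .size       = I2C_SMBUS_BYTE_DATA,
        .data       = &data,
    };
    if (ioctl(fd, I2C_SMBUS, &args) < 0) { perror("I2C_SMBUS"); close(fd); return 1; }

    printf("register 0x00 = 0x%02x\n", data.byte);
    close(fd);
    return 0;
}
```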
The C600 chipset's SMBus* also implements hardware-based Packet Error Checking (PEC) for data robustness, and the Address Resolution Protocol (ARP) to dynamically assign addresses to all SMBus* devices.
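The PEC byte mentioned above is a CRC-8 over all bytes of the transaction, polynomial x^8 + x^2 + x + 1 (0x07) with initial value 0x00, appended as the last byte on the wire. A minimal software reference, shown with a hypothetical Write Byte message:

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static uint8_t smbus_pec(const uint8_t *buf, size_t len)
{
    uint8_t crc = 0;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    /* Example: Write Byte to slave 0x48 (write address byte 0x90),
     * command 0x01, data 0x5A. PEC covers every byte on the wire. */
    uint8_t msg[] = { 0x90, 0x01, 0x5A };
    printf("PEC = 0x%02x\n", smbus_pec(msg, sizeof msg));
    return 0;
}
```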
3.3.18 Intel® Active Management Technology (Intel® AMT)

Intel® Active Management Technology (Intel® AMT) is the next generation of client
manageability using the wired network. Intel AMT is a set of advanced manageability features
developed as a direct result of IT customer feedback gained through Intel market research. With
the new implementation of System Defense in C600 chipset, the advanced manageability
feature set of Intel AMT is further enhanced.
3.3.19 Integrated NVSRAM Controller

The C600 chipset has an integrated NVSRAM controller that supports up to a 32 KB external
device. The host processor can read and write data to the NVSRAM component.
3.3.20 Intel® Virtualization Technology for Directed I/O (Intel® VT-d)

The C600 chipset provides hardware support for implementation of Intel® Virtualization Technology for Directed I/O (Intel® VT-d). Intel® VT-d consists of technology components that support the virtualization of platforms based on Intel® Architecture Processors. Intel® VT-d enables multiple operating systems and applications to run in independent partitions. A partition behaves like a virtual machine (VM) and provides isolation and protection across partitions. Each partition is allocated its own subset of host physical memory.
3.3.21 JTAG Boundary-Scan

The C600 chipset adds the industry standard JTAG interface and enables Boundary-Scan in
place of the XOR chains used in previous generations of chipsets. Boundary-Scan can be used
to ensure device connectivity during the board manufacturing process. The JTAG interface
allows system manufacturers to improve efficiency by using industry available tools to test the
C600 chipset on an assembled board. Since JTAG is a serial interface, it eliminates the need to
create probe points for every pin in an XOR chain. This eases pin breakout and trace routing
and simplifies the interface between the system and a bed-of-nails tester.
3.3.22 KVM/Serial Over LAN (SOL) Function

These functions support redirection of keyboard, mouse, and text screen to a terminal window
on a remote console. The keyboard, mouse, and text redirection enables the control of the client
machine through the network without the need to be physically near that machine. Text, mouse, and keyboard redirection allows the remote machine to control and configure the client by entering BIOS setup. The KVM/SOL function emulates a standard PCI serial port and redirects the data from the serial port to the management console using the LAN. KVM has additional requirements of internal graphics, and SOL may be used when KVM is not supported.

3.3.23 On-board Serial Attached SCSI (SAS)/Serial ATA (SATA) Support and Options

The Intel® C602-A chipset provides storage support by means of two integrated controllers: AHCI and SCU. By default the server board supports up to 10 SATA ports: two single 6 Gb/s SATA ports routed from the AHCI controller to the two white SATA connectors labeled "SATA_0" and "SATA_1"; four 3 Gb/s SATA ports routed from the AHCI controller to the four black SATA connectors labeled "SATA_2" to "SATA_5"; and four 3 Gb/s SATA ports routed from the SCU to the SFF-8087 miniSAS port labeled "SCU_0".

Note: The miniSAS connector labeled "SCU_1" is NOT functional by default and is only enabled with the addition of an Intel® RAID C600 Upgrade Key. Standard are two embedded software RAID options using the storage ports configured from the SCU only:

Intel® Embedded Server RAID Technology 2 (ESRT2) based on LSI* MegaRAID SW RAID technology, supporting SATA RAID levels 0,1,10
Intel® Rapid Storage Technology (RSTe), supporting SATA RAID levels 0,1,5,10

The server board is capable of supporting additional chipset embedded SAS and RAID options from the SCU controller when configured with one of several available Intel® RAID C600 Upgrade Keys. Upgrade keys install onto a 4-pin connector on the server board labeled "STOR_UPG_KEY". The following table identifies the available upgrade key options and their supported features.

Table 8. Intel® RAID C600 Upgrade Key Options

| Product Code | Color | On-Server Board SATA/SAS Capable Controller | On-Server Board AHCI Capable SATA Controller |
|---|---|---|---|
| No Key | N/A | Intel® RSTe 4 ports SATA R0,1,10,5 or Intel® ESRT2 4 ports SATA R0,1,10 | Intel® RSTe SATA R0,1,10,5 or Intel® ESRT2 SATA R0,1,10 |
| RKSATA4R5 | Black | Intel® RSTe 4 ports SATA R0,1,10,5 or Intel® ESRT2 4 ports SATA R0,1,10,5 | Intel® RSTe SATA R0,1,10,5 or Intel® ESRT2 SATA R0,1,10,5 |
| RKSATA8 | Blue | Intel® RSTe 8 ports SATA R0,1,10,5 or Intel® ESRT2 8 ports SATA R0,1,10 | Intel® RSTe SATA R0,1,10,5 or Intel® ESRT2 SATA R0,1,10 |
| RKSATA8R5 | White | Intel® RSTe 8 ports SATA R0,1,10,5 or Intel® ESRT2 8 ports SATA R0,1,10,5 | Intel® RSTe SATA R0,1,10,5 or Intel® ESRT2 SATA R0,1,10,5 |
| RKSAS4 | Green | Intel® RSTe 4 ports SAS R0,1,10 or Intel® ESRT2 4 ports SAS R0,1,10 | Intel® RSTe SATA R0,1,10,5 or Intel® ESRT2 SATA R0,1,10 |
| RKSAS4R5 | Yellow | Intel® RSTe 4 ports SAS R0,1,10 or Intel® ESRT2 4 ports SAS R0,1,10,5 | Intel® RSTe SATA R0,1,10,5 or Intel® ESRT2 SATA R0,1,10,5 |
| RKSAS8 | Orange | Intel® RSTe 8 ports SAS R0,1,10 or Intel® ESRT2 8 ports SAS R0,1,10 | Intel® RSTe SATA R0,1,10,5 or Intel® ESRT2 SATA R0,1,10 |
| RKSAS8R5 | Purple | Intel® RSTe 8 ports SAS R0,1,10 or Intel® ESRT2 8 ports SAS R0,1,10,5 | Intel® RSTe SATA R0,1,10,5 or Intel® ESRT2 SATA R0,1,10,5 |
Additional information on the on-board RAID features and functionality can be found in the Intel® RAID Software Users Guide (Intel Document Number D29305-018).

The system includes support for two embedded software RAID options:

Intel® Embedded Server RAID Technology 2 (ESRT2) based on LSI* MegaRAID SW RAID technology
Intel® Rapid Storage Technology (RSTe)

Using the <F2> BIOS Setup Utility, accessed during system POST, options are available to enable/disable SW RAID and to select which embedded software RAID option to use.

3.3.23.1 Intel® Embedded Server RAID Technology 2 (ESRT2)

Features of the embedded software RAID option Intel® Embedded Server RAID Technology 2 (ESRT2) include the following:

Based on LSI* MegaRAID Software Stack
Software RAID with system providing memory and CPU utilization
Supported RAID Levels – 0,1,5,10
o 4 & 8 Port SATA RAID 5 support provided with appropriate Intel® RAID C600 Upgrade Key
o 4 & 8 Port SAS RAID 5 support provided with appropriate Intel® RAID C600 Upgrade Key
Maximum drive support = Eight (with or without SAS expander option installed)
Open Source Compliance = Binary Driver (includes Partial Source files) or Open Source using MDRAID layer in Linux*
OS Support = Windows 7*, Windows 2008*, Windows 2003*, RHEL*, SLES*, and other Linux* variants using partial source builds
Utilities = Windows* GUI and CLI, Linux* GUI and CLI, DOS CLI, and EFI CLI

3.3.23.2 Intel® Rapid Storage Technology (RSTe)

Features of the embedded software RAID option Intel® Rapid Storage Technology (RSTe) include the following:

Software RAID with system providing memory and CPU utilization
Supported RAID Levels – 0,1,5,10
o 4 Port SATA RAID 5 available standard (no option key required)
o 8 Port SATA RAID 5 support provided with appropriate Intel® RAID C600 Upgrade Key
o No SAS RAID 5 support
Maximum drive support = 32 (in arrays with 8 port SAS), 16 (in arrays with 4 port SAS), 128 (JBOD)
Open Source Compliance = Yes (uses MDRAID)
OS Support = Windows 7*, Windows 2008*, Windows 2003*, RHEL* 6.2 and later, SLES* 11 w/SP2 and later, VMWare* 5.x
Utilities = Windows* GUI and CLI, Linux* CLI, DOS CLI, and EFI CLI
Uses Matrix Storage Manager for Windows*
MDRAID supported in Linux* (does not require a driver)

Note: No boot drive support for targets attached through a SAS expander card.
The server board utilizes the I/O controller, Graphics Controller, and Baseboard Management
features of the Server Engines* Pilot-III Server Management Controller. The following is an
overview of the features as implemented on the server board from each embedded controller.
Figure 18. Integrated BMC Hardware

3.4.1 Super I/O Controller
The integrated super I/O controller provides support for the following features as implemented
on the server board:
Two Fully Functional Serial Ports, compatible with the 16C550
Serial IRQ Support
Up to 16 Shared direct GPIOs
Serial GPIO support for 80 general purpose inputs and 80 general purpose outputs
available for host processor
Programmable Wake-up Event Support
Plug and Play Register Set
Power Supply Control
Host SPI bridge for system BIOS support
3.4.1.1 Keyboard and Mouse Support

The server board does not support PS/2 interface keyboards and mice. However, the system BIOS recognizes USB specification-compliant keyboards and mice.

3.4.1.2 Wake-up Control

The super I/O contains functionality that allows various events to power on and power off the system.

3.4.2 Graphics Controller and Video Support

The integrated graphics controller provides support for the following features as implemented on the server board:

Integrated Graphics Core with 2D Hardware accelerator
DDR-3 memory interface supporting up to 128 MB of memory, 16 MB allocated to graphics
Supports display resolutions up to 1600 x 1200 16 bpp @ 60 Hz
High speed Integrated 24-bit RAMDAC
Single lane PCI-Express host interface running at Gen 1 speed

The integrated video controller supports all standard IBM VGA modes. The following table shows the 2D modes supported for both CRT and LCD:

Table 9. Video Modes (matrix of supported 2D video modes at 8, 16, 24, and 32 bpp; see the source TPS for the full mode-by-resolution matrix)

** Video resolutions at 1600x1200 and higher are only supported through the external video connector located on the rear I/O section of the server board. Utilizing the optional front panel video connector may result in lower video resolutions.
The server board provides two video interfaces. The primary video interface is accessed using a
standard 15-pin VGA connector found on the back edge of the server board. In addition, video
signals are routed to a 14-pin header labeled “FP_Video” on the leading edge of the server
board, allowing for the option of cabling to a front panel video connector. Attaching a monitor to
the front panel video connector will disable the primary external video connector on the back
edge of the board.
The BIOS supports dual-video mode when an add-in video card is installed.

In the single mode (dual monitor video = disabled), the on-board video controller is disabled when an add-in video card is detected.

In the dual mode (on-board video = enabled, dual monitor video = enabled), the on-board video controller is enabled and is the primary video device. The add-in video card is allocated resources and is considered the secondary video device. The BIOS Setup utility provides options to configure the feature as follows:

Table 10. Video mode

| Setup Option | Settings |
|---|---|
| On-board Video | Enabled / Disabled |
| Dual Monitor Video | Enabled / Disabled (shaded if On-board Video is set to "Disabled") |

3.4.3 Baseboard Management Controller
The server board utilizes the following features of the embedded baseboard management
controller.
IPMI 2.0 Compliant
400MHz 32-bit ARM9 processor with memory management unit (MMU)
Two independent 10/100/1000 Ethernet Controllers with RMII/RGMII support
DDR2/3 16-bit interface with up to 800 MHz operation
12 10-bit ADCs
Fourteen fan tachometers
Eight Pulse Width Modulators (PWM)
Chassis intrusion logic
JTAG Master
Eight I2C interfaces with master-slave and SMBus* timeout support. All interfaces are
SMBus* 2.0 compliant.
Parallel general-purpose I/O Ports (16 direct, 32 shared)
Serial general-purpose I/O Ports (80 in and 80 out)
Three UARTs
Platform Environmental Control Interface (PECI)
Six general-purpose timers
Interrupt controller
Multiple SPI flash interfaces
NAND/Memory interface
Sixteen mailbox registers for communication between the BMC and host
LPC ROM interface
BMC watchdog timer capability
SD/MMC card controller with DMA support
LED support with programmable blink rate controls on GPIOs
Port 80h snooping capability
Secondary Service Processor (SSP), which provides the HW capability of offloading time-critical processing tasks from the main ARM core.
3.4.3.1 Remote Keyboard, Video, Mouse, and Storage (KVMS) Support
USB 2.0 interface for Keyboard, Mouse and Remote storage such as CD/DVD ROM and
floppy
USB 1.1/USB 2.0 interface for PS/2 to USB bridging, remote Keyboard and Mouse
Hardware Based Video Compression and Redirection Logic
Supports both text and Graphics redirection
Hardware assisted Video redirection using the Frame Processing Engine
Direct interface to the Integrated Graphics Controller registers and Frame buffer
Hardware-based encryption engine
3.4.3.2 Integrated BMC Embedded LAN Channel

The Integrated BMC hardware includes two dedicated 10/100 network interfaces. These
interfaces are not shared with the host system. At any time, only one dedicated interface may
be enabled for management traffic. The default active interface is the NIC 1 port.
For these channels, support can be enabled for IPMI-over-LAN and DHCP. For security
reasons, embedded LAN channels have the following default settings:
IP Address: Static.
All users disabled.
4. System Security
4.1 BIOS Password Protection
The BIOS uses passwords to prevent unauthorized tampering with the server setup. Passwords
can restrict entry to the BIOS Setup, restrict use of the Boot Popup menu, and suppress
automatic USB device reordering.
There is also an option to require a Power On password entry in order to boot the system. If the
Power On Password function is enabled in Setup, the BIOS will halt early in POST to request a
password before continuing POST.
Both Administrator and User passwords are supported by the BIOS. An Administrator password
must be installed in order to set the User password. The maximum length of a password is
14 characters. A password can have alphanumeric (a-z, A-Z, 0-9) characters and it is case
sensitive. Certain special characters are also allowed, from the following set:
! @ # $ % ^ & * ( ) - _ + = ?
The Administrator and User passwords must be different from each other. An error message will
be displayed if there is an attempt to enter the same password for one as for the other.
The use of “Strong Passwords” is encouraged, but not required. In order to meet the criteria for
a “Strong Password”, the password entered must be at least 8 characters in length, and must
include at least one each of alphabetic, numeric, and special characters. If a “weak” password is
entered, a popup warning message will be displayed, although the weak password will be
accepted.
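The rules above can be summarized in code. The following sketch checks a candidate password against the stated length, character-set, and strong-password criteria; it is an illustration of the rules as written, not the BIOS implementation.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Special characters permitted by the specification above. */
static const char SPECIALS[] = "!@#$%^&*()-_+=?";

/* Valid: 1-14 characters, each alphanumeric or from SPECIALS. */
static bool password_valid(const char *pw)
{
    size_t len = strlen(pw);
    if (len == 0 || len > 14)
        return false;
    for (size_t i = 0; i < len; i++)
        if (!isalnum((unsigned char)pw[i]) && !strchr(SPECIALS, pw[i]))
            return false;
    return true;
}

/* Strong: at least 8 characters with at least one alphabetic,
 * one numeric, and one special character. */
static bool password_strong(const char *pw)
{
    bool alpha = false, num = false, special = false;
    for (const char *p = pw; *p; p++) {
        if (isalpha((unsigned char)*p)) alpha = true;
        else if (isdigit((unsigned char)*p)) num = true;
        else if (strchr(SPECIALS, *p)) special = true;
    }
    return strlen(pw) >= 8 && alpha && num && special;
}

int main(void)
{
    const char *pw = "Admin_42";   /* example candidate */
    printf("valid=%d strong=%d\n", password_valid(pw), password_strong(pw));
    return 0;
}
```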
Once set, a password can be cleared by changing it to a null string. This requires the
Administrator password, and must be done through BIOS Setup or other explicit means of
changing the passwords. Clearing the Administrator password will also clear the User password.
Alternatively, the passwords can be cleared by using the Password Clear jumper if necessary.
Resetting the BIOS configuration settings to default values (by any method) has no effect on the
Administrator and User passwords.
Entering the User password allows the user to modify only the System Time and System Date in
the Setup Main screen. Other setup fields can be modified only if the Administrator password
has been entered. If any password is set, a password is required to enter the BIOS setup.
The Administrator has control over all fields in the BIOS setup, including the ability to clear the
User password and the Administrator password.
It is strongly recommended that at least an Administrator Password be set, since not having set
a password gives everyone who boots the system the equivalent of Administrative access.
Unless an Administrator password is installed, any User can go into Setup and change BIOS
settings at will.
In addition to restricting access to most Setup fields to viewing only when a User password is
entered, defining a User password imposes restrictions on booting the system. In order to
simply boot in the defined boot order, no password is required. However, the F6 Boot popup
prompts for a password, and can only be used with the Administrator password. Also, when a
User password is defined, it suppresses the USB Reordering that occurs, if enabled, when a
new USB boot device is attached to the system. A User is restricted from booting in anything
other than the Boot Order defined in the Setup by an Administrator.
As a security measure, if a User or Administrator enters an incorrect password three times in a
row during the boot sequence, the system is placed into a halt state. A system reset is required
to exit out of the halt state. This feature makes it more difficult to guess or break a password.
In addition, on the next successful reboot, the Error Manager displays a Major Error code 0048, which also logs a SEL event to alert the authorized user or administrator that a password access failure has occurred.
4.2 Trusted Platform Module (TPM) Support

The Trusted Platform Module (TPM) option is a hardware-based security device that addresses the growing concern over boot process integrity and offers better data protection. TPM protects the
system start-up process by ensuring it is tamper-free before releasing system control to the
operating system. A TPM device provides secured storage to store data, such as security keys
and passwords. In addition, a TPM device has encryption and hash functions. The server board
implements TPM as per TPM PC Client Specifications, revision 1.2, by the Trusted Computing
Group (TCG).
A TPM device is optionally installed onto a high density 14-pin connector labeled “TPM” and is
secured from external software attacks and physical theft. A pre-boot environment, such as the
BIOS and operating system loader, uses the TPM to collect and store unique measurements
from multiple factors within the boot process to create a system fingerprint. This unique
fingerprint remains the same unless the pre-boot environment is tampered with. Therefore, it is
used to compare to future measurements to verify the integrity of the boot process.
After the system BIOS completes the measurement of its boot process, it hands off control to
the operating system loader and in turn to the operating system. If the operating system is TPM-enabled, it compares the BIOS TPM measurements to those of previous boots to make sure the
system was not tampered with before continuing the operating system boot process. Once the
operating system is in operation, it optionally uses TPM to provide additional system and data
security (for example, Microsoft Vista* supports BitLocker* drive encryption).
4.2.1 TPM security BIOS

The BIOS TPM support conforms to the TPM PC Client Specific – Implementation Specification
for Conventional BIOS, version 1.2, and to the TPM Interface Specification, version 1.2. The
BIOS adheres to the Microsoft Vista* BitLocker requirement. The role of the BIOS for TPM
security includes the following:
Measures and stores the boot process in the TPM microcontroller to allow a TPM
enabled operating system to verify system boot integrity.
Produces EFI and legacy interfaces to a TPM-enabled operating system for using TPM.
Produces ACPI TPM device and methods to allow a TPM-enabled operating system to
send TPM administrative command requests to the BIOS.
Verifies operator physical presence. Confirms and executes operating system TPM
administrative command requests.
Provides BIOS Setup options to change TPM security states and to clear TPM
ownership.
For additional details, refer to the TCG PC Client Specific Implementation Specification, the
TCG PC Client Specific Physical Presence Interface Specification, and the Microsoft BitLocker*
Requirement documents.
4.2.2 Physical Presence

Administrative operations to the TPM require TPM ownership or physical presence indication by
the operator to confirm the execution of administrative operations. The BIOS implements the
operator presence indication by verifying the setup Administrator password.
A TPM administrative sequence invoked from the operating system proceeds as follows:
1. User makes a TPM administrative request through the operating system’s security software.
2. The operating system requests the BIOS to execute the TPM administrative command
through TPM ACPI methods and then resets the system.
3. The BIOS verifies the physical presence and confirms the command with the operator.
4. The BIOS executes the TPM administrative command(s), inhibits BIOS Setup entry, and boots directly to the operating system which requested the TPM command(s).
4.2.3 TPM Security Setup Options

The BIOS TPM Setup allows the operator to view the current TPM state and to carry out
rudimentary TPM administrative operations. Performing TPM administrative options through the
BIOS setup requires TPM physical presence verification.
Using BIOS TPM Setup, the operator can turn ON or OFF TPM functionality and clear the TPM
ownership contents. After the requested TPM BIOS Setup operation is carried out, the option
reverts to No Operation.
The BIOS TPM Setup also displays the current state of the TPM, whether TPM is enabled or
disabled and activated or deactivated. Note that while using TPM, a TPM-enabled operating
system or application may change the TPM state independent of the BIOS setup. When an
operating system modifies the TPM state, the BIOS Setup displays the updated TPM state.
The BIOS Setup TPM Clear option allows the operator to clear the TPM ownership key and
allows the operator to take control of the system with TPM. You use this option to clear security
settings for a newly initialized system or to clear a system for which the TPM ownership security
key was lost.
4.2.3.1 Security Screen

To enter the BIOS Setup, press the F2 function key during boot time when the OEM or Intel logo displays. The following message displays on the diagnostics screen and under the Quiet Boot logo screen:

Press <F2> to enter setup

When the Setup is entered, the Main screen displays. The BIOS Setup utility provides the Security screen to enable and set the user and administrative passwords and to lock out the front panel buttons so they cannot be used. The Intel® Server Board S2400SC provides TPM settings through the Security screen. To access this screen from the Main screen (whose top-level menus are Main, Advanced, Security, Server Management, Boot Options, and Boot Manager), select the Security option.

The Security screen reports the Administrator Password Status and User Password Status, displays the current TPM state (for example, <Enabled & Activated> or <Disabled & Deactivated>), and controls TPM operation through an administrative option with the settings [No Operation], [Turn On], [Turn Off], and [Clear Ownership]:

A disabled TPM device will not execute commands that use TPM functions, and TPM security operations will not be available.
An enabled and deactivated TPM is in the same state as a disabled TPM, except that setting of TPM ownership is allowed if not present already.
An enabled and activated TPM executes all commands that use TPM functions, and TPM security operations are available.
[Turn On] - Enables and activates TPM.
[Turn Off] - Disables and deactivates TPM.
[Clear Ownership] - Removes the TPM ownership authentication and returns the TPM to a factory default state.

Note: The BIOS setting returns to [No Operation] on every boot cycle by default.
5. Technology Support

5.1 Intel® Trusted Execution Technology (Intel® TXT)

The Intel® Xeon® Processor E5-4600/2600/2400/1600 Product Families support Intel® Trusted Execution Technology (Intel® TXT), which is a robust security environment designed to help protect against software-based attacks. Intel® Trusted Execution Technology integrates new security features and capabilities into the processor, chipset, and other platform components. When used in conjunction with Intel® Virtualization Technology and Intel® VT for Directed I/O, with an active TPM, Intel® Trusted Execution Technology provides hardware-rooted trust for your virtual applications.

This hardware-rooted security provides a general-purpose, safer computing environment capable of running a wide variety of operating systems and applications to increase the confidentiality and integrity of sensitive information without compromising the usability of the platform.

Intel® Trusted Execution Technology requires a computer system with Intel® Virtualization Technology enabled (both VT-x and VT-d), an Intel® TXT-enabled processor, chipset, and BIOS, Authenticated Code Modules, and an Intel® TXT-compatible measured launched environment (MLE). The MLE could consist of a virtual machine monitor, an OS, or an application. In addition, Intel® TXT requires the system to include a TPM v1.2 as defined by the Trusted Computing Group TPM PC Client Specifications.
5.2 Intel® Virtualization Technology

Intel® Virtualization Technology consists of three components, which are integrated and interrelated but which address different areas of virtualization:

Intel® Virtualization Technology (VT-x) is processor-related and provides capabilities needed to provide hardware assist to a Virtual Machine Monitor (VMM).
Intel® Virtualization Technology for Directed I/O (VT-d) is primarily concerned with virtualizing I/O efficiently in a VMM environment. This would generally be a chipset I/O feature, but in the Second Generation Intel® Core™ Processor Family there is an Integrated I/O unit embedded in the processor, and the IIO is also enabled for VT-d.
Intel® Virtualization Technology for Connectivity (VT-c) is primarily concerned with I/O hardware assist features, complementary to but independent of VT-d.

VT-x is designed to support multiple software environments sharing the same hardware resources. Each software environment may consist of an OS and applications. The Intel® Virtualization Technology features can be enabled or disabled in the BIOS setup. The default behavior is disabled.

Intel® VT-d is supported jointly by the Intel® Xeon® Processor E5-4600/2600/2400/1600 Product Families and the C600 chipset. Both support DMA remapping from inbound PCI Express* memory Guest Physical Address (GPA) to Host Physical Address (HPA). PCI devices are directly assigned to a virtual machine, leading to a robust and efficient virtualization.

The Intel® S4600/S2600/S2400/S1600/S1400 Server Board Family BIOS publishes the DMAR table in the ACPI Tables. For each DMA Remapping Engine in the platform, exactly one DRHD (DMA Remapping Hardware Unit Definition) structure is added to the DMAR. The DRHD structure in turn contains a Device Scope structure that describes the PCI endpoints and/or sub-hierarchies handled by the particular DMA Remapping Engine.

Similarly, there are reserved memory regions typically allocated by the BIOS at boot time. The BIOS marks these regions as either reserved or unavailable in the system address memory map reported to the OS. Some of these regions can be a target of DMA requests from one or more devices in the system while the OS or executive is active. The BIOS reports each such memory region using exactly one RMRR (Reserved Memory Region Reporting) structure in the DMAR. Each RMRR has a Device Scope listing the devices in the system that can cause a DMA request to the region.

For more information on the DMAR table and the DRHD entry format, refer to the Intel® Virtualization Technology for Directed I/O Architecture Specification. For more general information about VT-x, VT-d, and VT-c, a good reference is the Enabling Intel® Virtualization Technology Features and Benefits White Paper.
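The DRHD and RMRR structures just described have fixed headers defined in the Intel® VT-d architecture specification. The sketch below transcribes those headers as packed C structs for illustration; consult the specification for the authoritative layouts.

```c
#include <stdio.h>
#include <stdint.h>

#pragma pack(push, 1)
struct dmar_drhd {            /* DMA Remapping Hardware Unit Definition */
    uint16_t type;            /* 0 = DRHD */
    uint16_t length;          /* length including device scope entries */
    uint8_t  flags;           /* bit 0: INCLUDE_PCI_ALL */
    uint8_t  reserved;
    uint16_t segment;         /* PCI segment number */
    uint64_t register_base;   /* remapping hardware register base address */
    /* Device Scope entries follow. */
};

struct dmar_rmrr {            /* Reserved Memory Region Reporting */
    uint16_t type;            /* 1 = RMRR */
    uint16_t length;
    uint16_t reserved;
    uint16_t segment;
    uint64_t base_address;    /* reserved memory region base */
    uint64_t limit_address;   /* reserved memory region limit */
    /* Device Scope entries follow. */
};
#pragma pack(pop)

int main(void)
{
    printf("DRHD header %zu bytes, RMRR header %zu bytes\n",
           sizeof(struct dmar_drhd), sizeof(struct dmar_rmrr));  /* 16, 24 */
    return 0;
}
```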
5.3 Intel® Intelligent Power Node Manager

Data centers are faced with power and cooling challenges that are driven by increasing numbers of servers deployed and server density, in the face of several data center power and cooling constraints. In this type of environment, Information Technology (IT) needs the ability to monitor actual platform power consumption and control power allocation to servers and racks in order to solve specific data center problems, including the issues listed in the following table.

Table 12. Intel® Intelligent Power Node Manager

| IT Challenge | Requirement |
|---|---|
| Over-allocation of power | Ability to monitor actual power consumption; control capability that can maintain a power budget to enable dynamic power allocation to each server |
| Under-population of rack space | Control capability that can maintain a power budget to enable increased rack population |
| High energy costs | Control capability that can maintain a power budget to ensure that a set energy cost can be achieved |
| Capacity planning | Ability to monitor actual power consumption to enable power usage modeling over time and a given planning period; ability to understand cooling demand from a temperature and airflow perspective |
| Detection and correction of hot spots | Control capability that reduces platform power consumption to protect a server in a hot spot; ability to monitor server inlet temperatures to enable greater rack utilization in areas with adequate cooling |

The requirements listed above are those that are addressed by the C600 chipset Management Engine (ME) and Intel® Intelligent Power Node Manager (NM) technology. The ME/NM combination is a power and thermal control capability on the platform, which exposes external interfaces that allow IT (through external management software) to query the ME about platform power capability and consumption and thermal characteristics, and to specify policy directives (for example, set a platform power budget).

Node Manager (NM) is a platform-resident technology that enforces power capping and thermal-triggered power capping policies for the platform. These policies are applied by exploiting subsystem knobs (such as processor P and T states) that can be used to control power consumption. NM enables data center power management by exposing an external interface to management software through which platform policies can be specified. It also implements specific data center power management usage models such as power limiting and thermal monitoring.

The NM feature is implemented by a complementary architecture utilizing the ME, BMC, BIOS, and an ACPI-compliant OS. The ME provides the NM policy engine and power control/limiting functions (referred to as Node Manager or NM) while the BMC provides the external LAN link by which external management software can interact with the feature. The BIOS provides system power information utilized by the NM algorithms and also exports ACPI Source Language (ASL) code used by OS-Directed Power Management (OSPM) for negotiating processor P and T state changes for power limiting. PMBus*-compliant power supplies provide the capability to monitor input power consumption, which is necessary to support NM.

Below are some of the applications of Intel® Intelligent Power Node Manager technology:
Platform Power Monitoring and Limiting: The ME/NM monitors platform power consumption and holds average power over a duration. It can be queried to return actual power at any given instance. The power limiting capability allows external
management software to address key IT issues by setting a power budget for each
server. For example, if there is a physical limit on the power available in a room, then IT
can decide to allocate power to different servers based on their usage – servers running
critical systems can be allowed more power than servers that are running less critical
workload.
Inlet Air Temperature Monitoring: The ME/NM monitors server inlet air temperatures
periodically. If there is an alert threshold in effect, then ME/NM issues an alert when the
inlet (room) temperature exceeds the specified value. The threshold value can be set by
policy.
Memory Subsystem Power Limiting: The ME/NM monitors memory power consumption. Memory power consumption is estimated using average bandwidth utilization information.
Processor Power Monitoring and Limiting: The ME/NM monitors processor or socket power consumption and holds average power over a duration. It can be queried to return actual power at any given instant. The monitoring process of the ME will be used to limit the processor power consumption through processor P-states and dynamic core allocation.
Core allocation at boot time: Restrict the number of cores for OS/VMM use by limiting
how many cores are active at boot time. After the cores are turned off, the CPU will limit
how many working cores are visible to BIOS and OS/VMM. The cores that are turned off
cannot be turned on dynamically after the OS has started. It can be changed only at the
next system reboot.
Core allocation at run-time: This particular use case provides a higher level processor
power control mechanism to a user at run-time, after booting. An external agent can
dynamically use or not use cores in the processor subsystem by requesting ME/NM to
control them, specifying the number of cores to use or not use.
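As a rough illustration of the "average power over a duration" bookkeeping mentioned above, the sketch below keeps a ring buffer of power samples and compares the rolling average against a budget. The window size, sample values, and budget are invented for the example and do not reflect ME firmware internals.

```c
#include <stdio.h>

#define WINDOW 60                      /* averaging window, seconds (assumed) */

struct power_monitor {
    double samples[WINDOW];
    int    count, head;
    double sum;
};

static void pm_add_sample(struct power_monitor *pm, double watts)
{
    if (pm->count == WINDOW)
        pm->sum -= pm->samples[pm->head];   /* drop the oldest sample */
    else
        pm->count++;
    pm->samples[pm->head] = watts;
    pm->sum += watts;
    pm->head = (pm->head + 1) % WINDOW;
}

static double pm_average(const struct power_monitor *pm)
{
    return pm->count ? pm->sum / pm->count : 0.0;
}

int main(void)
{
    struct power_monitor pm = {0};
    double budget_watts = 350.0;            /* example policy budget */

    for (int s = 0; s < 120; s++)
        pm_add_sample(&pm, 300.0 + (s % 10));   /* synthetic 1 Hz samples */

    printf("avg=%.1f W, over budget=%s\n", pm_average(&pm),
           pm_average(&pm) > budget_watts ? "yes" : "no");
    return 0;
}
```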
5.3.1 Hardware Requirements

NM is supported only on platforms that have the NM FW functionality loaded and enabled on the Management Engine (ME) in the SSB and that have a BMC present to support the external LAN interface to the ME. NM power limiting features require a means for the ME to monitor input power consumption for the platform. This capability is generally provided by means of PMBus*-compliant power supplies, although an alternative model using a simpler SMBus* power monitoring device is possible (there is potential loss in accuracy and responsiveness using non-PMBus* devices). The NM SmaRT/CLST feature does specifically require PMBus*-compliant power supplies as well as additional hardware on the baseboard.
6. Platform Management Functional Overview
Platform management functionality is supported by several hardware and software components
integrated on the server board that work together to control system functions, monitor and report
system health, and control various thermal and performance features in order to maintain (when
possible) server functionality in the event of component failure and/or environmentally stressed
conditions.
This chapter provides a high level overview of the platform management features and
functionality implemented on the server board. For more in depth and design level Platform
Management information, please reference the BMC Core Firmware External Product Specification (EPS) and the BIOS Core External Product Specification (EPS) for Intel® Server products based on the Intel® Xeon® processor E5-2400 product families.

6.1 Baseboard Management Controller (BMC) Firmware Feature Support
The following sections outline features that the integrated BMC firmware can support. Support
and utilization for some features is dependent on the server platform in which the server board
is integrated and any additional system level components and options that may be installed.
6.1.1 IPMI 2.0 Features

Baseboard management controller (BMC)
IPMI Watchdog timer
Messaging support, including command bridging and user/session support
Chassis device functionality, including power/reset control and BIOS boot flags support
Event receiver device: The BMC receives and processes events from other platform
subsystems.
Field Replaceable Unit (FRU) inventory device functionality: The BMC supports access
to system FRU devices using IPMI FRU commands.
System Event Log (SEL) device functionality: The BMC supports and provides access to
a SEL.
Sensor Data Record (SDR) repository device functionality: The BMC supports storage
and access of system SDRs.
Sensor device and sensor scanning/monitoring: The BMC provides IPMI management of
sensors. It polls sensors to monitor and report system health.
IPMI interfaces
o Host interfaces include system management software (SMS) with receive
message queue support, and server management mode (SMM)
o IPMB interface
o LAN interface that supports the IPMI-over-LAN protocol (RMCP, RMCP+)
Serial-over-LAN (SOL)
ACPI state synchronization: The BMC tracks ACPI state changes that are provided by
the BIOS.
BMC self test: The BMC performs initialization and run-time self-tests and makes results
available to external entities.
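Many of the interfaces listed above carry IPMI messages framed per the IPMB protocol, which protects each message with two 2's-complement checksums so that the covered bytes sum to zero mod 256. A small sketch framing a Get Device ID request (a standard IPMI command) to the BMC at slave address 0x20; illustrative, not BMC firmware.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static uint8_t ipmi_checksum(const uint8_t *b, size_t n)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += b[i];
    return (uint8_t)(-sum);            /* 2's complement */
}

int main(void)
{
    uint8_t msg[7];
    msg[0] = 0x20;                     /* rsSA: responder (BMC) address   */
    msg[1] = 0x06 << 2 | 0x0;          /* NetFn (App) << 2 | rsLUN        */
    msg[2] = ipmi_checksum(msg, 2);    /* header checksum                 */
    msg[3] = 0x81;                     /* rqSA: requester address         */
    msg[4] = 0x00 << 2 | 0x0;          /* rqSeq << 2 | rqLUN              */
    msg[5] = 0x01;                     /* Cmd: Get Device ID              */
    msg[6] = ipmi_checksum(msg + 3, 3);/* data checksum                   */

    for (size_t i = 0; i < sizeof msg; i++)
        printf("%02x ", msg[i]);
    printf("\n");
    return 0;
}
```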
See also the Intelligent Platform Management Interface Specification, Second Generation, v2.0.

6.1.2 Non-IPMI Features

The BMC supports the following non-IPMI features.
Fault resilient booting (FRB): FRB2 is supported by the watchdog timer functionality.
Enable/Disable of System Reset Due to CPU Errors
Chassis intrusion detection
Fan speed control
Fan redundancy monitoring and support
Hot-swap fan support
Power Supply Fan Sensors
System Airflow Monitoring
Exit Air Temperature Monitoring
Acoustic management: Support for multiple fan profiles
Ethernet Controller Thermal Monitoring
Global Aggregate Temperature Margin Sensor
Platform environment control interface (PECI) thermal management support
Memory Thermal Management
DIMM temperature monitoring: New sensors and improved acoustic management using
closed-loop fan control algorithm taking into account DIMM temperature readings.
Power supply redundancy monitoring and support
Power unit management: Support for power unit sensor. The BMC handles power-good
dropout conditions.
Intel® Intelligent Power Node Manager support
Signal testing support: The BMC provides test commands for setting and getting platform signal states.
The BMC generates diagnostic beep codes for fault conditions.
System GUID storage and retrieval
Front panel management: The BMC controls the system status LED and chassis ID
LED. It supports secure lockout of certain front panel functionality and monitors button
presses. The chassis ID LED is turned on using a front panel button or a command.
Local Control Display Panel support
Power state retention
Power fault analysis
Intel® Light-Guided Diagnostics
Address Resolution Protocol (ARP): The BMC sends and responds to ARPs (supported
on embedded NICs).
Dynamic Host Configuration Protocol (DHCP): The BMC performs DHCP (supported on
embedded NICs).
E-mail alerting
Embedded web server
o Support for embedded web server UI in Basic Manageability feature set.
o Human-readable SEL
o Additional system configurability
o Additional system monitoring capability
o Enhanced on-line help
Integrated KVM
Integrated Remote Media Redirection
Local Directory Access Protocol (LDAP) support
Sensor and SEL logging additions/enhancements (for example, additional thermal
monitoring capability)
SEL Severity Tracking and the Extended SEL
Embedded platform debug feature which allows capture of detailed data for later
analysis.
Provisioning and inventory enhancements:
o Inventory data/system information export (partial SMBIOS table)
DCMI 1.1 compliance (product-specific).
Management support for PMBus* rev1.2 compliant power supplies
Energy Star Server Support
Smart Ride Through (SmaRT)/Closed Loop System Throttling (CLST)
Power Supply Cold Redundancy
Power Supply FW Update
Power Supply Compatibility Check
BMC FW reliability enhancements:
o Redundant BMC boot blocks to avoid the possibility of a corrupted boot block resulting in a scenario that prevents a user from updating the BMC.
o BMC System Management Health Monitoring
6.1.3 New Manageability Features

Intel® S1400/S1600/S2400/S2600 Server Platforms offer a number of changes and additions to the manageability features that are supported on the previous generation of servers. The following is a list of the more significant changes that are common to this generation of Integrated BMC-based Intel® Server boards:

Sensor and SEL logging additions/enhancements (for example, additional thermal monitoring capability)
SEL Severity Tracking and the Extended SEL
Embedded platform debug feature which allows capture of detailed data for later analysis.
Provisioning and inventory enhancements:
o Inventory data/system information export (partial SMBIOS table)
Enhancements to fan speed control.
DCMI 1.1 compliance (product-specific).
Support for embedded web server UI in Basic Manageability feature set.
Enhancements to embedded web server
o Human-readable SEL
o Additional system configurability
o Additional system monitoring capability
o Enhanced on-line help
Enhancements to KVM redirection
o Support for higher resolution
Support for EU Lot6 compliance
Management support for PMBus* rev1.2 compliant power supplies
BMC Data Repository (Managed Data Region Feature)
Local Control Display Panel
System Airflow Monitoring
Exit Air Temperature Monitoring
Ethernet Controller Thermal Monitoring
Global Aggregate Temperature Margin Sensor
Memory Thermal Management
Power Supply Fan Sensors
Energy Star Server Support
Smart Ride Through (SmaRT)/Closed Loop System Throttling (CLST)
Power Supply Cold Redundancy
Power Supply FW Update
Power Supply Compatibility Check
BMC FW reliability enhancements:
o Redundant BMC boot blocks to avoid the possibility of a corrupted boot block resulting in a scenario that prevents a user from updating the BMC.
o BMC System Management Health Monitoring

6.2 Basic and Advanced Features

The following table lists basic and advanced feature support. Individual features may vary by platform. See the appropriate Platform Specific EPS addendum for more information.

Table 13. Basic and Advanced Features

| Feature | Basic | Advanced |
|---|---|---|
| IPMI 2.0 Feature Support | X | X |
| In-circuit BMC Firmware Update | X | X |
| FRB 2 | X | X |
| Chassis Intrusion Detection | X | X |
| Fan Redundancy Monitoring | X | X |
| Hot-Swap Fan Support | X | X |
| Acoustic Management | X | X |
| Diagnostic Beep Code Support | X | X |
| Power State Retention | X | X |
| ARP/DHCP Support | X | X |
| PECI Thermal Management Support | X | X |
| E-mail Alerting | X | X |
| Embedded Web Server | X | X |
| SSH Support | X | X |
| Integrated KVM | | X |
| Integrated Remote Media Redirection | | X |
| Lightweight Directory Access Protocol (LDAP) | X | X |
| Intel® Intelligent Power Node Manager Support | X | X |
| SMASH CLP | X | X |
6.3 Integrated BMC Hardware: Emulex* Pilot III
6.3.1 Emulex* Pilot III Baseboard Management Controller Functionality
The Integrated BMC is provided by an embedded ARM9 controller and associated peripheral
functionality that is required for IPMI-based server management. Firmware usage of these
hardware features is platform dependent.
The following is a summary of the Integrated BMC management hardware features that
comprise the BMC:
400MHz 32-bit ARM9 processor with memory management unit (MMU)
Two independent 10/100/1000 Ethernet Controllers with Reduced Media Independent
Interface (RMII)/ Reduced Gigabit Media Independent Interface (RGMII) support
DDR2/3 16-bit interface with up to 800 MHz operation
16 10-bit ADCs
Sixteen fan tachometers
Eight Pulse Width Modulators (PWM)
Chassis intrusion logic
JTAG Master
Eight I2C interfaces with master-slave and SMBus* timeout support. All interfaces are
SMBus* 2.0 compliant.
Parallel general-purpose I/O Ports (16 direct, 32 shared)
Serial general-purpose I/O Ports (80 in and 80 out)
Three UARTs
Platform Environmental Control Interface (PECI)
Six general-purpose timers
Interrupt controller
Multiple Serial Peripheral Interface (SPI) flash interfaces
NAND/Memory interface
Sixteen mailbox registers for communication between the BMC and host
LPC ROM interface
BMC watchdog timer capability
SD/MMC card controller with DMA support
LED support with programmable blink rate controls on GPIOs
Port 80h snooping capability
Secondary Service Processor (SSP), which provides the HW capability of offloading time-critical processing tasks from the main ARM core.

Emulex* Pilot III contains an integrated SIO, KVMS subsystem, and graphics controller with the following features:

6.4 Advanced Configuration and Power Interface (ACPI)

The server board has support for the following ACPI states:

Table 14. ACPI Power States

| State | Supported | Description |
|---|---|---|
| S0 | Yes | Working. The front panel power LED is on (not controlled by the BMC). The fans spin at the normal speed, as determined by sensor inputs. Front panel buttons work normally. |
| S1 | Yes | Sleeping. Hardware context is maintained; equates to processor and chipset clocks being stopped. The front panel power LED blinks at a rate of 1 Hz with a 50% duty cycle (not controlled by the BMC). The watchdog timer is stopped. The power, reset, front panel NMI, and ID buttons are unprotected. Fan speed control is determined by available SDRs; fans may be set to a fixed state, or basic fan management can be applied. The BMC detects that the system has exited the ACPI S1 sleep state when the BIOS SMI handler notifies it. |
| S2 | No | Not supported. |
| S3 | No | Supported only on Workstation platforms. See appropriate Platform Specific Information for more information. |
| S4 | No | Not supported. |
| S5 | Yes | Soft off. The front panel buttons are not locked. The fans are stopped. The power-up process goes through the normal boot process. The power, reset, front panel NMI, and ID buttons are unlocked. |

6.5 Power Control Sources

The server board supports several power control sources which can initiate a power-up or power-down activity.

Table 15. Power Control Initiators

| Source | External Signal Name or Internal Subsystem | Capabilities |
|---|---|---|
| Power button | Front panel power button | Turns power on or off |
| BMC watchdog timer | Internal BMC timer | Turns power off, or power cycle |
| Command | Routed through command processor | Turns power on or off, or power cycle |
| Power state retention | Implemented by means of BMC internal logic | Turns power on when AC power returns |
| Chipset | Sleep S4/S5 signal (same as POWER_ON) | Turns power on or off |
| CPU Thermal | CPU Thermtrip | Turns power off |
| WOL (Wake On LAN) | LAN | Turns power on |

6.6 BMC Watchdog
The BMC FW is increasingly called upon to perform system functions that are time-critical in
that failure to provide these functions in a timely manner can result in system or component
damage. Intel® S1400/S1600/S2400/S2600/S4600 Server Platforms introduce a BMC watchdog feature to provide a safe-guard against this scenario by providing an automatic recovery
mechanism. It also can provide automatic recovery of functionality that has failed due to a fatal
FW defect triggered by a rare sequence of events or a BMC hang due to some type of HW
glitch (for example, power).
This feature is comprised of a set of capabilities whose purpose is to detect misbehaving
subsections of BMC firmware, the BMC CPU itself, or HW subsystems of the BMC component,
and to take appropriate action to restore proper operation. The action taken is dependent on the
nature of the detected failure and may result in a restart of the BMC CPU, one or more BMC
HW subsystems, or a restart of malfunctioning FW subsystems.
The BMC watchdog feature will only allow up to three resets of the BMC CPU (such as HW
reset) or entire FW stack (such as a SW reset) before giving up and remaining in the uBOOT
code. This count is cleared upon cycling of power to the BMC or upon continuous operation of
the BMC without a watchdog-generated reset occurring for a period of > 30 minutes. The BMC
FW logs a SEL event indicating that a watchdog-generated BMC reset (either soft or hard reset)
has occurred. This event may be logged after the actual reset has occurred. Refer to the sensor section for details of the related sensor definition. The BMC will also indicate a degraded system status on the Front Panel Status LED after a BMC HW reset or FW stack reset. This
state (which follows the state of the associated sensor) will be cleared upon system reset or (AC
or DC) power cycle.
Note: A reset of the BMC may result in the following system degradations that will require a
system reset or power cycle to correct:
1. Potentially, there will be an incorrect ACPI Power State reported by the BMC.
2. Reversion of temporary test modes for the BMC back to normal operational modes.
3. FP status LED and DIMM fault LEDs may not reflect BIOS detected errors.
6.7 Fault Resilient Booting (FRB)

Fault resilient booting (FRB) is a set of BIOS and BMC algorithms and hardware support that
allow a multiprocessor system to boot even if the bootstrap processor (BSP) fails. Only FRB2 is
supported using watchdog timer commands.
FRB2 refers to the FRB algorithm that detects system failures during POST. The BIOS uses the
BMC watchdog timer to back up its operation during POST. The BIOS configures the watchdog
timer to indicate that the BIOS is using the timer for the FRB2 phase of the boot operation.
After the BIOS has identified and saved the BSP information, it sets the FRB2 timer use bit and
loads the watchdog timer with the new timeout interval.
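For a concrete view of the command involved, the sketch below assembles the data bytes of an IPMI 2.0 Set Watchdog Timer request (NetFn App 0x06, Cmd 0x24) configured for FRB2 with a hard-reset timeout action. The countdown value is an arbitrary example, not the BIOS's actual FRB2 interval.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t countdown_100ms = 6 * 60 * 10;   /* example: 6 minutes, 100 ms units */

    uint8_t req[6];
    req[0] = 0x01;                    /* timer use: BIOS FRB2                */
    req[1] = 0x01;                    /* timeout action: hard reset          */
    req[2] = 0x00;                    /* pre-timeout interval: none          */
    req[3] = 0x02;                    /* clear FRB2 expiration flag (bit 1)  */
    req[4] = countdown_100ms & 0xff;  /* initial countdown, LS byte          */
    req[5] = countdown_100ms >> 8;    /* initial countdown, MS byte          */

    printf("NetFn 0x06, Cmd 0x24, data:");
    for (int i = 0; i < 6; i++) printf(" %02x", req[i]);
    printf("\n");
    return 0;
}
```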
If the watchdog timer expires while the watchdog use bit is set to FRB2, the BMC (if so
configured) logs a watchdog expiration event showing the FRB2 timeout in the event data bytes.
The BMC then hard resets the system, assuming the BIOS-selected reset as the watchdog
timeout action.
The BIOS is responsible for disabling the FRB2 timeout before initiating the option ROM scan
and before displaying a request for a boot password. If the processor fails and causes an FRB2
timeout, the BMC resets the system.
The BIOS gets the watchdog expiration status from the BMC. If the status shows an expired
FRB2 timer, the BIOS enters the failure in the system event log (SEL). In the OEM bytes entry
in the SEL, the last POST code generated during the previous boot attempt is written. FRB2
failure is not reflected in the processor status sensor value.
The FRB2 failure does not affect the front panel LEDs.
6.8 Sensor Monitoring

The BMC monitors system hardware and reports system health. Some of the sensors include those for monitoring:
Component, board, and platform temperatures
Board and platform voltages
System fan presence and tach
Chassis intrusion
Front Panel NMI
Front Panel Power and System Reset Buttons
SMI timeout
Processor errors
The information gathered from physical sensors is translated into IPMI sensors as part of the
“IPMI Sensor Model”. The BMC also reports various system state changes by maintaining
virtual sensors that are not specifically tied to physical hardware.
See Appendix B – Integrated BMC Sensor Tables for additional sensor information.
6.9 Field Replaceable Unit (FRU) Inventory Device

The BMC implements the interface for logical FRU inventory devices as specified in the
Intelligent Platform Management Interface Specification, Version 2.0. This functionality provides
commands used for accessing and managing the FRU inventory information. These commands
can be delivered through all interfaces.
The BMC provides FRU device command access to its own FRU device and to the FRU devices throughout the server. The FRU device ID mapping is defined in the Platform Specific Information. The BMC controls the mapping of the FRU device ID to the physical device.

6.10 System Event Log (SEL)
The BMC implements the system event log as specified in the Intelligent Platform Management Interface Specification, Version 2.0. The SEL is accessible regardless of the system power state
through the BMC's in-band and out-of-band interfaces.
The BMC allocates 65,502 bytes (approximately 64 KB) of non-volatile storage space to store
system events. The SEL timestamps may not be in order. Up to 3,639 SEL records can be
stored at a time. Any command that results in an overflow of the SEL beyond the allocated
space is rejected with an “Out of Space” IPMI completion code (C4h).
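As a quick consistency check on the figures above (an editorial illustration, not TPS content):

```python
# Back-of-envelope check on the quoted SEL capacity.
sel_bytes, sel_records = 65_502, 3_639
print(sel_bytes / sel_records)   # -> 18.0 bytes of storage per record
# A standard IPMI SEL record is 16 bytes, so roughly 2 bytes of per-record
# overhead appear to be reserved - an inference, not a statement in this TPS.
```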
Events logged to the SEL can be viewed using Intel’s SELVIEW utility, Embedded Web Server,
and Active System Console.
6.11 System Fan Management
The BMC controls and monitors the system fans. Each fan is associated with a fan speed
sensor that detects fan failure and may also be associated with a fan presence sensor for hot-swap support. For redundant fan configurations, the fan failure and presence status determines
the fan redundancy sensor state.
The system fans are divided into fan domains, each of which has a separate fan speed control
signal and a separate configurable fan control policy. A fan domain can have a set of
temperature and fan sensors associated with it. These are used to determine the current fan
domain state.
A fan domain has three states: sleep, nominal, and boost. The sleep and boost states have
fixed (but configurable through OEM SDRs) fan speeds associated with them. The nominal
state has a variable speed determined by the fan domain policy. An OEM SDR record is used to
configure the fan domain policy.
System fan speeds are controlled through pulse width modulation (PWM) signals, which are
driven separately for each domain by integrated PWM hardware. Fan speed is changed by
adjusting the duty cycle, which is the percentage of time the signal is driven high in each pulse.
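The duty-cycle relationship can be illustrated with a short sketch; the 25 kHz PWM frequency is an assumed example value, not a figure from this TPS.

```python
# Hypothetical illustration of PWM duty-cycle control: the fraction of each
# pulse period the signal is held high determines the fan speed.
PWM_FREQ_HZ = 25_000  # assumed example frequency, not a TPS value

def high_time_us(duty_pct: float) -> float:
    period_us = 1_000_000 / PWM_FREQ_HZ
    return period_us * max(0.0, min(100.0, duty_pct)) / 100.0

print(high_time_us(60))  # 60% duty -> signal high 24.0 us of each 40 us period
```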
6.11.1 Thermal and Acoustic Management
The S2400SC offers multiple thermal and acoustic management features to maintain
comprehensive thermal protection as well as intelligent fan speed control. These features can be
adjusted in the BIOS Setup interface under BIOS > Advanced > System Acoustic and Performance Configuration.
This feature refers to enhanced fan management to keep the system optimally cooled while
reducing the amount of noise generated by the system fans. Aggressive acoustics standards
might require a trade-off between fan speed and system performance parameters that
contribute to the cooling requirements, primarily memory bandwidth. The BIOS, BMC, and
SDRs work together to provide control over how this trade-off is determined.
6.11.2 Setting Throttling Mode
Select the most appropriate memory thermal throttling mechanism for the memory sub-system from
[Auto], [DCLTT], [SCLTT], and [SOLTT]. The default setting is [Auto].
[Auto] – The BIOS automatically detects and identifies the appropriate thermal throttling
mechanism based on DIMM type, airflow input, and DIMM sensor availability.
[DCLTT] – Dynamic Closed Loop Thermal Throttling: for the sensor-on-DIMM (SOD) DIMM with
system airflow input.
[SCLTT] – Static Closed Loop Thermal Throttling: for the SOD DIMM without system airflow
input.
[SOLTT] – Static Open Loop Thermal Throttling: for DIMMs without a sensor on DIMM (SOD).
This capability requires the BMC to access temperature sensors on the individual memory
DIMMs. Additionally, closed-loop thermal throttling is only supported with buffered DIMMs.

6.11.3 Altitude
Select the altitude at which the system is deployed from the [300m or less], [301m-900m],
[901m-1500m], and [Above 1500m] options. A lower altitude selection can lead to potential
thermal risk, while a higher altitude selection provides better cooling at the cost of undesired
acoustics and fan power consumption. If the altitude is known, the higher altitude selection is
recommended in order to provide sufficient cooling. The default setting is [301m-900m].

6.11.4 Set Fan Profile
The [Performance] and [Acoustic] fan profiles are selected in BIOS > Advanced > System
Acoustic and Performance Configuration > Set Fan Profile. Acoustic mode offers the best
acoustic experience and appropriate cooling capability covering mainstream add-in cards and
the majority of add-in cards with 100 LFM thermal requirements. Performance mode is designed
to provide sufficient cooling capability covering all kinds of add-in cards on the market; for any
add-in card requiring more than 100 LFM, Performance mode must be selected. The default
setting is [Performance].

6.11.5 Fan PWM Offset
This feature is reserved for manual adjustment of the minimum fan speed curves. The valid
range is [0 to 100], which adds 0% to 100% PWM to the minimum fan speed. This feature is
valid when Quiet Fan Idle Mode is in the Enabled state. The default setting is [0].

6.11.6 Quiet Fan Idle Mode
This feature can be [Enabled] or [Disabled]. If enabled, the fans either stop or shift to a lower
speed when the aggregate sensor temperatures are satisfied, indicating the system is at idle or
light thermal loading conditions. When the aggregate sensor temperatures are not satisfied, the
fans shift back to the normal control curves. If disabled, the fans never stop or shift to a lower
speed regardless of whether the aggregate sensor temperatures are satisfied. The default
setting is [Disabled].
6.11.7 Fan Profiles
The server system supports multiple fan control profiles to support acoustic targets and
American Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE)
compliance. The BIOS Setup utility can be used to choose between meeting the target acoustic
level or enhanced system performance. This is accomplished through fan profiles. The BMC
supports eight fan profiles, numbered from 0 to 7.

Table 16. Fan Profiles

Type   Profile   Details
OLTT   0         Acoustic, 300M altitude
OLTT   1         Performance, 300M altitude
OLTT   2         Acoustic, 900M altitude
OLTT   3         Performance, 900M altitude
OLTT   4         Acoustic, 1500M altitude
OLTT   5         Performance, 1500M altitude
OLTT   6         Acoustic, 3000M altitude
OLTT   7         Performance, 3000M altitude
CLTT   0         Acoustic, 300M altitude
CLTT   1         Performance, 300M altitude
CLTT   2         Acoustic, 900M altitude
CLTT   3         Performance, 900M altitude
CLTT   4         Acoustic, 1500M altitude
CLTT   5         Performance, 1500M altitude
CLTT   6         Acoustic, 3000M altitude
CLTT   7         Performance, 3000M altitude

Note:
1. The above features may or may not be in effect, depending on the actual thermal
characteristics of a specific system.
2. Refer to the Intel server system TPS for thermal and acoustic management of the board in
an Intel chassis.
3. Refer to the Fan Control Whitepaper for fan speed control customization of the board in a
third-party chassis.
Each group of profiles allows for varying fan control policies based on the altitude. For a given
altitude, the Tcontrol SDRs associated with an acoustics-optimized profile generate less noise
than the equivalent performance-optimized profile by driving lower fan speeds, and the BIOS
reduces thermal management requirements by configuring more aggressive memory throttling.
The BMC only supports enabling a fan profile through the command if that profile is supported
on all fan domains defined for the given system. It is important to configure platform Sensor Data Records (SDRs) so that all desired fan profiles are supported on each fan domain. If
no single profile is supported across all domains, the BMC, by default, uses profile 0 and does
not allow it to be changed.
6.11.8 Thermal Sensor Input to Fan Speed Control
The BMC uses various IPMI sensors as input to the fan speed control. Some of the sensors are
IPMI models of actual physical sensors whereas some are “virtual” sensors whose values are
derived from physical sensors using calculations and/or tabular information.
The following IPMI thermal sensors are used as input to the fan speed control:
Front Panel Temperature Sensor (note 1)
Baseboard Temperature Sensor (note 2)
CPU Margin Sensors (notes 3, 5, 6)
DIMM Thermal Margin Sensors (notes 3, 5)
Exit Air Temperature Sensor (notes 1, 4, 8)
PCH Temperature Sensor (notes 4, 6)
On-board Ethernet Controller Temperature Sensors (notes 4, 6)
Add-In Intel SAS/IO Module Temperature Sensors (notes 4, 7)
PSU Thermal Sensor (notes 4, 9)
CPU VR Temperature Sensors (notes 4, 7)
DIMM VR Temperature Sensors (notes 4, 6)
iBMC Temperature Sensor (notes 4, 7)
Global Aggregate Thermal Margin Sensors (notes 3, 8)
Note:
1. For fan speed control in Intel chassis
2. For fan speed control in 3rd party chassis
3. Temperature margin from throttling threshold
4. Absolute temperature
5. PECI value
6. On-die sensor
7. On-board sensor
8. Virtual sensor
9. Available only when PSU has PMBus*
The following illustration provides a simple model showing the fan speed control structure that
implements the resulting fan speeds.
Figure 20. Fan Speed Control Process

6.11.9 Memory Thermal Throttling
The server board provides support for system thermal management through open loop throttling
(OLTT) and closed loop throttling (CLTT) of system memory. Normal system operation uses
closed-loop thermal throttling (CLTT) and DIMM temperature monitoring as major factors in
overall thermal and acoustics management. In the event that BIOS is unable to configure the
system for CLTT, it defaults to open-loop thermal throttling (OLTT). In the OLTT mode, it is
assumed that the DIMM temperature sensors are not available for fan speed control.
Throttling levels are changed dynamically to cap throttling based on memory and system
thermal conditions as determined by the system and DIMM power and thermal parameters. The
BMC’s fan speed control functionality is linked to the memory throttling mechanism used.
The following terminology is used for the various memory throttling options:
Static Open Loop Thermal Throttling (Static-OLTT): OLTT control registers that are
configured by BIOS MRC remain fixed after POST. The system does not change any of the
throttling control registers in the embedded memory controller during runtime.
Static Closed Loop Thermal Throttling (Static-CLTT): CLTT control registers are
configured by BIOS MRC during POST. The memory throttling is run as a closed-loop
system with the DIMM temperature sensors as the control input. Otherwise, the system
does not change any of the throttling control registers in the embedded memory controller
during runtime.
Dynamic Open Loop Thermal Throttling (Dynamic-OLTT): OLTT control registers are
configured by BIOS MRC during POST. Adjustments are made to the throttling during
runtime based on changes in system cooling (fan speed).
Dynamic Closed Loop Thermal Throttling (Dynamic-CLTT): CLTT control registers are
configured by BIOS MRC during POST. The memory throttling is run as a closed-loop
system with the DIMM temperature sensors as the control input. Adjustments are made to
the throttling during runtime based on changes in system cooling (fan speed).
Both Static and Dynamic CLTT modes implement a Hybrid Closed Loop Thermal Throttling
mechanism whereby the Integrated Memory Controller estimates the DRAM temperature in
between actual reads of the memory thermal sensors.
6.12 Messaging Interfaces
The BMC supports the following communications interfaces:
Host SMS interface by means of low pin count (LPC)/keyboard controller style (KCS)
interface
Host SMM interface by means of low pin count (LPC)/keyboard controller style (KCS)
interface
Intelligent Platform Management Bus (IPMB) I2C interface
LAN interface using the IPMI-over-LAN protocols
Every messaging interface is assigned an IPMI channel ID by IPMI 2.0.

Table 17. Messaging Interfaces

Channel ID   Interface                                                                 Supports Sessions
0            Primary IPMB                                                              No
1            LAN 1                                                                     Yes
2            LAN 2                                                                     Yes
3            LAN 3 (provided by the Intel® Dedicated Server Management NIC) (note 1)   Yes
4            Reserved                                                                  Yes
5            USB                                                                       No
6            Secondary IPMB                                                            No
7            SMM                                                                       No
8h – 0Dh     Reserved                                                                  –
0Eh          Self (note 2)                                                             –
0Fh          SMS/Receive Message Queue                                                 No

Notes:
1. Optional hardware supported by the server system.
2. Refers to the actual channel used to send the request.

6.12.1 User Model
The BMC supports the IPMI 2.0 user model. 15 user IDs are supported. These 15 users can be
assigned to any channel. The following restrictions are placed on user-related operations:
1. User names for User IDs 1 and 2 cannot be changed. These are always “” (Null/blank)
and “root” respectively.
2. User 2 (“root”) always has the administrator privilege level.
3. All user passwords (including passwords for 1 and 2) may be modified.
User IDs 3-15 may be used freely, with the condition that user names are unique. Therefore, no
other users can be named “” (Null), “root,” or any other existing user name.
6.12.2 IPMB Communication Interface
The IPMB communication interface uses the 100 kbps version of an I2C bus as its physical
medium. For more information on I2C specifications, see The I2C Bus and How to Use It. The
IPMB implementation in the BMC is compliant with the IPMB v1.0, revision 1.0.
The BMC IPMB slave address is 20h.
The BMC both sends and receives IPMB messages over the IPMB interface. Non-IPMB
messages received by means of the IPMB interface are discarded.
Messages sent by the BMC can either be originated by the BMC, such as initialization agent
operation, or by another source. One example is KCS-IPMB bridging.
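As an illustration of the traffic on this interface, the sketch below frames an IPMB request to the BMC's 20h slave address using the IPMB v1.0 checksum rules. The Get Device ID command and the requester address 81h are generic IPMI examples, not values specific to this board.

```python
# Minimal sketch of IPMB message framing per IPMB v1.0.
def ipmb_checksum(data: bytes) -> int:
    # Two's-complement checksum so the byte sum is zero modulo 256.
    return (-sum(data)) & 0xFF

def ipmb_frame(rs_sa: int, netfn: int, rq_sa: int, rq_seq: int,
               cmd: int, data: bytes = b"") -> bytes:
    hdr = bytes([rs_sa, netfn << 2])          # responder slave address, NetFn/rsLUN
    body = bytes([rq_sa, rq_seq << 2, cmd]) + data
    return hdr + bytes([ipmb_checksum(hdr)]) + body + bytes([ipmb_checksum(body)])

# Get Device ID (NetFn App 06h, cmd 01h) addressed to the BMC at 20h:
print(ipmb_frame(rs_sa=0x20, netfn=0x06, rq_sa=0x81, rq_seq=0, cmd=0x01).hex())
```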
6.12.3 LAN Interface
The BMC implements both the IPMI 1.5 and IPMI 2.0 messaging models. These provide out-of-band local area network (LAN) communication between the BMC and the network.
See the Intelligent Platform Management Interface Specification Second Generation v2.0 for
details about the IPMI-over-LAN protocol.
LAN channel capabilities can be determined at run time through standard IPMI-defined
mechanisms.
6.12.3.1 RMCP/ASF Messaging
The BMC supports RMCP ping discovery in which the BMC responds with a pong message to
an RMCP/ASF ping request. This is implemented per the Intelligent Platform Management Interface Specification Second Generation v2.0.
6.12.3.2 BMC LAN Channels
The BMC supports three RMII/RGMII ports that can be used for communicating with Ethernet
devices. Two ports are used for communication with the on-board NICs and one is used for
communication with an Ethernet PHY located on an optional RMM4 add-in module.
6.12.3.2.1 Baseboard NICs
The on-board Ethernet controller provides support for a Network Controller Sideband Interface
(NC-SI) manageability interface. This provides a sideband high-speed connection for
manageability traffic to the BMC while still allowing simultaneous host access to the OS if
desired.
The NC-SI is a DMTF industry standard protocol for the side band management LAN interface.
This protocol provides a fast multi-drop interface for management traffic.
The baseboard NIC(s) are connected to a single BMC RMII/RGMII port that is configured for
RMII operation. The NC-SI protocol is used for this connection and provides a 100 Mb/s full-duplex multi-drop interface which allows multiple NICs to be connected to the BMC. The
physical layer is based upon RMII; however, RMII is a point-to-point bus whereas NC-SI allows
one master and up to four slaves. The logical layer (configuration commands) is incompatible with
RMII.
The server board will provide support for a dedicated management channel that can be
configured to be hidden from the host and only used by the BMC. This mode of operation is
configured through a BIOS setup option.
6.12.3.2.2 Dedicated Management Channel
An additional LAN channel dedicated to BMC usage and not available to host SW is supported
by an optional RMM4 add-in card. There is only a PHY device present on the RMM4 add-in
card. The BMC has a built-in MAC module that uses the RGMII interface to link with the card’s
PHY. Therefore, for this dedicated management interface, the PHY and MAC are located in
different devices.
The PHY on the RMM4 connects to the BMC’s other RMII/RGMII interface (that is, the one that
is not connected to the baseboard NICs). This BMC port is configured for RGMII usage.
In addition to the use of an RMM4 add-in card for a dedicated management channel, on
systems that support multiple Ethernet ports on the baseboard, the system BIOS provides a
setup option to allow one of these baseboard ports to be dedicated to the BMC for
manageability purposes. When this is enabled, that port is hidden from the OS.
6.12.3.2.3 Concurrent Server Management Use of Multiple Ethernet Controllers
The BMC FW supports concurrent OOB LAN management sessions for the following
combinations:
Two on-board NIC ports
One on-board NIC and the optional dedicated RMM4 add-in management NIC.
Two on-board NICs and optional dedicated RMM4 add-in management NIC.
All NIC ports must be on different subnets for the above concurrent usage models.
MAC addresses are assigned for management NICs from a pool of up to three MAC addresses
allocated specifically for manageability. The total number of MAC addresses in the pool is
dependent on the product HW constraints (for example, a board with two NIC ports available for
manageability would have a MAC allocation pool of 2 addresses). For these channels, support
can be enabled for IPMI-over-LAN and DHCP.
For security reasons, embedded LAN channels have the following default settings:
IP Address: Static
All users disabled
IPMI-enabled network interfaces may not be placed on the same subnet. This includes the
Intel® Dedicated Server Management NIC and either of the BMC's embedded network
interfaces.
Host-BMC communication over the same physical LAN connection – also known as “loopback”
– is not supported. This includes “ping” operations.
On server boards with more than two onboard NIC ports, only the first two ports can be used as
BMC LAN channels. The remaining ports have no BMC connectivity.
The maximum bandwidth supported by the BMC LAN channels is as follows:
BMC LAN1 (Baseboard NIC port) ----- 100Mb (10Mb in DC off state)
BMC LAN 2 (Baseboard NIC port) ----- 100Mb (10Mb in DC off state)
BMC LAN 3 (Dedicated NIC) ----- 100Mb
6.12.3.3 IPV6 Support
In addition to IPv4, the server board has support for IPv6 for manageability channels.
Configuration of IPv6 is provided by extensions to the IPMI Set & Get LAN Configuration
Parameters commands as well as through a Web Console IPv6 configuration web page.
The BMC supports IPv4 and IPv6 simultaneously so they are both configured separately and
completely independently. For example, IPv4 can be DHCP configured while IPv6 is statically
configured or vice versa.
The parameters for IPv6 are similar to the parameters for IPv4 with the following differences:
An IPv6 address is 16 bytes vs. 4 bytes for IPv4.
An IPv6 prefix is 0 to 128 bits whereas IPv4 has a 4 byte subnet mask.
The IPv6 Enable parameter must be set before any IPv6 packets will be sent or received
on that channel.
There are two variants of automatic IP Address Source configuration vs. just DHCP for
IPv4.
The three possible IPv6 IP Address Sources for configuring the BMC are:
Static (Manual): The IP, Prefix, and Gateway parameters are manually configured by the user.
The BMC ignores any Router Advertisement messages received over the network.
DHCPv6: The IP comes from running a DHCPv6 client on the BMC and receiving the IP from a
DHCPv6 server somewhere on the network. The Prefix and Gateway are configured by Router
Advertisements from the local router. The IP, Prefix, and Gateway are read-only parameters to
the BMC user in this mode.
Stateless auto-config: The Prefix and Gateway are configured by the router through Router
Advertisements. The BMC derives its IP in two parts: the upper network portion comes from the
router and the lower unique portion comes from the BMC’s channel MAC address. The 6-byte
MAC address is converted into an 8-byte value per the EUI-64* standard. For example, a MAC
value of 00:15:17:FE:2F:62 converts into an EUI-64 value of 215:17ff:fefe:2f62. If the BMC
receives a Router Advertisement from a router at IP 1:2:3:4::1 with a prefix of 64, it would then
generate for itself an IP of 1:2:3:4:215:17ff:fefe:2f62. The IP, Prefix, and Gateway are read-only
parameters to the BMC user in this mode.
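The address derivation described above can be reproduced with a few lines of code; this sketch mirrors the worked example in the text (the helper names are illustrative, not part of any BMC interface).

```python
# MAC -> modified EUI-64 -> IPv6 stateless auto-configuration address.
import ipaddress

def eui64_interface_id(mac: str) -> bytes:
    octets = bytes(int(b, 16) for b in mac.split(":"))
    # Insert FF:FE in the middle and flip the universal/local bit.
    return bytes([octets[0] ^ 0x02]) + octets[1:3] + b"\xff\xfe" + octets[3:]

def stateless_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    net = ipaddress.IPv6Network(prefix)
    iid = int.from_bytes(eui64_interface_id(mac), "big")
    return ipaddress.IPv6Address(int(net.network_address) | iid)

# Reproduces the example in the text:
print(stateless_address("1:2:3:4::/64", "00:15:17:FE:2F:62"))
# -> 1:2:3:4:215:17ff:fefe:2f62
```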
IPv6 can be used with the BMC's Web Console, JViewer (remote KVM and Media), and
Systems Management Architecture for Server Hardware – Command Line Protocol (SMASH-CLP) interface (SSH). There is no standard yet on how IPMI RMCP or RMCP+ should operate
over IPv6, so that is not currently supported.
6.12.3.4 LAN Failover
The BMC FW provides a LAN failover capability such that the failure of the system HW
associated with one LAN link will result in traffic being rerouted to an alternate link. This
functionality is configurable by IPMI methods as well as through the BMC's Embedded UI,
allowing the user to specify whether the physical LAN links constitute a redundant network path
or different network paths. The BMC supports only an "all or nothing" approach – that is, all
interfaces are bonded together, or none are bonded together.
The LAN Failover feature applies only to BMC LAN traffic. It bonds all available Ethernet
devices but only one is active at a time. When enabled, if the active connection's link is lost,
one of the secondary connections is automatically configured so that it has the same IP
address. Traffic immediately resumes on the new active connection.
The LAN Failover enable/disable command may be sent at any time. After it has been enabled,
standard IPMI commands for setting channel configuration that specify a LAN channel other
than the first will return an error code.
6.12.3.5 BMC IP Address Configuration
Enabling the BMC’s network interfaces requires using the Set LAN Configuration Parameter
command to configure LAN configuration parameter 4, IP Address Source. The BMC supports
this parameter as follows:
1h, static address (manually configured): Supported on all management NICs. This is the
BMC’s default value.
2h, address obtained by BMC running DHCP: Supported only on embedded management
NICs.
IP Address Source value 4h, address obtained by BMC running other address assignment
protocol, is not supported on any management NIC.
Attempting to set an unsupported IP address source value has no effect, and the BMC returns
error code 0xCC, "Invalid data field in request". Note that values 0h and 3h are no longer
supported and will return a 0xCC error completion code.
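For illustration, the data bytes of a Set LAN Configuration Parameters request selecting parameter 4 might be built as below. Channel 1 is an assumed example; the byte layout follows the IPMI 2.0 specification rather than anything unique to this board.

```python
# Data bytes for "Set LAN Configuration Parameters" (NetFn Transport 0Ch,
# cmd 01h), parameter 4 (IP Address Source). Values quoted from the text above.
STATIC = 0x01   # static address (manually configured) - the BMC default
DHCP = 0x02     # address obtained by BMC running DHCP (embedded NICs only)

def set_ip_source(channel: int, source: int) -> bytes:
    return bytes([channel & 0x0F, 0x04, source])  # channel, parameter 4, value

print(set_ip_source(1, DHCP).hex())  # -> '010402'
```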
6.12.3.5.1 Static IP Address (IP Address Source Values 0h, 1h, and 3h)
The BMC supports static IP address assignment on all of its management NICs. The IP address
source parameter must be set to "static" before the IP address, subnet mask, or gateway
address can be manually set.
The BMC takes no special action when the following IP address source is specified as the IP
address source for any management NIC: 1h – Static address (manually configured)
The Set LAN Configuration Parameter command must be used to configure LAN configuration
parameter 3, IP Address, with an appropriate value.
The BIOS does not monitor the value of this parameter, and it does not execute DHCP for the
BMC under any circumstances, regardless of the BMC configuration.
6.12.3.5.2 Static LAN Configuration Parameters
When the IP Address Configuration parameter is set to 01h (static), the following parameters
may be changed by the user:
LAN configuration parameter 3 (IP Address)
LAN configuration parameter 6 (Subnet Mask)
LAN configuration parameter 12 (Default Gateway Address)
When changing from DHCP to Static configuration, the initial values of these three parameters
will be equivalent to the existing DHCP-set parameters. Additionally, the BMC observes the
following network safety precautions:
1. The user may only set a subnet mask that is valid, per IPv4 and RFC 950 (Internet Standard Subnetting Procedure). Invalid subnet values return a 0xCC (Invalid Data Field
in Request) completion code, and the subnet mask is not set. If no valid mask has been
previously set, the default subnet mask is 0.0.0.0.
2. The user may only set a default gateway address that can potentially exist within the
subnet specified above. Default gateway addresses outside the BMC’s subnet are
technically unreachable and the BMC will not set the default gateway address to an
unreachable value. The BMC returns a 0xCC (Invalid Data Field in Request) completion
code for default gateway addresses outside its subnet.
3. If a command is issued to set the default gateway IP address before the BMC’s IP
address and subnet mask are set, the default gateway IP address is not updated and the
BMC returns 0xCC. (These subnet and gateway checks are sketched below.)
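A minimal behavioral model of the checks just listed, using Python's standard ipaddress module; the BMC's internal implementation is not published, so this is only an approximation.

```python
# Sketch of the subnet/gateway sanity checks described above.
import ipaddress

def gateway_acceptable(ip: str, mask: str, gateway: str) -> bool:
    iface = ipaddress.IPv4Interface(f"{ip}/{mask}")
    # Gateways outside the configured subnet are rejected (the BMC returns 0xCC).
    return ipaddress.IPv4Address(gateway) in iface.network

print(gateway_acceptable("192.168.1.20", "255.255.255.0", "192.168.1.1"))  # True
print(gateway_acceptable("192.168.1.20", "255.255.255.0", "10.0.0.1"))     # False
```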
If the BMC’s IP address on a LAN channel changes while a LAN session is in progress over that
channel, the BMC does not take action to close the session except through a normal session
timeout. The remote client must re-sync with the new IP address. The BMC’s new IP address is
only available in-band through the “Get LAN Configuration Parameters” command.
6.12.3.5.3 DHCP IP Address (IP Address Source Value 2h)
The BMC DHCP feature is activated by using the Set LAN Configuration Parameter command
to set LAN configuration parameter 4, IP Address Source, to 2h: “address obtained by BMC
running DHCP”. Once this parameter is set, the BMC initiates the DHCP process within
approximately 100 ms.
If the BMC has previously been assigned an IP address through DHCP or the Set LAN Configuration Parameter command, it requests that same IP address to be reassigned. If the
BMC does not receive the same IP address, system management software must be
reconfigured to use the new IP address. The new address is only available in-band, through the
IPMI Get LAN Configuration Parameters command.
Changing the IP Address Source parameter from 2h to any other supported value will cause the
BMC to stop the DHCP process. The BMC uses the most recently obtained IP address until it is
reconfigured.
If the physical LAN connection is lost (that is, the cable is unplugged), the BMC will not re-initiate the DHCP process when the connection is re-established.
6.12.3.5.4 DHCP-related LAN Configuration Parameters
Users may not change the following LAN parameters while the DHCP is enabled:
LAN configuration parameter 3 (IP Address)
LAN configuration parameter 6 (Subnet Mask)
LAN configuration parameter 12 (Default Gateway Address)
To prevent users from disrupting the BMC’s LAN configuration, the BMC treats these
parameters as read-only while DHCP is enabled for the associated LAN channel. Using the Set LAN Configuration Parameter command to attempt to change one of these parameters under
such circumstances has no effect, and the BMC returns error code 0xD5, “Cannot Execute
Command. Command, or request parameter(s) are not supported in present state.”
6.12.3.6 DHCP BMC Hostname
The BMC allows setting a DHCP Hostname using the Set/Get LAN Configuration Parameters
command.
The DHCP Hostname can be set regardless of the IP Address source configured on the BMC,
but this parameter is only used if the IP Address source is set to DHCP.
When Byte 2 is set to “Update in progress”, all the 16 Block Data Bytes (Bytes 3 – 18)
must be present in the request.
When Block Size < 16, it must be the last Block request in this series. In other words Byte
2 is equal to “Update is complete” on that request.
Whenever Block Size < 16, the Block data bytes must end with a NULL Character or Byte
(=0).
All Block write requests are updated into a local Memory byte array. When Byte 2 is set to
“Update is Complete”, the Local Memory is committed to the NV Storage. Local Memory is
reset to NULL after changes are committed.
When Byte 1 (Block Selector) = 1, the firmware resets all 64 bytes of local memory. This can
be used to undo any changes after the last "Update in Progress".
User should always set the hostname starting from block selector 1 after the last “Update
is complete”. If the user skips block selector 1 while setting the hostname, the BMC will
record the hostname as “NULL,” because the first block contains NULL data.
This scheme effectively does not allow a user to make a partial Hostname change. Any
Hostname change needs to start from Block 1.
Byte 64 (Block Selector 04h, byte 16) is always ignored and set to NULL by the BMC, which
effectively means only 63 bytes can be set.
User is responsible for keeping track of the Set series of commands and Local Memory
contents.
While BMC firmware is in "Set Hostname in Progress" (Update not complete), the firmware
continues using the previous hostname for DHCP purposes. (The block-write scheme is sketched below.)
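The block-write scheme above can be modeled as follows. The state constants and function name are hypothetical; only the chunking rules (16-byte blocks starting from block selector 1, a NULL-terminated short final block, and 63 usable bytes) come from the text.

```python
# Sketch of chunking a DHCP hostname into the 16-byte blocks described above.
UPDATE_IN_PROGRESS = 0   # placeholder state values, not IPMI byte encodings
UPDATE_COMPLETE = 1

def hostname_blocks(hostname: str):
    data = hostname.encode("ascii")[:63]   # byte 64 is always forced to NULL
    blocks = []
    for i in range(0, len(data), 16):
        chunk = data[i:i + 16]
        last = i + 16 >= len(data)
        if last and len(chunk) < 16:
            chunk += b"\x00"               # short final block ends with NULL
        # Block selectors start at 1; any change must start from block 1.
        blocks.append((i // 16 + 1, UPDATE_COMPLETE if last else UPDATE_IN_PROGRESS, chunk))
    return blocks

for selector, state, chunk in hostname_blocks("bmc-lab-node-17"):  # example name
    print(selector, state, chunk)
```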
6.12.4 Address Resolution Protocol (ARP)
The BMC can receive and respond to ARP requests on BMC NICs. Gratuitous ARPs are
supported, and disabled by default.
6.12.5 Internet Control Message Protocol (ICMP)
The BMC supports the following ICMP message types targeting the BMC over integrated NICs:
Echo request (ping): The BMC sends an Echo Reply.
Destination unreachable: If the message is associated with an active socket connection within
the BMC, the BMC closes the socket.

6.12.6 Virtual Local Area Network (VLAN)
The BMC supports VLAN as defined by the IPMI 2.0 specification. VLAN is supported internally by
the BMC, not through switches. VLAN provides a way of grouping a set of systems together so
that they form a logical network. This feature can be used to set up a management VLAN where
only devices which are members of the VLAN will receive packets related to management and
members of the VLAN will be isolated from any other network traffic. Please note that VLAN
does not change the behavior of the host network setting, it only affects the BMC LAN
communication.
LAN configuration options are now supported (by means of the Set LAN Config Parameters
command, parameters 20 and 21) that allow support for 802.1Q VLAN (Layer 2). This allows
VLAN headers/packets to be used for IPMI LAN sessions. VLAN IDs are entered and enabled
by means of parameter 20 of the Set LAN Config Parameters IPMI command. When a VLAN ID
is configured and enabled, the BMC only accepts packets with that VLAN tag/ID. Conversely, all
BMC-generated LAN packets on the channel include the given VLAN tag/ID. Valid VLAN IDs
are 1 through 4094; VLAN IDs 0 and 4095 are reserved, per the 802.1Q VLAN specification.
Only one VLAN can be enabled at any point in time on a LAN channel. If an existing VLAN is
enabled, it must first be disabled prior to configuring a new VLAN on the same LAN channel.
Parameter 21 (VLAN Priority) of the Set LAN Config Parameters IPMI command is now
implemented and a range from 0-7 will be allowed for VLAN Priorities. Please note that bits 3
and 4 of Parameter 21 are considered Reserved bits.
Parameter 25 (VLAN Destination Address) of the Set LAN Config Parameters IPMI command is
not supported and returns a completion code of 0x80 (parameter not supported) for any
read/write of parameter 25.
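As a sketch of how a VLAN ID and its enable bit pack into parameter 20 (following the IPMI 2.0 parameter layout; the function name is illustrative):

```python
# Illustrative encoding of IPMI LAN configuration parameter 20 (802.1q VLAN ID):
# byte 1 = VLAN ID bits [7:0]; byte 2 = enable flag in bit 7, VLAN ID bits [11:8].
def encode_vlan_param20(vlan_id: int, enabled: bool) -> bytes:
    if not (1 <= vlan_id <= 4094):
        raise ValueError("valid VLAN IDs are 1-4094; 0 and 4095 are reserved")
    lsb = vlan_id & 0xFF
    msb = ((vlan_id >> 8) & 0x0F) | (0x80 if enabled else 0x00)
    return bytes([lsb, msb])

print(encode_vlan_param20(100, True).hex())  # -> '6480'
```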
If the BMC IP address source is DHCP, then the following behavior is seen:
If the BMC is first configured for DHCP (prior to enabling VLAN), when VLAN is enabled,
the BMC performs a discovery on the new VLAN in order to obtain a new BMC IP address.
If the BMC is configured for DHCP (before disabling VLAN), when VLAN is disabled, the
BMC performs a discovery on the LAN in order to obtain a new BMC IP address.
If the BMC IP address source is Static, then the following behavior is seen:
If the BMC is first configured for static (prior to enabling VLAN), when VLAN is enabled,
the BMC has the same IP address that was configured before. It is left to the management
application to configure a different IP address if that is not suitable for VLAN.
If the BMC is configured for static (prior to disabling VLAN), when VLAN is disabled, the
BMC has the same IP address that was configured before. It is left to the management
application to configure a different IP address if that is not suitable for LAN.
6.12.7 Secure Shell (SSH)
Secure Shell (SSH) connections are supported for SMASH-CLP sessions to the BMC.
6.12.8 Serial-over-LAN (SOL 2.0)
The BMC supports IPMI 2.0 SOL.
IPMI 2.0 introduced a standard serial-over-LAN feature. This is implemented as a standard
payload type (01h) over RMCP+.
Three commands are implemented for SOL 2.0 configuration.
"Get SOL 2.0 Configuration Parameters" and "Set SOL 2.0 Configuration Parameters":
These commands are used to get and set the values of the SOL configuration parameters.
The parameters are implemented on a per-channel basis.
"Activating SOL": This command is not accepted by the BMC. It is sent by the BMC when
SOL is activated to notify a remote client of the switch to SOL.
Activating a SOL session requires an existing IPMI-over-LAN session. If encryption is
used, it should be negotiated when the IPMI-over-LAN session is established.

6.12.9 Platform Event Filter (PEF)
The BMC includes the ability to generate a selectable action, such as a system power-off or
reset, when a match occurs to one of a configurable set of events. This capability is called
Platform Event Filtering, or PEF. One of the available PEF actions is to trigger the BMC to send
a LAN alert to one or more destinations.
The BMC supports 20 PEF filters. The first twelve entries in the PEF filter table are pre-configured
(but may be changed by the user). The remaining entries are left blank, and may be
configured by the user.

Table 18. Factory Configured PEF Table Entries

Event Filter Number   Offset Mask                                  Events
1                     Non-critical, critical and non-recoverable   Temperature sensor out of range
2                     Non-critical, critical and non-recoverable   Voltage sensor out of range
3                     Non-critical, critical and non-recoverable   Fan failure
4                     General chassis intrusion                    Chassis intrusion (security violation)
5                     Failure and predictive failure               Power supply failure
6                     Uncorrectable ECC                            BIOS
7                     POST error                                   BIOS: POST code error
8                     FRB2                                         Watchdog Timer expiration for FRB2
9                     Policy Correction Time                       Node Manager
10                    Power down, power cycle, and reset           Watchdog timer
11                    OEM system boot event                        System restart (reboot)
12                    Drive Failure, Predicted Failure             Hot Swap Controller
Additionally, the BMC supports the following PEF actions:
Power off
Power cycle
Reset
OEM action
Alerts
The “Diagnostic interrupt” action is not supported.
6.12.10 LAN Alerting
The BMC supports sending embedded LAN alerts, called SNMP PET (Platform Event Traps),
and SMTP email alerts.
The BMC supports a minimum of four LAN alert destinations.
6.12.10.1 SNMP Platform Event Traps (PETs)
This feature enables a target system to send SNMP traps to a designated IP address by means
of LAN. These alerts are formatted per the Intelligent Platform Management Interface Specification Second Generation v2.0. A Management Information Base (MIB) file associated with
the traps is provided with the BMC firmware to facilitate interpretation of the traps by external
software. The format of the MIB file is covered under RFC 2578.
6.12.11 Alert Policy Table
Associated with each PEF entry is an alert policy that determines the IPMI channel to which the alert
is to be sent. There is a maximum of 20 alert policy entries. There are no pre-configured entries
in the alert policy table because the destination types and alerts may vary by user. Each entry in
the alert policy table contains four bytes for a maximum table size of 80 bytes.
6.12.11.1 E-mail Alerting
The Embedded Email Alerting feature allows the user to receive e-mail alerts indicating issues
with the server. This allows e-mail alerting in an OS-absent (for example, pre-OS and OS-hung)
situation. This feature provides support for sending e-mail by means of SMTP, the Simple Mail
Transfer Protocol as defined in Internet RFC 821. The e-mail alert provides a brief text
description of the event. SMTP alerting is configured using the embedded
web server.
6.12.12 SM-CLP (SM-CLP Lite)
SMASH refers to Systems Management Architecture for Server Hardware. SMASH is defined
by a suite of specifications, managed by the DMTF, that standardize the manageability
interfaces for server hardware. CLP refers to Command Line Protocol. SM-CLP is defined by
the Server Management Command Line Protocol Specification (SM-CLP) ver1.0, which is part
of the SMASH suite of specifications. The specifications and further information on SMASH can
be found at the DMTF website (http://www.dmtf.org/).
The BMC provides an embedded "lite" version of SM-CLP that is syntax-compatible but not
considered fully compliant with the DMTF standards.
The SM-CLP is utilized by a remote user by connecting to the server through one of the
system NICs. It is possible for third-party management applications to create scripts using this
CLP and execute them on the server to retrieve information or perform management tasks such as
rebooting the server, configuring events, and so on.
The BMC embedded SM-CLP feature includes the following capabilities:
Power on/off/reset the server.
Get the system power state.
Clear the System Event Log (SEL).
Get the interpreted SEL in a readable format.
Initiate/terminate a Serial-over-LAN session.
Support "help" to provide helpful information.
Get/set the system ID LED.
Get the system GUID.
Get/set configuration of user accounts.
Get/set configuration of LAN parameters.
Embedded CLP communication should support SSH connection.
Provide current status of platform sensors including current values. Sensors include
voltage, temperature, fans, power supplies, and redundancy (power unit and fan
redundancy).
6.12.13 Embedded Web Server
The embedded web server is supported over any system NIC port that is enabled for server
management capabilities.
BMC Base manageability provides an embedded web server and an OEM-customizable web
GUI which exposes the manageability features of the BMC base feature set. It is supported over
all on-board NICs that have management connectivity to the BMC as well as an optional RMM4
dedicated add-in management NIC. At least two concurrent web sessions from up to two
different users are supported. The embedded web user interface shall support the following client
web browsers:
Microsoft Internet Explorer 7.0*
Microsoft Internet Explorer 8.0*
Microsoft Internet Explorer 9.0*
Mozilla Firefox 3.0*
Mozilla Firefox 3.5*
Mozilla Firefox 3.6*
The embedded web user interface supports strong security (authentication, encryption, and
firewall support) since it enables remote server configuration and control. Embedded web server
uses ports #80 and #443. The user interface presented by the embedded web user interface
shall authenticate the user before allowing a web session to be initiated. Encryption using 128-bit SSL is supported. User authentication is based on user ID and password.
The GUI presented by the embedded web server authenticates the user before allowing a web
session to be initiated. It presents all functions to all users but grays out those functions that the
user does not have the privilege to execute (for example, if a user does not have the privilege to
control power, that item is displayed in a grayed-out font in that user's UI display). The web GUI
also provides a launch point for some of the advanced features, such as KVM and media
redirection. These features are grayed out in the GUI unless the system has been updated to
support these advanced features.
Additional features supported by the web GUI include:
Presents all the Basic features to the users.
Power on/off/reset the server and view current power state.
Displays BIOS, BMC, ME and SDR version information.
Display overall system health.
Configuration of various IPMI-over-LAN parameters for both IPv4 and IPv6.
Configuration of alerting (SNMP and SMTP).
Display system asset information for the product, board, and chassis.
Display of BMC-owned sensors (name, status, current reading, enabled thresholds),
including color-code status of sensors.
Provides ability to filter sensors based on sensor type (Voltage, Temperature, Fan &
Power supply related)
Automatic refresh of sensor data with a configurable refresh rate.
On-line help.
Display/clear SEL (display is in easily understandable human readable format).
Supports major industry-standard browsers (Microsoft Internet Explorer* and Mozilla
Firefox*).
Automatically logs out after user-configurable inactivity period.
The GUI session automatically times-out after a user-configurable inactivity period. By
default, this inactivity period is 30 minutes.
Embedded Platform Debug feature - Allow the user to initiate a “diagnostic dump” to a
file that can be sent to Intel for debug purposes.
Virtual Front Panel. The Virtual Front Panel provides the same functionality as the local
front panel. The displayed LEDs match the current state of the local panel LEDs. The
displayed buttons (for example, power button) can be used in the same manner as the
local buttons.
Severity level indication of SEL events. The web server UI displays the severity level
associated with each event in the SEL. The severity level correlates with the front panel
system status LED ( “OK”, “Degraded”, “Non-Fatal”, or “Fatal”).
Display of ME sensor data. Only sensors that have associated SDRs loaded will be
displayed.
Ability to save the SEL to a file.
Ability to force HTTPS connectivity for greater security. This is provided through a
configuration option in the UI.
Display of processor and memory information as is available over IPMI over LAN.
Ability to get and set Node Manager (NM) power policies.
Display of power consumed by the server.
Ability to view and configure VLAN settings.
Warning to the user that reconfiguration of the IP address will cause a disconnect.
Capability to block logins for a period of time after several consecutive failed login
attempts. The lock-out period and the number of failed logins that initiates the lock-out
period are configurable by the user.
Server Power Control – Ability to force into Setup on a reset.
6.12.14 Virtual Front Panel
The Virtual Front Panel is the module presented as "Virtual Front Panel" on the left side of the
embedded web server when the "Remote Control" tab is clicked.
The main purpose of the Virtual Front Panel is to provide the front panel functionality virtually.
The Virtual Front Panel (VFP) mimics the status LED, the Power LED, and the chassis ID
only. It automatically syncs with the BMC every 40 seconds.
For any abnormal status LED state, the Virtual Front Panel retrieves the reason behind the
abnormal status LED change and displays it on the VFP side.
Because the Virtual Front Panel uses the chassis control command for power actions, it does
not log a front panel button press event, since logging such an event for a Virtual Front
Panel press would mislead the administrator.
For a reset from the Virtual Front Panel, the reset is done by a "Chassis Control" command,
and the restart cause will therefore be the "Chassis Control" command.
During a power action, the Power/Reset buttons do not accept the next action until the
current power action is complete and the acknowledgment from the BMC is received.
The EWS displays a message during a power action until the current power action
completes.
The VFP does not have any effect on whether the front panel is locked by “Set Front Panel
Enables” command.
The chassis ID LED provides a visual indication of a system being serviced. The state of
the chassis ID LED is affected by the following actions:
Toggled by turning the chassis ID button on or off.
There is no precedence or lock-out mechanism for the control sources. When a new
request arrives, previous requests are terminated. For example, if the chassis ID button is
pressed, then the chassis ID LED changes to solid on. If the button is pressed again, then
the chassis ID LED turns off.
Note that the chassis ID will turn on because of the original chassis ID button press and
will be reflected in the Virtual Front Panel after the VFP syncs with the BMC. The Virtual
Front Panel does not reflect chassis LED blinking driven by a software command, as there is no
mechanism to get the chassis ID LED status.
Only an indefinite chassis ID ON/OFF issued by a software command is reflected in the EWS during
automatic/manual EWS sync-up with the BMC.
Virtual Front Panel help is available for the virtual panel module.
At present, the NMI button in the VFP is disabled on Romley platforms. It may be enabled in the future.
6.12.15 Embedded Platform Debug
The Embedded Platform Debug feature supports capturing low-level diagnostic data (applicable
MSRs, PCI config-space registers, and so on). This feature allows a user to export this data into
a file that is retrievable from the embedded web GUI, as well as through host and remote IPMI
methods, for the purpose of sending to an Intel engineer for an enhanced debugging capability.
The files are compressed, encrypted, and password protected. The file is not meant to be
viewable by the end user but rather to provide additional debugging capability to an Intel support
engineer.
A list of data that may be captured using this feature includes but is not limited to:
Platform sensor readings – This includes all “readable” sensors that can be accessed by
the BMC FW and have associated SDRs populated in the SDR repository. This does not
include any “event-only” sensors. (All BIOS sensors and some BMC and ME sensors are
“event-only”; meaning that they are not readable using an IPMI Get Sensor Reading
command but rather are used just for event logging purposes).
SEL – The current SEL contents are saved in both hexadecimal and text format.
CPU/memory register data – useful for diagnosing the cause of the following system
errors: CATERR, ERR[2], SMI timeout, PERR, and SERR. The debug data is saved and
timestamped for the last 3 occurrences of the error conditions.
o PCI error registers
o MSR registers
o MCH registers
BMC configuration data
o BMC FW debug log (that is, SysLog) – Captures FW debug messages.
o Non-volatile storage of captured data. Some of the captured data will be stored
persistently in the BMC's non-volatile flash memory and preserved across AC
power cycles. Due to size limitations of the BMC's flash memory, it is not feasible
to store all of the data persistently.
SMBIOS table data. The entire SMBIOS table is captured from the last boot.
PCI configuration data for on-board devices and add-in cards. The first 256 bytes of PCI
configuration data is captured for each device for each boot.
System memory map. The system memory map is provided by BIOS on the current boot.
This includes the EFI memory map and the Legacy (E820) memory map depending on the
current boot.
Power supplies debug capability.
o Capture of power supply "black box" data and power supply asset information.
Power supply vendors are adding the capability to store debug data within the
power supply itself. The platform debug feature provides a means to capture this
data for each installed power supply. The data can be analyzed by Intel for failure
analysis and possibly provided to the power supply vendor as well. The BMC
gets this data from the power supplies from the PMBus* manufacturer-specific
commands.
o Storage of system identification in the power supply. The BMC copies board and
system serial numbers and part numbers into the power supply whenever a new
power supply is installed in the system or when the system is first powered on.
This information is included as part of the power supply black box data for each
installed power supply.
Accessibility from IPMI interfaces. The platform debug file can be accessed from an
external IPMI interface (KCS or LAN).
POST code sequence for the two most recent boots. This is a best-effort data collection by
the BMC as the BMC real-time response cannot guarantee that all POST codes are
captured.
Support for multiple debug files. The platform debug feature provides the ability to save
data to 2 separate files that are encrypted with different passwords.
o File #1 is strictly for viewing by Intel engineering and may contain BMC log
messages (that is, syslog) and other debug data that Intel FW developers deem
useful in addition to the data specified in this document.
o File #2 can be viewed by Intel partners who have signed an NDA with Intel and
its contents are restricted to the specific data items specified in this document, with the
exception of the BMC syslog messages and power supply "black box" data.
6.12.15.1 Output Data Format
The diagnostic feature shall output a password-protected compressed HTML file containing
specific BMC and system information. This file is not intended for end-customer usage; it
is for customer support and engineering only.
6.12.15.2 Output Data Availability
The diagnostic data shall be available on-demand from the embedded web server, KCS, or IPMI
over LAN commands.
6.12.15.3 Output Data Categories
The following table lists the data to be provided in the diagnostic output.
Table 19. Diagnostic Data
Category             Data
Internal BMC Data    BMC uptime/load; Process list; Free Memory; Detailed Memory List; Filesystem List/Info; BMC Network Info; BMC Syslog; BMC Configuration Data
External BMC Data    Hex SEL listing; Human-readable SEL listing; Human-readable sensor listing
External BIOS Data   BIOS configuration settings; POST codes for the two most recent boots
System Data          SMBIOS table for the current boot; First 256 bytes of PCI config data for each PCI device; Memory Map (EFI and Legacy) for the current boot
6.12.16 Data Center Manageability Interface (DCMI)
The DCMI Specification is an emerging standard that is targeted to provide a simplified
management interface for Internet Portal Data Center (IPDC) customers. It is expected to
become a requirement for server platforms which are targeted for IPDCs. DCMI is an IPMI-based
standard that builds upon a set of required IPMI standard commands by adding a set of
DCMI-specific IPMI OEM commands. Intel® S1400/S1600/S2400/S2600 Server Platforms will
be implementing the mandatory DCMI features in the BMC firmware (DCMI 1.1 Errata 1
compliance). Please refer to the DCMI 1.1 Errata 1 specification for details. Only mandatory
commands will be supported; there is no support for optional DCMI commands. The optional
power management and SEL rollover features are not supported. The DCMI asset tag is
independent of the baseboard FRU asset tag. Please refer to the DCMI Group Extension
Commands table for more details on DCMI commands.
6.12.17 Lightweight Directory Access Protocol (LDAP)
The Lightweight Directory Access Protocol (LDAP) is an application protocol supported by the
BMC for the purpose of authentication and authorization. The BMC user connects with an LDAP
server for login authentication. This is only supported for non-IPMI logins including the
embedded web UI and SM-CLP. IPMI users/passwords and sessions are not supported over
LDAP.
LDAP can be configured (IP address of LDAP server, port, and so on) from the BMC’s
Embedded Web UI. LDAP authentication and authorization is supported over any NIC
configured for system management. The BMC uses a standard OpenLDAP implementation for
Linux*. Only OpenLDAP is supported by the BMC; Windows* and Novell* LDAP are not
supported.
7. Advanced Management Feature Support (RMM4)
The integrated baseboard management controller has support for advanced management
features which are enabled when an optional Intel® Remote Management Module 4 (RMM4) is
installed.
RMM4 is comprised of two boards – the RMM4 Lite and the optional Dedicated Server Management
NIC (DMN).

Table 21. RMM4 Option Kits

Intel Product Code   Description                              Kit Contents                                          Benefits
AXXRMM4LITE          Intel® Remote Management Module 4 Lite   RMM4 Lite Activation Key                              Enables KVM and media redirection from the onboard NIC.
AXXRMM4              Intel® Remote Management Module 4        RMM4 Lite Activation Key, Dedicated NIC Port Module   Dedicated NIC for management traffic; higher bandwidth connectivity for KVM and media redirection with a 100Mb NIC.
On the server board each Intel® RMM4 component is installed at the following locations.
Figure 21. Intel® RMM4 Lite Activation Key Installation
Figure 22. Intel® RMM4 Dedicated Management NIC Installation

If the optional Dedicated Server Management NIC is not used, then the traffic can only go
through the onboard Integrated BMC-shared NIC and will share network bandwidth with the
host system. Advanced manageability features are supported over all NIC ports enabled for
server manageability.

7.1 Keyboard, Video, Mouse (KVM) Redirection
The BMC firmware supports keyboard, video, and mouse redirection (KVM) over LAN. This
feature is available remotely from the embedded web server as a Java applet. This feature is
only enabled when the Intel® RMM4 Lite is present. The client system must have a Java Runtime
Environment (JRE) version 6.0 or later to run the KVM or media redirection applets.
The BMC supports an embedded KVM application (Remote Console) that can be launched from
the embedded web server from a remote console. USB1.1 or USB 2.0 based mouse and
keyboard redirection are supported. It is also possible to use the KVM-redirection (KVM-r)
session concurrently with media-redirection (media-r). This feature allows a user to interactively
use the keyboard, video, and mouse (KVM) functions of the remote server as if the user were
physically at the managed server.
The KVM redirection console supports the following keyboard layouts: English, Dutch, French,
German, Italian, Russian, and Spanish.
KVM redirection includes a "soft keyboard" function. The "soft keyboard" is used to simulate an
entire keyboard that is connected to the remote system. The “soft keyboard” functionality
supports the following layouts: English, Dutch, French, German, Italian, Russian, and Spanish.
The KVM-redirection feature automatically senses video resolution for best possible screen
capture and provides high-performance mouse tracking and synchronization. It allows remote
viewing and configuration in pre-boot POST and BIOS setup, once BIOS has initialized video.
Other attributes of this feature include:
Encryption of the redirected screen, keyboard, and mouse
Compression of the redirected screen.
Ability to select a mouse configuration based on the OS type.
Supports user-definable keyboard macros.
KVM redirection feature supports the following resolutions and refresh rates:
640x480 at 60Hz, 72Hz, 75Hz, 85Hz, 100Hz
800x600 at 60Hz, 72Hz, 75Hz, 85Hz
1024x768 at 60Hz, 72Hz, 75Hz, 85Hz
1280x960 at 60Hz
1280x1024 at 60Hz
1600x1200 at 60Hz
1920x1080 (1080p),
1920x1200 (WUXGA)
1650x1080 (WSXGA+)
7.1.1 Remote Console
The Remote Console is the redirected screen, keyboard and mouse of the remote host system.
To use the Remote Console window of your managed host system, the browser must include a
Java* Runtime Environment plug-in. If the browser has no Java support, such as with a small
handheld device, the user can maintain the remote host system using the administration forms
displayed by the browser.
The Remote Console window is a Java Applet that establishes TCP connections to the BMC.
The protocol that is run over these connections is a unique KVM protocol and not HTTP or
HTTPS. This protocol uses ports #7578 for KVM, #5120 for CDROM media redirection, and
#5123 for Floppy/USB media redirection. When encryption is enabled, the protocol uses ports
#7582 for KVM, #5124 for CDROM media redirection, and #5127 for Floppy/USB media
redirection. The local network environment must permit these connections to be made; that is, the firewall and, in the case of a private internal network, the NAT (Network Address Translation) settings must be configured accordingly.
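As an illustrative aid only (not part of this specification), the following minimal Python sketch checks from a client system whether the redirection ports listed above are reachable through the local firewall and NAT configuration. The BMC address shown is a hypothetical placeholder; the port numbers and their purposes are taken from the paragraph above.
    import socket

    # Hypothetical BMC management address; substitute the actual IP or hostname.
    BMC_HOST = "192.0.2.10"

    # Port assignments as documented above (plain and encrypted variants).
    PORTS = {
        7578: "KVM",
        5120: "CD-ROM media redirection",
        5123: "Floppy/USB media redirection",
        7582: "KVM (encrypted)",
        5124: "CD-ROM media redirection (encrypted)",
        5127: "Floppy/USB media redirection (encrypted)",
    }

    for port, purpose in sorted(PORTS.items()):
        try:
            # A short timeout keeps the probe quick when a firewall drops packets.
            with socket.create_connection((BMC_HOST, port), timeout=3):
                print(f"Port {port} ({purpose}): reachable")
        except OSError as exc:
            print(f"Port {port} ({purpose}): blocked or closed ({exc})")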
7.1.2 Performance
The remote display accurately represents the local display. The feature adapts to changes to the video resolution of the local display and continues to work smoothly when the system transitions from graphics to text or vice versa. The responsiveness may be slightly delayed depending on the bandwidth and latency of the network.
Enabling KVM and/or media encryption will degrade performance. Enabling video compression provides the fastest response, while disabling compression provides better video quality.
For the best possible KVM performance, a 2 Mb/s link or higher is recommended.
The redirection of KVM over IP is performed in parallel with the local KVM without affecting local KVM operation.
7.1.3 Security
The KVM redirection feature supports multiple encryption algorithms, including RC4 and AES. The actual algorithm used is negotiated with the client based on the client’s capabilities.
7.1.4 Availability
The remote KVM session is available even when the server is powered off (in standby mode). No restart of the remote KVM session is required during a server reset or power on/off. A BMC reset (for example, due to a BMC watchdog-initiated reset or a BMC reset after a BMC firmware update) will require the session to be re-established.
KVM sessions persist across system reset, but not across an AC power loss.
7.1.5 Usage
As the server is powered up, the remote KVM session displays the complete BIOS boot process. The user is able to interact with BIOS Setup, change and save settings, and enter and interact with option ROM configuration screens.
At least two concurrent remote KVM sessions are supported, so at least two different users can connect to the same server and start remote KVM sessions.
7.1.6 Force-enter BIOS Setup
KVM redirection can present an option to force-enter BIOS Setup. This enables the system to enter F2 Setup while booting, which is often missed by the time the remote console redirects the video.
7.2 Media Redirection
The embedded web server provides a Java applet to enable remote media redirection. This may be used in conjunction with the remote KVM feature, or as a standalone applet.
The media redirection feature is intended to allow system administrators or users to mount a
remote IDE or USB CD-ROM, floppy drive, or a USB flash disk as a remote device to the server.
Once mounted, the remote device appears just like a local device to the server, allowing system
administrators or users to install software (including operating systems), copy files, update
BIOS, and so on, or boot the server from this device.
The following capabilities are supported:
The operation of remotely mounted devices is independent of the local devices on the
server. Both remote and local devices are useable in parallel.
Either IDE (CD-ROM, floppy) or USB devices can be mounted as a remote device to the
server.
It is possible to boot all supported operating systems from the remotely mounted device
and to boot from disk IMAGE (*.IMG) and CD-ROM or DVD-ROM ISO files. See the
Tested/supported Operating System List for more information.
Media redirection supports redirection for both a virtual CD device and a virtual
Floppy/USB device concurrently. The CD device may be either a local CD drive or else an
ISO image file; the Floppy/USB device may be either a local Floppy drive, a local USB
device, or else a disk image file.
The media redirection feature supports multiple encryption algorithms, including RC4 and
AES. The actual algorithm that is used is negotiated with the client based on the client’s
capabilities.
A remote media session is maintained even when the server is powered off (in standby mode). No restart of the remote media session is required during a server reset or power on/off. A BMC reset (for example, a BMC reset after a BMC firmware update) will require the session to be re-established.
The mounted device is visible to (and usable by) the managed system’s OS and BIOS in both pre-boot and post-boot states.
The mounted device shows up in the BIOS boot order and it is possible to change the
BIOS boot order to boot from this remote device.
It is possible to install an operating system on a bare metal server (no OS present) using
the remotely mounted device. This may also require the use of KVM-r to configure the OS
during install.
USB storage devices will appear as floppy disks over media redirection. This allows for the
installation of device drivers during OS installation.
If either a virtual IDE or virtual floppy device is remotely attached during system boot, both the
virtual IDE and virtual floppy are presented as bootable devices. It is not possible to present
only a single mounted device type to the system BIOS.
7.2.1 Availability
The default inactivity timeout is 30 minutes and is not user-configurable. Media redirection sessions persist across system reset, but not across an AC power loss or BMC reset.
7.2.2 Network Port Usage
The KVM and media redirection features use the following ports:
5120 – CD Redirection
5123 – FD Redirection
5124 – CD Redirection (Secure)
5127 – FD Redirection (Secure)
7578 – Video Redirection
7582 – Video Redirection (Secure)
8. On-board Connector/Header Overview
This section provides detailed information regarding all connectors, headers, and jumpers on the server board.
8.1 Board Connector Information
The following table lists all connector types available on the board and the corresponding reference designators printed on the silkscreen.
Table 22. Board Connector Matrix
Connector | Quantity | Reference Designators | Connector Type | Pin Count
Power supply | 4 | J1H3, J1J2, J5J2, J8A1 | Main power; P/S aux/IPMB; CPU 1 power; CPU 2 power | 24; 5; 8; 8
CPU | 2 | U6G1, U7C1 | CPU sockets | 1356
Main memory | 8 | J8G1, J8G2, J8G3, J9G1, J4D1, J4D2, J5D1, J5D2 | DIMM sockets | 240
PCI Express* x8 | 3 | J3C1, J3C2, J2C1 | Card edge | 98
PCI Express* x16 | 1 | J4C1 | Card edge | 164
32-bit PCI | 1 | J1C2 | Card edge | 124
Intel® RMM4 | 1 | J1C7 | Connector | 30
Intel® RMM4 Lite | 1 | J3D1 | Connector | 7
Storage Upgrade Key | 1 | J1D2 | Header | 4
System fans | 6 | J3J3, J2J8, J2J6, J3J2, J2J5, J2J7 | Header | 6
System fans | 1 | J9A1 | Header | 4
CPU fans | 2 | J5J1, J7A1 | Header | 4
Battery | 1 | BT3G2 | Battery holder | 2
Stacked RJ45/2xUSB | 2 | U6A1, U7A1 | External LAN with built-in magnetics and dual USB | 22
Video | 1 | J7A2 | External DSub | 15
Serial port A | 1 | J8A2 | Connector | 9
Serial port B | 1 | J1B1 | Header | 9
Front panel | 1 | J1C3 | Header | 24
Internal USB | 2 | J1C2 | Header | 10
USB Solid State Drive | 1 | J2E1 | Header | 9
Internal USB | 1 | J1J1 | Type-A USB | 4
HDD activity | 1 | J2H2 | Header | 2
Serial ATA | 6 | J1G1, J1F1, J2H1, J1H2, J1H1, J1G2 | Header | 7
SAS | 2 | J1D3, J1E1 | SFF8087 miniSAS | 36
HSBP_I2C | 1 | J3J1 | Header | 3
SATA SGPIO | 1 | J2J4 | Header | 4
LCP | 1 | J3J4 | Header | 7