No part of this document may be reproduced or transmitted in any form or by any means without prior written
consent of Huawei Technologies Co., Ltd.
Trademarks and Permissions
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Huawei Technologies Co., Ltd.
Address: Huawei Industrial Base
Bantian, Longgang
Shenzhen 518129
People's Republic of China
This document describes the CH121 V5 compute node (CH121 V5) in terms of its
components, common operations, basic configuration, hardware installation, troubleshooting,
software and configuration utilities, electrostatic discharge (ESD) prevention, and product
specifications.
About This Document
Intended Audience
This document is intended for:
• Enterprise administrators
• Enterprise end users
Symbol Conventions
The symbols that may be found in this document are defined as follows:
Symbol | Description
DANGER | Indicates an imminently hazardous situation which, if not avoided, will result in death or serious injury.
WARNING | Indicates a potentially hazardous situation which, if not avoided, could result in death or serious injury.
CAUTION | Indicates a potentially hazardous situation which, if not avoided, may result in minor or moderate injury.
NOTICE | Indicates a potentially hazardous situation which, if not avoided, could result in equipment damage, data loss, performance deterioration, or unanticipated results. NOTICE is used to address practices not related to personal injury.
FusionServer Pro CH121 V5 Compute Node User Guide

Contents
2.1 Front Panel
2.1.2 Indicators and Buttons
3.3 Removing a CH121 V5
3.4 Installing a CH121 V5
3.5 Removing the Cover
3.6 Installing the Cover
3.7 Removing the Air Ducts
3.8 Installing the Air Duct
3.9 Removing the PCIe Card Panel
3.10 Installing the PCIe Card Panel
3.11 Removing a PCIe Card
3.12 Installing a PCIe Card
3.13 Removing a SAS/SATA Drive
3.14 Installing a SAS/SATA Drive
3.15 Removing an NVMe Drive
3.16 Installing an NVMe Drive
4 Installation and Configuration
4.1.1 Space and Airflow Requirements
4.1.2 Temperature and Humidity Requirements
4.2.4 Installing the Compute Node
4.2.5 Connecting to Network
4.3.3.1 Changing the Initial Password of the Default iBMC User
4.3.4 Checking the Compute Node
4.3.5 Setting the iBMC IP Address
4.3.7 Configuring the BIOS
4.3.7.1 Entering the BIOS
4.3.7.1.1 Entering the BIOS (Skylake)
4.3.7.1.2 Entering the BIOS (Cascade Lake)
4.3.7.2 Setting the System Boot Sequence
4.3.7.3 Setting PXE for a NIC
4.3.7.4 Setting the BIOS Password
4.3.7.5 Selecting a Language
4.3.7.6 Restarting the Compute Node
4.3.8 Installing an OS
5 Optional Part Installation
5.2.1 Installing an M.2 FRU
5.2.2 Installing the Screw-in RAID Controller Card
5.2.3 Installing the Avago SAS3004iMR PCIe RAID Controller Card
5.2.4 Installing an M.2 FRU on the Avago SAS3004iMR PCIe RAID Controller Card
5.2.5 Installing the Supercapacitor
5.2.6 Installing a Mezzanine Card
5.2.7 Installing a Processor
5.2.8 Installing a Memory Module
5.2.9 Installing the TPM
6 Software and Hardware Compatibility
8 Software and Configuration Utilities
8.1 Upgrading the System
9.2 Maintenance and Warranty
12 Common Operations
12.1 Querying the iBMC IP Address
12.2 Logging In to the iBMC WebUI
12.3 Logging In to the MM910 WebUI
12.4 Collecting Log Information on the MM910 WebUI
12.5 Logging In to a Compute Node Using MM910 SOL
12.6 Managing the E9000 Server Using the Local KVM
12.6.1 Logging In to the MM910 CLI
12.6.2 Logging In to the Operating System of a Compute Node
12.6.3 Mounting the DVD Drive to a Compute Node
12.6.4 Logging In to a Compute Node or a Switch Module over SOL
12.7 Logging In to the Desktop of a Server
12.7.1 Using the Remote Virtual Console
12.7.2 Using the Independent Remote Console
12.8 Logging In to the CLI
12.8.1 Logging In to the CLI Using PuTTY over a Network Port
12.10 Clearing Data from a Storage Device
A More Information
The CH121 V5 is a half-width compute node powered by Intel® Xeon® Scalable processors.
It delivers supreme computing power, large memory capacity, and outstanding scalability.
The CH121 V5 provides dense computing capability and an ultra-large memory. It is
optimized for virtualization, cloud computing, high-performance computing, and compute-intensive enterprise applications.
The CH121 V5 compute nodes are installed in an E9000 chassis and are centrally managed by
the management module.
• a: The drive slots support 2.5-inch SAS/SATA/NVMe drives and M.2 modules, and mixed configurations of them.
• b: An M.2 module is a 2.5-inch drive module that consists of one M.2 adapter and two M.2 FRUs.
1 Overview
1.3 Logical Structure
Figure 1-3 CH121 V5 logical structure
• The server supports one or two Intel® Xeon® Scalable processors.
• The server supports up to 24 memory modules.
• The CPUs (processors) interconnect with each other through two UPI links at a speed of up to 10.4 GT/s.
• The mezzanine cards connect to the processors through PCIe buses to provide service ports.
• The Platform Controller Hub (PCH) has a built-in MAC chip and provides two 10 Gbit/s ports.
• The storage module, consisting of a RAID controller card and a drive backplane, connects to the CPUs through PCIe buses.
• The BMC provides device management functions, such as compute node power control, slot number acquisition, PSU detection, and KVM over IP.
2 Hardware Description
Table 2-1 Indicators and buttons on the front panel
Power button/indicator
Power indicator:
• Off: The device is not powered on.
• Steady yellow: The device is powered on.
• Blinking yellow: The power button is locked. The power button is locked while the iBMC is starting.
• Steady green: The device is ready to power on.
Power button:
• When the device is powered on, you can press this button to gracefully shut down the OS.
• When the device is powered on, holding down this button for 6 seconds will forcibly power off the device.
• When the power indicator is steady green, you can press this button to power on the device.

UID button/indicator (silkscreen: UID)
The UID button/indicator helps identify and locate a device.
UID indicator:
• Off: The device is not being located.
• Blinking blue: The device has been located and is differentiated from other devices that have also been located.
• Steady blue: The device is being located.
UID button:
• You can turn on or off the UID indicator by pressing the UID button or by using the iBMC or MM910 CLI or WebUI.
• You can press and hold down this button for 4 to 6 seconds.

Health indicator
• Off: The device is powered off or is faulty.
• Blinking red at 1 Hz: A major alarm has been generated for the device.
• Blinking red at 5 Hz: A critical alarm has been generated for the device, or the device is not securely installed.
• Steady green: The device is operating properly.
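The UID control described above can also be scripted. The sketch below assumes the iBMC accepts standard IPMI over LAN and that ipmitool is installed on the workstation; the host, user, and password values are placeholders, not values from this guide.

```python
"""Toggle a compute node's UID indicator over IPMI (illustrative sketch)."""
import subprocess

def uid_identify_cmd(host, user, password, seconds=15):
    # "chassis identify N" lights the UID indicator for N seconds;
    # "chassis identify 0" turns it off again.
    return ["ipmitool", "-I", "lanplus", "-H", host,
            "-U", user, "-P", password,
            "chassis", "identify", str(seconds)]

def toggle_uid(host, user, password, seconds=15):
    # Command construction is kept separate so the call can be
    # inspected or logged before anything touches the network.
    subprocess.run(uid_identify_cmd(host, user, password, seconds),
                   check=True)
```

The same ipmitool subcommand family (`chassis power soft`, `chassis power off`) mirrors the graceful and forced actions of the physical power button.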
2.1.3 Ports
Port Positions
Figure 2-3 Ports on the front panel
1: USB 3.0 ports; 2: -

Port Description
Port | Type | Quantity | Description
USB port | USB 3.0 | 2 | Used to connect to a USB device.
2.1.4 Installation Positions
The CH121 V5 is installed in a half-width slot in the front of the E9000 chassis. An E9000
chassis can house a maximum of 16 CH121 V5 compute nodes.
NOTICE
Before connecting an external USB device, check that the USB device functions properly. The server may operate abnormally if an abnormal USB device is connected.
• The server supports one or two processors.
• If only one processor is required, install it in socket CPU1.
• The same model of processors must be used in a server.
• Contact your local Huawei sales representative or use the Intelligent Computing Compatibility Checker to determine the components to be used.
• 1R: single-rank
• 2R: dual-rank
• 4R: quad-rank
• 8R: octal-rank
• X4: 4-bit
• X8: 8-bit
• PC3: DDR3
• PC4: DDR4
• 2133 MT/s
• 2400 MT/s
• 2666 MT/s
• 2933 MT/s
6: Column Access Strobe (CAS) latency.
7: DIMM type.
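The label fields listed above can be decoded mechanically. A minimal sketch, assuming a simplified label string such as "2Rx8 PC4-2933"; real DIMM labels carry further fields (including the CAS latency and DIMM type noted above) that this sketch ignores:

```python
import re

# Matches a simplified DIMM label: <ranks>Rx<width> PC<gen>-<speed>,
# e.g. "2Rx8 PC4-2933" = dual-rank, x8 devices, DDR4, 2933 MT/s.
LABEL_RE = re.compile(
    r"(?P<ranks>[1248])Rx(?P<width>[48])\s+PC(?P<gen>[34])-(?P<speed>\d{4})")

def decode_dimm_label(label):
    m = LABEL_RE.search(label)
    if m is None:
        raise ValueError(f"unrecognized label: {label!r}")
    return {
        "ranks": int(m["ranks"]),         # 1R/2R/4R/8R
        "width_bits": int(m["width"]),    # X4 or X8 SDRAM devices
        "ddr_generation": int(m["gen"]),  # PC3 = DDR3, PC4 = DDR4
        "speed_mts": int(m["speed"]),     # 2133/2400/2666/2933 MT/s
    }
```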
2.3.2 Memory Subsystem Architecture
The CH121 V5 provides 24 memory slots. Each processor integrates six memory channels.
Install the memory modules in the primary memory channels first. If the primary memory
channel is not populated, the memory modules in secondary memory channels cannot be used.
• The total memory capacity is the sum of the capacity of all DDR4 memory modules. The total memory capacity cannot exceed the maximum memory capacity supported by the CPUs.
• Use the Intelligent Computing Compatibility Checker to determine the capacity type of a single memory module.
• The maximum number of memory modules supported by a server varies depending on the CPU type, memory type, rank quantity, and operating voltage.
NOTE
Each memory channel supports a maximum of 8 ranks. The number of memory modules supported by each channel varies depending on the number of ranks supported by each channel: Number of memory modules supported by each channel ≤ Number of ranks supported by each memory channel ÷ Number of ranks supported by each memory module.
• A memory channel supports more than eight ranks for LRDIMMs.
NOTE
A quad-rank LRDIMM generates the same electrical load as a single-rank RDIMM on a memory bus.
• DDR4 memory modules of different specifications (capacity, bit width, rank, and height) can be used together for capacity expansion purposes. However, the memory Reliability, Availability, and Serviceability (RAS) features may be affected.
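The per-channel rule in the NOTE above, together with the total-capacity cap, can be expressed as a short sketch (the function names are illustrative):

```python
def max_dimms_per_channel(ranks_per_channel, ranks_per_dimm):
    # Number of memory modules per channel <= ranks supported per
    # channel / ranks per module (e.g. 8 ranks per channel with
    # dual-rank modules allows at most 4 modules per channel).
    return ranks_per_channel // ranks_per_dimm

def total_capacity_gib(dimm_capacities_gib, cpu_max_gib):
    # Total memory is the sum over all DDR4 modules, and must not
    # exceed the maximum capacity supported by the installed CPUs.
    total = sum(dimm_capacities_gib)
    if total > cpu_max_gib:
        raise ValueError("configuration exceeds CPU-supported maximum")
    return total
```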
• a: If the Cascade Lake processor is used, the rated speed and maximum operating speed of the RDIMM can reach 2933 MT/s. If the Skylake processor is used, the rated speed and maximum operating speed of a DDR4 memory module can reach 2666 MT/s only.
• b: The maximum number of DDR4 memory modules is based on dual-processor configuration. The value is halved for a server with only one processor.
• c: The maximum DDR4 memory capacity varies depending on the processor type. The value listed in this table is based on the assumption that twenty-four 128 GB DDR4 memory modules are configured.
2.3.4 Memory Installation Guidelines
• Observe the following when configuring DDR4 memory modules:
  – Install memory modules only when corresponding processors are installed.
  – Do not install LRDIMMs and RDIMMs in the same server.
  – Install filler memory modules in vacant slots.
• Observe the following when configuring DDR4 memory modules in a specific operating mode:
  – Memory sparing mode
    ▪ Comply with the general installation guidelines.
    ▪ Each memory channel must have a valid online spare configuration.
    ▪ The channels can have different online spare configurations.
    ▪ Each populated channel must have a spare rank.
  – Memory mirroring mode
    ▪ Comply with the general installation guidelines.
    ▪ Install memory modules for channels 1 and 2 or channels 3 and 4. The memory modules installed must be identical in size and organization.
    ▪ For a multi-processor configuration, each processor must have a valid memory mirroring configuration.
  – Memory scrubbing mode
    ▪ Comply with the general installation guidelines.
2.3.5 Memory Installation Positions
A CH121 V5 supports a maximum of 24 DDR4 memory modules. Balanced memory
configuration is recommended for optimal memory performance.
The following memory protection technologies are supported:
• ECC
• Full mirroring
• Address range mirroring
• Rank sparing mode
• Faulty DIMM isolation
• Memory thermal throttling
• Memory address parity protection
• Adaptive double device data correction (ADDDC)
• Memory demand/patrol scrubbing
• Data scrambling
• ADDDC+1
2.4 Storage
2.4.1 Drive Configurations
Table 2-4 Drive configurations
Configuration | Front drives
• a: Only 2.5-inch drives fit into the front slots.
• b: Mixed configuration of M.2 modules and SAS/SATA/NVMe drives is supported.
• Contact your local Huawei sales representative or use the Intelligent Computing Compatibility Checker to determine the components to be used.
• If the VMD function is disabled, NVMe SSDs support only orderly hot swap.

Table 2-8 NVMe SSD indicators (VMD disabled)
Green Indicator | Yellow Indicator | Description
Off | Off | The NVMe SSD cannot be detected.
Steady green | Off | The NVMe SSD is working properly.
Blinking green at 2 Hz | Off | Data is being read from or written to the NVMe SSD, or the data on the secondary NVMe drive is being rebuilt.
Off | Blinking yellow at 2 Hz | The NVMe SSD is being located or hot-swapped.
Off | Blinking yellow at 0.5 Hz | The hot removal process is complete, and the NVMe SSD is removable.
Steady green/Off | Steady yellow | The NVMe SSD is faulty.
2.4.4 RAID Controller Card
The RAID controller card supports RAID configuration, RAID level migration, and drive
roaming.
• Contact your local Huawei sales representative or use the Intelligent Computing Compatibility Checker to determine the components to be used.
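Table 2-9 lists the RAID levels the card supports along with their drive utilization (100% for RAID 0, 50% for RAID 1). As a quick check of those figures, usable capacity can be sketched as follows (illustrative only; it covers just the two listed levels):

```python
def usable_capacity_gb(level, drive_count, drive_size_gb):
    # RAID 0 stripes across all drives (100% utilization);
    # RAID 1 mirrors one drive onto the other (50% utilization).
    if drive_count < 2:
        raise ValueError("both levels need at least 2 drives")
    if level == 0:
        return drive_count * drive_size_gb
    if level == 1:
        return drive_size_gb  # a two-drive mirror keeps one copy
    raise ValueError("only RAID 0 and RAID 1 are listed")
```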
Table 2-9 RAID levels supported by the CH121 V5
RAID Level | Reliability | Read Performance | Write Performance | Minimum Number of Drives | Drive Utilization
RAID 0 | Low | High | High | 2 | 100%
RAID 1 | High | Low | Low | 2 | 50%

2.5 Network
2.5.1 LOMs
The LOM is a network interface module (X722) integrated in the PCH. It can be connected to
an I/O module (switch module). The LOM provides two 10GE network ports for connecting
to the Base network ports of the switch modules in slots 2X and 3X. The LOM supports Wake
on LAN (WOL) and PXE functions.
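To illustrate the WOL function: a standard Wake-on-LAN magic packet is six 0xFF bytes followed by the target MAC address repeated 16 times, sent as a UDP broadcast. A minimal sketch, in which the MAC address, broadcast address, and port are placeholders:

```python
import socket

def build_magic_packet(mac):
    # 6 x 0xFF, then the 6-byte MAC repeated 16 times: 102 bytes total.
    raw = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(raw) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + raw * 16

def send_wol(mac, broadcast="255.255.255.255", port=9):
    # Port 9 (discard) is conventional for WOL; port 7 is also common.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))
```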
Figure 2-15 Connections between the LOM and I/O modules
NOTE
• In addition to the LOM, the compute node connects to the Fabric ports of the switch modules through the network ports on mezzanine cards.
• Powering off the compute node forcibly will make the WOL function of the LOM ports invalid.
• If flow control is enabled for a LOM port, the switch module connected to the LOM port must also have flow control enabled.
Table 2-10 I/O modules supported by the LOM
I/O Module | Slot | LOM | Remarks
CX916 | 2X/3X | √ | -
CX916 | 1E/4E | × | The LOM cannot communicate with the I/O modules in slots 1E and 4E.
CX920 | 2X/3X | √ | -
CX920 | 1E/4E | × | The LOM cannot communicate with the I/O modules in slots 1E and 4E.
2.6 I/O Expansion
2.6.1 PCIe Cards
• The server supports a range of PCIe cards to provide diverse expandability and ease of