
HPE ProLiant DL360 Gen10 Server User Guide

Abstract
This document is for the person who installs, administers, and troubleshoots servers and storage systems. Hewlett Packard Enterprise assumes you are qualified in the servicing of computer equipment and trained in recognizing hazards in products with hazardous energy levels.
Part Number: 869840-008
Published: July 2019
Edition: 8
© Copyright 2018-2019 Hewlett Packard Enterprise Development LP
Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is not responsible for information outside the Hewlett Packard Enterprise website.
Acknowledgments
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

Contents

Component identification.......................................................................7
Front panel components............................................................................................................... 7
Front panel LEDs and buttons...................................................................................................... 9
UID button functionality.................................................................................................... 11
Front panel LED power fault codes.................................................................................. 11
Systems Insight Display LEDs.................................................................................................... 11
Systems Insight Display combined LED descriptions................................................................. 13
Rear panel components..............................................................................................................15
Rear panel LEDs.........................................................................................................................15
System board components......................................................................................................... 17
System maintenance switch descriptions........................................................................ 18
NMI functionality...............................................................................................................18
DIMM slot locations..........................................................................................................18
DIMM label identification.................................................................................................. 19
NVDIMM identification......................................................................................................20
NVDIMM LED identification..............................................................................................22
HPE Persistent Memory module label identification........................................................ 23
Device numbers.......................................................................................................................... 23
Hot-plug drive LED definitions.................................................................................................... 25
NVMe SSD LED definitions........................................................................................................ 26
uFF drive components and LEDs............................................................................................... 27
Hot-plug fans...............................................................................................................................28
HPE Smart Array P824i-p MR Gen10 Controller........................................................................ 30
HPE InfiniBand HDR/Ethernet 940QSFP 56x16 adapter LEDs..................................................31
Operations............................................................................................. 32
Power up the server....................................................................................................................32
Power down the server............................................................................................................... 32
Extend the server from the rack..................................................................................................32
Remove the server from the rack................................................................................................33
Remove the access panel...........................................................................................................33
Install the access panel...............................................................................................................33
Remove the hot-plug fan.............................................................................................................34
Remove the primary PCI riser cage............................................................................................35
Install the primary PCI riser cage................................................................................................36
Remove the secondary PCI riser cage....................................................................................... 37
Install the secondary PCI riser cage........................................................................................... 38
Remove the 8 SFF drive backplane............................................................................................39
Release the cable management arm ......................................................................................... 39
Setup...................................................................................................... 41
Optional service.......................................................................................................................... 41
Optimum environment.................................................................................................................41
Space and airflow requirements.......................................................................................41
Temperature requirements...............................................................................................42
Power requirements......................................................................................................... 42
Electrical grounding requirements....................................................................................43
Connecting a DC power cable to a DC power source......................................................43
Server warnings and cautions.....................................................................................................44
Rack warnings.............................................................................................................................45
Identifying the contents of the server shipping carton.................................................................46
Installing hardware options ........................................................................................................ 46
Installing the server into the rack................................................................................................ 46
Operating system........................................................................................................................47
Installing the operating system with Intelligent Provisioning............................................ 47
Selecting boot options in UEFI Boot Mode................................................................................. 47
Selecting boot options.................................................................................................................48
Registering the server.................................................................................................................48
Hardware options installation..............................................................49
Hewlett Packard Enterprise product QuickSpecs....................................................................... 49
Introduction................................................................................................................................. 49
Installing a redundant hot-plug power supply............................................................................. 49
Memory options...........................................................................................................................50
DIMM and NVDIMM population information.....................................................................50
DIMM-processor compatibility..........................................................................................50
HPE SmartMemory speed information.............................................................................51
Installing a DIMM..............................................................................................................51
HPE 16GB NVDIMM option............................................................................................. 52
HPE Persistent Memory option........................................................................................ 56
Installing a high-performance fan................................................................................................59
Drive options............................................................................................................................... 61
Hot-plug drive guidelines..................................................................................................61
Removing the hard drive blank........................................................................................ 61
Installing a hot-plug SAS or SATA drive........................................................................... 62
Removing a hot-plug SAS or SATA hard drive.................................................................63
Installing the NVMe drives............................................................................................... 63
Removing and replacing an NVMe drive..........................................................................65
Installing a uFF drive and SCM drive carrier....................................................................65
Removing and replacing a uFF drive............................................................................... 66
Installing an 8 SFF optical drive....................................................................................... 67
Universal media bay options.......................................................................................................68
Installing a 2 SFF SAS/SATA drive cage..........................................................................68
Installing a 2 SFF NVMe drive cage option......................................................................71
Installing a 2 SFF HPE Smart Carrier M.2 (SCM) drive cage.......................................... 74
Installing an 8 SFF display port/USB/optical blank option................................................76
Installing the 4 LFF optical drive option...................................................................................... 78
Installing the rear drive riser cage option.................................................................................... 81
Primary PCI riser cage options................................................................................................... 84
Installing an optional primary PCI riser board ................................................................. 84
Installing the SATA M.2 2280 riser option........................................................................ 86
Installing an expansion board in the primary riser cage...................................................88
Installing an accelerator or GPU in the primary riser cage...............................................90
Secondary PCI riser options....................................................................................................... 91
Installing a secondary full-height PCI riser cage option................................................... 91
Installing a secondary low-profile PCIe slot riser cage option..........................................95
Installing an expansion board in the secondary riser cage.............................................. 96
Installing an accelerator or GPU in the secondary riser cage.......................................... 99
Controller options......................................................................................................................101
Installing an HPE Smart Array P408i-a SR Gen10 Controller option.............................102
Installing an HPE Smart Array P408i-p SR Gen10 Controller option.............................105
Installing an HPE Smart Array P816i-a SR Gen10 Controller option.............................108
Installing an HPE Smart Array P824i-p MR Gen10 controller in a configured server.....111
Installing the operating system with the HPE Smart Array MR Gen10 P824i-p controller driver....................112
Processor and heatsink options................................................................................................ 113
Installing a processor heatsink assembly.......................................................................113
Installing a high-performance heatsink...........................................................................115
Processor, heatsink, and socket components................................................................119
Installing the Systems Insight Display power module............................................................... 120
Installing the 4 LFF display port/USB module...........................................................................124
Installing the serial cable option................................................................................................126
Installing the Chassis Intrusion Detection switch option........................................................... 128
Installing a FlexibleLOM option.................................................................................................129
Energy pack options................................................................................................................. 131
HPE Smart Storage Battery........................................................................................... 131
HPE Smart Storage Hybrid Capacitor............................................................................132
Minimum firmware versions............................................................................................132
Energy pack option configurations................................................................................. 132
HPE Trusted Platform Module 2.0 Gen10 option......................................................................137
Overview........................................................................................................................ 137
HPE Trusted Platform Module 2.0 Guidelines................................................................138
Installing and enabling the HPE TPM 2.0 Gen10 Kit..................................................... 138
Cabling................................................................................................. 144
Cabling overview ......................................................................................................................144
SFF cables................................................................................................................................144
SFF configuration cable routing..................................................................................... 145
Additional SFF cabling................................................................................................... 149
LFF cables................................................................................................................................ 150
LFF configuration cable routing......................................................................................150
Additional LFF cabling....................................................................................................150
Software and configuration utilities.................................................. 152
Server mode..............................................................................................................................152
Product QuickSpecs................................................................................................................. 152
Active Health System Viewer....................................................................................................152
Active Health System..................................................................................................... 153
HPE iLO 5................................................................................................................................. 154
iLO Federation............................................................................................................... 154
iLO Service Port............................................................................................................. 154
iLO RESTful API.............................................................................................................155
RESTful Interface Tool................................................................................................... 155
iLO Amplifier Pack..........................................................................................................155
Integrated Management Log.....................................................................................................156
Intelligent Provisioning.............................................................................................................. 156
Intelligent Provisioning operation................................................................................... 156
Management Security............................................................................................................... 157
Scripting Toolkit for Windows and Linux................................................................................... 157
UEFI System Utilities................................................................................................................ 158
Selecting the boot mode ............................................................................................... 158
Secure Boot................................................................................................................... 159
Launching the Embedded UEFI Shell ........................................................................... 159
HPE Smart Storage Administrator............................................................................................ 160
HPE MR Storage Administrator................................................................................................ 161
HPE InfoSight for servers ........................................................................................................ 161
StorCLI......................................................................................................................................161
USB support..............................................................................................................................162
External USB functionality..............................................................................................162
Redundant ROM support.......................................................................................................... 162
Safety and security benefits........................................................................................... 162
Keeping the system current...................................................................................................... 162
Updating firmware or system ROM................................................................................ 162
Drivers............................................................................................................................165
Software and firmware................................................................................................... 165
Operating system version support................................................................................. 165
HPE Pointnext Portfolio..................................................................................................165
Proactive notifications.................................................................................................... 166
Troubleshooting.................................................................................. 167
Troubleshooting resources........................................................................................................167
Removing and replacing the system battery....................................168
Specifications......................................................................................170
Environmental specifications.................................................................................................... 170
Server specifications.................................................................................................................170
Power supply specifications......................................................................................................171
HPE 500W Flex Slot Platinum Hot-plug Low Halogen Power Supply............................171
HPE 800W Flex Slot Platinum Hot-plug Low Halogen Power Supply............................172
HPE 800W Flex Slot Titanium Hot-plug Low Halogen Power Supply............................173
HPE 800W Flex Slot Universal Hot-plug Low Halogen Power Supply...........................174
HPE 800W Flex Slot -48VDC Hot-plug Low Halogen Power Supply.............................175
HPE 1600W Flex Slot Platinum Hot-plug Low Halogen Power Supply..........................176
Hot-plug power supply calculations.......................................................................................... 177
Websites.............................................................................................. 178
Support and other resources.............................................................179
Accessing Hewlett Packard Enterprise Support....................................................................... 179
Accessing updates....................................................................................................................179
Customer self repair..................................................................................................................180
Remote support........................................................................................................................ 180
Warranty information.................................................................................................................180
Regulatory information..............................................................................................................181
Documentation feedback.......................................................................................................... 181
Acronyms and abbreviations.............................................................182

Component identification

Front panel components

8 SFF
Item Description
1 Serial label pull tab
2 Display port (optional)
3 Optical drive (optional)
4 USB 2.0 port (optional)
5 USB 3.0 port
6 iLO Service Port
The operating system does not recognize this port as a USB port.
7 SAS/SATA drive bays
4 LFF
Item Description
1 Optical drive blank (optional)
2 Serial label pull tab
3 Display port (optional)
4 USB 2.0 port (optional)
5 iLO Service Port
The operating system does not recognize this port as a USB port.
6 USB 3.0 port
7 SAS/SATA drive bays
10 SFF NVMe/SAS Combo
Item Description
1 Serial label pull tab
2 Systems Insight Display (optional)
3 USB 3.0 port
4 SAS/SATA/NVMe drive bays
When the 10 SFF NVMe/SAS backplane option is installed, NVMe drives must be installed in bays 9 and 10. The other bays support a mix of NVMe and SAS drives.

Front panel LEDs and buttons

8 SFF/10 SFF
Item Description Status
1 UID button/LED (1)
  Solid blue = Activated
  Flashing blue:
    1 Hz = Remote management or firmware upgrade in progress
    4 Hz = iLO manual reboot sequence initiated
    8 Hz = iLO manual reboot sequence in progress
  Off = Deactivated
2 Power On/Standby button and system power LED (1)
  Solid green = System on
  Flashing green = Performing power on sequence
  Solid amber = System in standby
  Off = No power present (2)
3 Health LED (1)
  Solid green = Normal
  Flashing green = iLO is rebooting.
  Flashing amber = System degraded
  Flashing red = System critical (3)
4 NIC status LED (1)
  Solid green = Link to network
  Flashing green = Network active
  Off = No network activity
(1) When all four LEDs described in this table flash simultaneously, a power fault has occurred.
(2) Facility power is not present, power cord is not attached, no power supplies are installed, power supply failure has occurred, or the power button cable is disconnected.
(3) If the health LED indicates a degraded or critical state, review the system IML or use iLO to review the system health status.
4 LFF
Item Description Status
1 UID button/LED (1)
  Solid blue = Activated.
  Flashing blue:
    1 Hz = Remote management or firmware upgrade in progress.
    4 Hz = iLO manual reboot sequence initiated.
    8 Hz = iLO manual reboot sequence in progress.
  Off = Deactivated.
2 NIC status LED (1)
  Solid green = Link to network.
  Flashing green = Network active.
  Off = No network activity.
3 Health LED (1)(2)
  Solid green = Normal.
  Flashing green = iLO is rebooting.
  Flashing amber = System degraded.
  Flashing red = System critical.
4 Power On/Standby button and system power LED (1)
  Solid green = System on.
  Flashing green = Performing power on sequence.
  Solid amber = System in standby.
  Off = No power present. (3)
(1) When all four LEDs described in this table flash simultaneously, a power fault has occurred.
(2) To identify components in a degraded or critical state, see the Systems Insight Display LEDs, check iLO/BIOS logs, and reference the server troubleshooting guide.
(3) Facility power is not present, power cord is not attached, no power supplies are installed, power supply failure has occurred, or the power button cable is disconnected.

UID button functionality

The UID button can be used to display the Server Health Summary when the server will not power on. For more information, see the latest HPE iLO 5 User Guide on the Hewlett Packard Enterprise website.
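The UID LED can also be read or toggled remotely through iLO. The following Python sketch is illustrative only: the iLO address and credentials are placeholders, and it assumes the standard Redfish IndicatorLED property that iLO 5 exposes on the system resource.

# Hedged sketch: read and set the UID (IndicatorLED) through the iLO 5 Redfish API.
# The iLO address and credentials are placeholders, not values from this guide.
import requests

ILO_HOST = "https://ilo-example.local"   # placeholder iLO address
AUTH = ("Administrator", "password")      # placeholder credentials
SYSTEM_URI = f"{ILO_HOST}/redfish/v1/Systems/1/"

# Read the current UID state from the ComputerSystem resource.
system = requests.get(SYSTEM_URI, auth=AUTH, verify=False).json()
print("Current IndicatorLED:", system.get("IndicatorLED"))

# Request the UID on ("Lit") or off ("Off"); "Blinking" is also defined by Redfish.
requests.patch(SYSTEM_URI, json={"IndicatorLED": "Lit"}, auth=AUTH, verify=False)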

Front panel LED power fault codes

The following table provides a list of power fault codes, and the subsystems that are affected. Not all power faults are used by all servers.
Subsystem LED behavior
System board 1 flash
Processor 2 flashes
Memory 3 flashes
Riser board PCIe slots 4 flashes
FlexibleLOM 5 flashes
Removable HPE Smart Array SR Gen10 controller 6 flashes
System board PCIe slots 7 flashes
Power backplane or storage backplane 8 flashes
Power supply 9 flashes

Systems Insight Display LEDs

The Systems Insight Display LEDs represent the system board layout. The display enables diagnosis with the access panel installed.
Description Status
Processor LEDs
  Off = Normal
  Amber = Failed processor
DIMM LEDs
  Off = Normal
  Amber = Failed DIMM or configuration issue
Fan LEDs
  Off = Normal
  Amber = Failed fan or missing fan
NIC LEDs (1)
  Off = No link to network
  Solid green = Network link
  Flashing green = Network link with activity
Power supply LEDs
  If power is off, the front panel LED is not active. For status, see Rear panel LEDs on page 15.
  Off = Normal
  Solid amber = Power subsystem degraded, power supply failure, or input power lost.
PCI riser LED
  Off = Normal
  Amber = Incorrectly installed PCI riser cage
Over temp LED
  Off = Normal
  Amber = High system temperature detected
Amp Status LED
  Off = AMP modes disabled
  Solid green = AMP mode enabled
  Solid amber = Failover
  Flashing amber = Invalid configuration
Power cap LED
  Off = System is in standby, or no cap is set.
  Solid green = Power cap applied
(1) For Networking Choice (NC) server models, the embedded NIC ports are not equipped on the server. Therefore, the NIC LEDs on the Systems Insight Display will flash based on the FlexibleLOM network port activity. In the case of a dual-port FlexibleLOM, only NIC LEDs 1 and 2 will illuminate to correspond with the activity of the respective network ports.
When the health LED on the front panel illuminates either amber or red, the server is experiencing a health event. For more information on the combination of these LEDs, see Systems Insight Display combined LED descriptions on page 13.

Systems Insight Display combined LED descriptions

The combined illumination of the following LEDs indicates a system condition:
Systems Insight Display LEDs
System power LED
Health LED
Processor (amber): Health LED = Red, System power LED = Amber
  Status: One or more of the following conditions may exist:
    Processor in socket X has failed.
    Processor X is not installed in the socket.
    Processor X is unsupported.
    ROM detects a failed processor during POST.
Processor (amber): Health LED = Amber, System power LED = Green
  Status: Processor in socket X is in a pre-failure condition.
DIMM (amber): Health LED = Red, System power LED = Green
  Status: One or more DIMMs have failed.
DIMM (amber): Health LED = Amber, System power LED = Green
  Status: DIMM in slot X is in a pre-failure condition.
Over temp (amber): Health LED = Amber, System power LED = Green
  Status: The Health Driver has detected a cautionary temperature level.
Over temp (amber): Health LED = Red, System power LED = Amber
  Status: The server has detected a hardware critical temperature level.
PCI riser (amber): Health LED = Red, System power LED = Green
  Status: The PCI riser cage is not seated properly.
Fan (amber): Health LED = Amber, System power LED = Green
  Status: One fan has failed or has been removed.
Fan (amber): Health LED = Red, System power LED = Green
  Status: Two or more fans have failed or been removed.
Power supply (amber): Health LED = Red, System power LED = Amber
  Status: One or more of the following conditions may exist:
    Only one power supply is installed and that power supply is in standby.
    Power supply fault
    System board fault
Power supply (amber): Health LED = Amber, System power LED = Green
  Status: One or more of the following conditions may exist:
    Redundant power supply is installed and only one power supply is functional.
    AC power cord is not plugged into redundant power supply.
    Redundant power supply fault
    Power supply mismatch at POST or power supply mismatch through hot-plug addition
Power cap (off): System power LED = Amber
  Status: Standby
Power cap (green): System power LED = Flashing green
  Status: Waiting for power
Power cap (green): System power LED = Green
  Status: Power is available.
Power cap (flashing amber): System power LED = Amber
  Status: Power is not available.
IMPORTANT: If more than one DIMM slot LED is illuminated, further troubleshooting is required. Test each bank of DIMMs by removing all other DIMMs. Isolate the failed DIMM by replacing each DIMM in a bank with a known working DIMM.

Rear panel components

Item Description
1 Slot 1 PCIe3
2 Slot 2 PCIe3
3 Slot 3 PCIe3 (optional - requires second processor)
4 Power supply 2 (PS2)
5 Power supply 1 (PS1)
6 Video port
7 NIC ports (if equipped)
8 iLO Management Port
9 Serial port (optional)
10 USB 3.0 ports
11 FlexibleLOM (optional)

Rear panel LEDs

Item Description Status
1 UID LED
  Solid blue = Identification is activated.
  Flashing blue = System is being managed remotely.
  Off = Identification is deactivated.
2R iLO 5/standard NIC activity LED
  Solid green = Activity exists.
  Flashing green = Activity exists.
  Off = No activity exists.
2L iLO 5/standard NIC link LED
  Solid green = Link exists.
  Off = No link exists.
3 Power supply 2 LED
  Solid green = Normal
  Off = One or more of the following conditions exists:
    AC power unavailable
    Power supply failed
    Power supply in standby mode
    Power supply exceeded current limit
4 Power supply 1 LED
  Solid green = Normal
  Off = One or more of the following conditions exists:
    AC power unavailable
    Power supply failed
    Power supply in standby mode
    Power supply exceeded current limit

System board components

Item Description
1 FlexibleLOM connector
2 Primary (processor 1) PCIe riser connector
3 System maintenance switch
4 Front display port/USB 2.0 connector
5 x4 SATA port 1
6 x4 SATA port 2
7 x2 SATA port 3
8 x1 SATA port 4
9 Front power/USB 3.0 connector
10 Optical/SATA port 5
11 Energy pack connector
12 Micro SD card slot
13 Chassis Intrusion Detection connector
14 Drive backplane power connector
15 Dual internal USB 3.0 connector
16 Type-a SmartArray connector
17 Secondary (processor 2) PCIe riser connector
18 System battery
19 TPM connector (optional)
20 Serial port connector (optional)

System maintenance switch descriptions

Position Default Function
S1 (1) Off
  Off = iLO security is enabled.
  On = iLO security is disabled.
S2 Off Reserved
S3 Off Reserved
S4 Off Reserved
S5 (1) Off
  Off = Power-on password is enabled.
  On = Power-on password is disabled.
S6 (1)(2)(3) Off
  Off = No function
  On = Restore default manufacturing settings
S7 Off Reserved
S8 Reserved
S9 Reserved
S10 Reserved
S11 Reserved
S12 Reserved
(1) To access the redundant ROM, set S1, S5, and S6 to On.
(2) When the system maintenance switch position 6 is set to the On position, the system is prepared to restore all configuration settings to their manufacturing defaults.
(3) When the system maintenance switch position 6 is set to the On position and Secure Boot is enabled, some configurations cannot be restored. For more information, see Secure Boot on page 159.

NMI functionality

An NMI crash dump enables administrators to create crash dump files when a system is hung and not responding to traditional debugging methods.
An analysis of the crash dump log is an essential part of diagnosing reliability problems, such as hanging operating systems, device drivers, and applications. Many crashes freeze a system, and the only available action for administrators is to cycle the system power. Resetting the system erases any information that could support problem analysis, but the NMI feature preserves that information by performing a memory dump before a hard reset.
To force the OS to invoke the NMI handler and generate a crash dump log, the administrator can use the iLO Virtual NMI feature.
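As an illustrative sketch only, the Virtual NMI can be requested through the Redfish interface that iLO 5 exposes. The host address and credentials below are placeholders, and the action path assumes the standard single-system Redfish layout.

# Hedged sketch: request a Virtual NMI through the iLO 5 Redfish API so that the OS
# NMI handler can generate a crash dump. Host and credentials are placeholders.
import requests

ILO_HOST = "https://ilo-example.local"   # placeholder iLO address
AUTH = ("Administrator", "password")      # placeholder credentials

resp = requests.post(
    f"{ILO_HOST}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset/",
    json={"ResetType": "Nmi"},            # Redfish-defined reset type for an NMI
    auth=AUTH,
    verify=False,                          # iLO often uses a self-signed certificate
)
print(resp.status_code, resp.text)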

DIMM slot locations

DIMM slots are numbered sequentially (1 through 12) for each processor. The supported AMP modes use the letter assignments for population guidelines.

DIMM label identification

To determine DIMM characteristics, see the label attached to the DIMM. The information in this section helps you to use the label to locate specific information about the DIMM.
Item Description Example
1 Capacity
  8 GB
  16 GB
  32 GB
  64 GB
  128 GB
2 Rank
  1R = Single rank
  2R = Dual rank
  4R = Quad rank
  8R = Octal rank
3 Data width on DRAM
  x4 = 4-bit
  x8 = 8-bit
  x16 = 16-bit
4 Memory generation
  PC4 = DDR4
5 Maximum memory speed
  2133 MT/s
  2400 MT/s
  2666 MT/s
  2933 MT/s
6 CAS latency
  P = CAS 15-15-15
  T = CAS 17-17-17
  U = CAS 20-18-18
  V = CAS 19-19-19 (for RDIMM, LRDIMM)
  V = CAS 22-19-19 (for 3DS TSV LRDIMM)
  Y = CAS 21-21-21 (for RDIMM, LRDIMM)
  Y = CAS 24-21-21 (for 3DS TSV LRDIMM)
7 DIMM type
  R = RDIMM (registered)
  L = LRDIMM (load reduced)
  E = Unbuffered ECC (UDIMM)
For more information about product features, specifications, options, configurations, and compatibility, see the HPE DDR4 SmartMemory QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/support/DDR4SmartMemoryQS).

NVDIMM identification

NVDIMM boards are blue instead of green. This change to the color makes it easier to distinguish NVDIMMs from DIMMs.
To determine NVDIMM characteristics, see the full product description as shown in the following example:
Item Description Definition
1 Capacity 16 GiB
2 Rank 1R (Single rank)
3 Data width per DRAM chip x4 (4 bit)
4 Memory type NN4=DDR4 NVDIMM-N
5 Maximum memory speed 2667 MT/s
6 Speed grade V (latency 19-19-19)
7 DIMM type RDIMM (registered)
8 Other
For more information about NVDIMMs, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).
NVDIMM 2D Data Matrix barcode
The 2D Data Matrix barcode is on the right side of the NVDIMM label and can be scanned by a cell phone or other device.
When scanned, the following information from the label can be copied to your cell phone or device:
(P) is the module part number.
(L) is the technical details shown on the label.
(S) is the module serial number.
Example: (P)HMN82GR7AFR4N-VK (L)16GB 1Rx4 NN4-2666V-RZZZ-10(S)80AD-01-1742-11AED5C2

NVDIMM LED identification

Item LED description LED color
1 Power LED Green
2 Function LED Blue
NVDIMM-N LED combinations
State Definition NVDIMM-N Power LED (green) / NVDIMM-N Function LED (blue)
0 AC power is on (12V rail) but the NVM controller is not working or not ready. Power LED: On, Function LED: Off
1 AC power is on (12V rail) and the NVM controller is ready. Power LED: On, Function LED: On
2 AC power is off or the battery is off (12V rail off). Power LED: Off, Function LED: Off
3 AC power is on (12V rail) or the battery is on (12V rail) and the NVDIMM-N is active (backup and restore). Power LED: On, Function LED: Flashing

NVDIMM Function LED patterns
For the purpose of this table, the NVDIMM-N LED operates as follows:
Solid indicates that the LED remains in the on state.
Flashing indicates that the LED is on for 2 seconds and off for 1 second.
Fast-flashing indicates that the LED is on for 300 ms and off for 300 ms.
State Definition NVDIMM-N Function LED (blue)
0 The restore operation is in progress. Flashing
1 The restore operation is successful. Solid or On
2 Erase is in progress. Flashing
3 The erase operation is successful. Solid or On
4 The NVDIMM-N is armed, and the NVDIMM-N is in normal operation. Solid or On
5 The save operation is in progress. Flashing
6 The NVDIMM-N finished saving and battery is still turned on (12 V still powered). Solid or On
7 The NVDIMM-N has an internal error or a firmware update is in progress. For more information about an NVDIMM-N internal error, see the IML. Fast-flashing

HPE Persistent Memory module label identification

Item Description Example
1 Unique ID number 8089-A2-1802-1234567
2 Model number NMA1XBD512G2S
3 Capacity 128 GB, 256 GB, or 512 GB
4 DataMatrix bar code Includes part number and serial number
For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/support/persistentmemoryQS).

Device numbers

8 SFF device bay numbering
8 SFF + 2 SFF device bay numbering
Item Description
1 Box 1, bays 1-8
2 Box 2, bays 1 and 2
4 LFF device bay numbering
10 SFF NVMe/SAS backplane option device bay numbering
When the 10 SFF NVMe/SAS backplane option is installed, NVMe drives must be installed in bays 9 and 10. The other bays support a mix of NVMe and SAS drives.
Optional rear device bay numbering
The optional rear device bay supports either 1 SFF drive in a SmartDrive carrier, or 2 uFF M.2 drives in an HPE Smart Carrier M.2 (SCM).
When the HPE SFF Flash Adapter is installed, the uFF drives are recognized as 1 and 101.

Hot-plug drive LED definitions

Item LED Status Definition
1 Locate
  Solid blue = The drive is being identified by a host application.
  Flashing blue = The drive carrier firmware is being updated or requires an update.
2 Activity ring
  Rotating green = Drive activity.
  Off = No drive activity.
3 Do not remove
  Solid white = Do not remove the drive. Removing the drive causes one or more of the logical drives to fail.
  Off = Removing the drive does not cause a logical drive to fail.
4 Drive status
  Solid green = The drive is a member of one or more logical drives.
  Flashing green = The drive is doing one of the following:
    Rebuilding
    Performing a RAID migration
    Performing a strip size migration
    Performing a capacity expansion
    Performing a logical drive extension
    Erasing
    Spare part activation
  Flashing amber/green = The drive is a member of one or more logical drives and predicts the drive will fail.
  Flashing amber = The drive is not configured and predicts the drive will fail.
  Solid amber = The drive has failed.
  Off = The drive is not configured by a RAID controller or a spare drive.

NVMe SSD LED definitions

The NVMe SSD is a PCIe bus device. A device attached to a PCIe bus cannot be removed without allowing the device and bus to complete and cease the signal/traffic flow.
CAUTION: Do not remove an NVMe SSD from the drive bay while the Do not remove LED is flashing. The Do not remove LED flashes to indicate that the device is still in use. Removing the NVMe SSD before the device has completed and ceased signal/traffic flow can cause loss of data.
Item LED Status Definition
1 Locate
  Solid blue = The drive is being identified by a host application.
  Flashing blue = The drive carrier firmware is being updated or requires an update.
2 Activity ring
  Rotating green = Drive activity
  Off = No drive activity
3 Drive status
  Solid green = The drive is a member of one or more logical drives.
  Flashing green = The drive is doing one of the following:
    Rebuilding
    Performing a RAID migration
    Performing a stripe size migration
    Performing a capacity expansion
    Performing a logical drive extension
    Erasing
  Flashing amber/green = The drive is a member of one or more logical drives and predicts the drive will fail.
  Flashing amber = The drive is not configured and predicts the drive will fail.
  Solid amber = The drive has failed.
  Off = The drive is not configured by a RAID controller.
4 Do not remove
  Solid white = Do not remove the drive. The drive must be ejected from the PCIe bus prior to removal.
  Flashing white = The drive ejection request is pending.
  Off = The drive has been ejected.
5 Power
  Solid green = Do not remove the drive. The drive must be ejected from the PCIe bus prior to removal.
  Flashing green = The drive ejection request is pending.
  Off = The drive has been ejected.

uFF drive components and LEDs

Item Description Status
1 Locate
  Off = Normal
  Solid blue = The drive is being identified by a host application
  Flashing blue = The drive firmware is being updated or requires an update
2 uFF drive ejection latch
  Removes the uFF drive when released
3 Do not remove LED
  Off = OK to remove the drive. Removing the drive does not cause a logical drive to fail.
  Solid white = Do not remove the drive. Removing the drive causes one or more of the logical drives to fail.
4 Drive status LED
  Off = The drive is not configured by a RAID controller
  Solid green = The drive is a member of one or more logical drives
  Flashing green (4 Hz) = The drive is operating normally and has activity
  Flashing green (1 Hz) = The drive is rebuilding or performing a RAID migration, stripe size migration, capacity expansion, logical drive extension, or is erasing
  Flashing amber/green (1 Hz) = The drive is a member of one or more logical drives that predicts the drive will fail
  Solid amber = The drive has failed
  Flashing amber (1 Hz) = The drive is not configured and predicts the drive will fail
5 Adapter ejection release latch and handle
  Removes the SFF flash adapter when released

Hot-plug fans

CAUTION: To avoid damage to server components, fan blanks must be installed in fan bays 1 and 2 in a single-processor configuration.
CAUTION: To avoid damage to the equipment, do not operate the server for extended periods of time if the server does not have the optimal number of fans installed. Although the server might boot, Hewlett Packard Enterprise does not recommend operating the server without the required fans installed and operating.
The valid fan configurations are listed in the following tables.
One-processor configuration
Fan bay 1 Fan bay 2 Fan bay 3 Fan bay 4 Fan bay 5 Fan bay 6 Fan bay 7
Fan blank Fan blank Fan Fan Fan Fan Fan
Two-processor configuration
Fan bay 1 Fan bay 2 Fan bay 3 Fan bay 4 Fan bay 5 Fan bay 6 Fan bay 7
Fan Fan Fan Fan Fan Fan Fan
The loss of a single fan rotor (one standard fan) causes loss of redundancy. The loss of two fan rotors (two standard fans or one high-performance fan) causes the server to initiate a shutdown.
The high-performance fans are used for 8 SFF +2 SFF NVMe and 10 SFF drive configurations when NVMe drives are installed in the server. They are also required for ASHRAE-compliant configurations. For more information on ASHRAE-compliant configurations, see the Hewlett Packard Enterprise website http://www.hpe.com/servers/ASHRAE.
The server supports variable fan speeds. The fans operate at minimum speed until a temperature change requires a fan speed increase to cool the server. The server shuts down during the following temperature-related scenarios:
At POST and in the OS, iLO performs an orderly shutdown if a cautionary temperature level is detected. If the server hardware detects a critical temperature level before an orderly shutdown occurs, the server performs an immediate shutdown.
When the Thermal Shutdown feature is disabled in the BIOS/Platform Configuration (RBSU), iLO does not perform an orderly shutdown when a cautionary temperature level is detected. Disabling this feature does not disable the server hardware from performing an immediate shutdown when a critical temperature level is detected.
CAUTION: A thermal event can damage server components when the Thermal Shutdown feature is disabled in the BIOS/Platform Configuration (RBSU).
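For remote management, BIOS/Platform Configuration (RBSU) settings such as Thermal Shutdown can also be inspected through the iLO Redfish BIOS resource. The sketch below is an assumption-labeled illustration only: the attribute name ThermalShutdown, the iLO address, and the credentials are placeholders that should be confirmed against the BIOS attribute registry for your platform.

# Hedged sketch: read a BIOS attribute through the iLO 5 Redfish interface.
# The attribute name "ThermalShutdown" is an assumption for illustration only;
# confirm the real name against the BIOS attribute registry for your platform.
import requests

ILO_HOST = "https://ilo-example.local"   # placeholder iLO address
AUTH = ("Administrator", "password")      # placeholder credentials

bios = requests.get(f"{ILO_HOST}/redfish/v1/Systems/1/Bios/",
                    auth=AUTH, verify=False).json()
print("Thermal Shutdown:", bios.get("Attributes", {}).get("ThermalShutdown"))

# Changing the value would be a PATCH to the pending Bios Settings resource followed
# by a reboot; it is omitted here because disabling the feature carries the
# thermal-damage risk called out in the caution above.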

HPE Smart Array P824i-p MR Gen10 Controller

Components
Item Description
1 Internal SAS port 1i
2 Internal SAS port 2i
3 Internal SAS port 3i
4 Internal SAS port 4i
5 Controller backup power cable connector
6 Internal SAS port 5i
7 Internal SAS port 6i

HPE InfiniBand HDR/Ethernet 940QSFP 56x16 adapter LEDs

Link LED status Description
Off = A link has not been established.
Solid amber = Active physical link exists.
Blinking amber = 4 Hz blinking amber indicates a problem with the physical link.
Solid green = A valid logical (data activity) link exists with no active traffic.
Blinking green = A valid logical link exists with active traffic.
2-port adapter LEDs are shown. The 1-port adapters have only a single LED.

Operations

Power up the server

To power up the server, use one of the following methods:
Press the Power On/Standby button.
Use the virtual power button through iLO.
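A hedged sketch of the virtual power button method follows. It assumes the standard Redfish ComputerSystem.Reset action that iLO 5 exposes; the host name and credentials are placeholders.

# Hedged sketch: power up the server with the iLO 5 virtual power button (Redfish).
# The iLO address and credentials are placeholders.
import requests

ILO_HOST = "https://ilo-example.local"
AUTH = ("Administrator", "password")

# "On" powers up a server that is currently in standby.
resp = requests.post(
    f"{ILO_HOST}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset/",
    json={"ResetType": "On"},
    auth=AUTH,
    verify=False,
)
print(resp.status_code)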

Power down the server

Before powering down the server for any upgrade or maintenance procedures, perform a backup of critical server data and programs.
IMPORTANT: When the server is in standby mode, auxiliary power is still being provided to the system.
To power down the server, use one of the following methods:
Press and release the Power On/Standby button.
This method initiates a controlled shutdown of applications and the OS before the server enters standby mode.
Press and hold the Power On/Standby button for more than 4 seconds to force the server to enter standby mode.
This method forces the server to enter standby mode without properly exiting applications and the OS. If an application stops responding, you can use this method to force a shutdown.
Use a virtual power button selection through iLO, as sketched at the end of this section.
This method initiates a controlled remote shutdown of applications and the OS before the server enters standby mode.
Before proceeding, verify that the server is in standby mode by observing that the system power LED is amber.
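For the iLO virtual power button method, the following sketch is illustrative only: the host and credentials are placeholders, the standard Redfish layout is assumed, and the final check mirrors the standby verification described above.

# Hedged sketch: controlled shutdown through the iLO 5 Redfish API, then confirm
# that the server reports standby. Host and credentials are placeholders.
import requests

ILO_HOST = "https://ilo-example.local"
AUTH = ("Administrator", "password")
SYSTEM_URI = f"{ILO_HOST}/redfish/v1/Systems/1/"

# "GracefulShutdown" asks the OS to shut down; "ForceOff" is the remote equivalent
# of holding the Power On/Standby button for more than 4 seconds.
requests.post(f"{SYSTEM_URI}Actions/ComputerSystem.Reset/",
              json={"ResetType": "GracefulShutdown"},
              auth=AUTH, verify=False)

# In standby, the system resource reports PowerState "Off" while auxiliary power
# keeps iLO itself running.
state = requests.get(SYSTEM_URI, auth=AUTH, verify=False).json().get("PowerState")
print("PowerState:", state)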

Extend the server from the rack

NOTE: If the optional cable management arm option is installed, you can extend the server without
powering down the server or disconnecting peripheral cables and power cords. These steps are only necessary with the standard cable management solution.
Procedure
1. Power down the server (Power down the server on page 32).
2. Disconnect all peripheral cables and power cords.
3. Loosen the front panel thumbscrews.
4. Extend the server on the rack rails until the server rail-release latches engage.
WARNING: To reduce the risk of personal injury or equipment damage, be sure that the rack is
adequately stabilized before extending a component from the rack.
WARNING: To reduce the risk of personal injury, be careful when pressing the server rail-release latches and sliding the server into the rack. The sliding rails could pinch your fingers.
5. After performing the installation or maintenance procedure, slide the server into the rack:
a. Slide the server fully into the rack.
b. Secure the server by tightening the thumbscrews.
6. Connect the peripheral cables and power cords.

Remove the server from the rack

To remove the server from a Hewlett Packard Enterprise, Compaq-branded, Telco, or third-party rack:
Procedure
1. Power down the server (Power down the server on page 32).
2. Extend the server from the rack (Extend the server from the rack on page 32).
3. Disconnect the cabling and remove the server from the rack. For more information, see the
documentation that ships with the rack mounting option.
4. Place the server on a sturdy, level surface.

Remove the access panel

WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal
system components to cool before touching them.
CAUTION: Do not operate the server for long periods with the access panel open or removed. Operating the server in this manner results in improper airflow and improper cooling that can lead to thermal damage.
To remove the component:
Procedure
1. Power down the server (Power down the server on page 32).
2. Extend the server from the rack (Extend the server from the rack on page 32).
3. Open or unlock the locking latch, slide the access panel to the rear of the chassis, and remove the
access panel.

Install the access panel

Procedure
1. Place the access panel on top of the server with the latch open.
Allow the panel to extend past the rear of the server approximately 1.25 cm (0.5 in).
2. Push down on the latch.
The access panel slides to a closed position.
3. Tighten the security screw on the latch, if needed.

Remove the hot-plug fan

Procedure
1. Observe the following alert:
IMPORTANT: After removing a high-performance (dual-rotor) fan, install or replace the fan within
60 seconds. Otherwise, the server will shut down gracefully.
2. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
3. Remove the access panel (Remove the access panel on page 33).
4. Remove the fan.
CAUTION: Do not operate the server for long periods with the access panel open or removed.
Operating the server in this manner results in improper airflow and improper cooling that can lead to thermal damage.
IMPORTANT: For optimum cooling, install fans in all primary fan locations.
To replace the component, reverse the removal procedure.

Remove the primary PCI riser cage

CAUTION: To prevent damage to the server or expansion boards, power down the server and
remove all AC power cords before removing or installing the PCI riser cage.
Procedure
1. Back up all server data.
2. Power down the server (Power down the server on page 32).
3. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
4. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
5. Remove the access panel (Remove the access panel on page 33).
6. Remove the PCI riser cage.

Install the primary PCI riser cage

Procedure
1. Install the PCI riser cage.
2. Install the access panel (Install the access panel on page 33).
3. Install the server into the rack (Installing the server into the rack on page 46).
4. Connect each power cord to the server.
5. Connect each power cord to the power source.
6. Power up the server (Power up the server on page 32).

Remove the secondary PCI riser cage

Procedure
1. Observe the following alert:
CAUTION: To prevent damage to the server or expansion boards, power down the server and
remove all AC power cords before removing or installing the PCI riser cage.
2. Back up all server data.
3. Power down the server (Power down the server on page 32).
4. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
5. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
6. Remove the access panel (Remove the access panel on page 33).
7. If needed, remove the primary PCI riser cage (Remove the primary PCI riser cage on page 35).
8. Disconnect any cables connected to the PCI riser cage.
9. Remove any expansion boards installed in the PCI riser cage.
10. Remove the PCI riser cage.

Install the secondary PCI riser cage

Procedure
1. Install the PCI riser cage.
2. If needed, install expansion boards (Installing an expansion board in the secondary riser cage on
page 96).
3. Install the access panel (Install the access panel on page 33).
4. Install the server into the rack (Installing the server into the rack on page 46).
5. Connect each power cord to the server.
6. Connect each power cord to the power source.
7. Power up the server (Power up the server on page 32).

Remove the 8 SFF drive backplane

Procedure
1. Back up all server data.
2. Power down the server (Power down the server on page 32).
3. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
4. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
5. Remove the access panel (Remove the access panel on page 33).
6. Remove all drives (Removing a hot-plug SAS or SATA hard drive on page 63).
7. Disconnect and remove all cables connected to the drive backplane.
8. Remove the 8 SFF SAS/SATA drive backplane.

Release the cable management arm

Release the cable management arm and then swing the arm away from the rack.

Setup

Optional service

Delivered by experienced, certified engineers, Hewlett Packard Enterprise support services help you keep your servers up and running with support packages tailored specifically for HPE ProLiant systems. Hewlett Packard Enterprise support services let you integrate both hardware and software support into a single package. A number of service level options are available to meet your business and IT needs.
Hewlett Packard Enterprise support services offer upgraded service levels to expand the standard product warranty with easy-to-buy, easy-to-use support packages that will help you make the most of your server investments. Some of the Hewlett Packard Enterprise support services for hardware, software or both are:
Foundation Care – Keep systems running.
6-Hour Call-to-Repair (1)
4-Hour 24x7
Next Business Day
Proactive Care – Help prevent service incidents and get you to technical experts when there is one.
6-Hour Call-to-Repair (1)
4-Hour 24x7
Next Business Day
Deployment service for both hardware and software
Hewlett Packard Enterprise Education Services – Help train your IT staff.
(1) The time commitment for this repair service might vary depending on the geographical region of the site. For more information about service availability at your site, contact your local Hewlett Packard Enterprise support center.
For more information on Hewlett Packard Enterprise support services, see the Hewlett Packard
Enterprise website.

Optimum environment

When installing the server in a rack, select a location that meets the environmental standards described in this section.

Space and airflow requirements

To allow for servicing and adequate airflow, observe the following space and airflow requirements when deciding where to install a rack:
Leave a minimum clearance of 63.5 cm (25 in) in front of the rack.
Leave a minimum clearance of 76.2 cm (30 in) behind the rack.
Leave a minimum clearance of 121.9 cm (48 in) from the back of the rack to the back of another rack or row of racks.
Hewlett Packard Enterprise servers draw in cool air through the front door and expel warm air through the rear door. Therefore, the front and rear rack doors must be adequately ventilated to allow ambient room air to enter the cabinet, and the rear door must be adequately ventilated to allow the warm air to escape from the cabinet.
CAUTION: To prevent improper cooling and damage to the equipment, do not block the ventilation openings.
When vertical space in the rack is not filled by a server or rack component, the gaps between the components cause changes in airflow through the rack and across the servers. Cover all gaps with blanking panels to maintain proper airflow.
CAUTION: Always use blanking panels to fill empty vertical spaces in the rack. This arrangement ensures proper airflow. Using a rack without blanking panels results in improper cooling that can lead to thermal damage.
The 9000 and 10000 Series Racks provide proper server cooling from flow-through perforations in the front and rear doors that provide 64 percent open area for ventilation.
CAUTION: When using a Compaq branded 7000 series rack, install the high airflow rack door insert (PN 327281-B21 for 42U rack, PN 157847-B21 for 22U rack) to provide proper front-to-back airflow and cooling.
CAUTION: If a third-party rack is used, observe the following additional requirements to ensure adequate airflow and to prevent damage to the equipment:
Front and rear doors—If the 42U rack includes closing front and rear doors, you must allow 5,350 sq cm (830 sq in) of holes evenly distributed from top to bottom to permit adequate airflow (equivalent to the required 64 percent open area for ventilation).
Side—The clearance between the installed rack component and the side panels of the rack must be a minimum of 7 cm (2.75 in).

Temperature requirements

To ensure continued safe and reliable equipment operation, install or position the system in a well-ventilated, climate-controlled environment.
The maximum recommended ambient operating temperature (TMRA) for most server products is 35°C (95°F). The temperature in the room where the rack is located must not exceed 35°C (95°F).
CAUTION: To reduce the risk of damage to the equipment when installing third-party options:
Do not permit optional equipment to impede airflow around the server or to increase the internal rack temperature beyond the maximum allowable limits.
Do not exceed the manufacturer’s TMRA.

Power requirements

Installation of this equipment must comply with local and regional electrical regulations governing the installation of information technology equipment by licensed electricians. This equipment is designed to operate in installations covered by NFPA 70, 1999 Edition (National Electric Code) and NFPA-75, 1992 (code for Protection of Electronic Computer/Data Processing Equipment). For electrical power ratings on options, refer to the product rating label or the user documentation supplied with that option.
WARNING: To reduce the risk of personal injury, fire, or damage to the equipment, do not overload
the AC supply branch circuit that provides power to the rack. Consult the electrical authority having jurisdiction over wiring and installation requirements of your facility.
CAUTION: Protect the server from power fluctuations and temporary interruptions with a regulating uninterruptible power supply. This device protects the hardware from damage caused by power surges and voltage spikes and keeps the system in operation during a power failure.

Electrical grounding requirements

The server must be grounded properly for safe operation. In the United States, you must install the equipment in accordance with NFPA 70, 1999 Edition (National Electric Code), Article 250, as well as any local and regional building codes. In Canada, you must install the equipment in accordance with Canadian Standards Association, CSA C22.1, Canadian Electrical Code. In all other countries, you must install the equipment in accordance with any regional or national electrical wiring codes, such as the International Electrotechnical Commission (IEC) Code 364, parts 1 through 7. Furthermore, you must be sure that all power distribution devices used in the installation, such as branch wiring and receptacles, are listed or certified grounding-type devices.
Because of the high ground-leakage currents associated with multiple servers connected to the same power source, Hewlett Packard Enterprise recommends the use of a PDU that is either permanently wired to the building’s branch circuit or includes a nondetachable cord that is wired to an industrial-style plug. NEMA locking-style plugs or those complying with IEC 60309 are considered suitable for this purpose. Using common power outlet strips for the server is not recommended.

Connecting a DC power cable to a DC power source

WARNING: To reduce the risk of electric shock or energy hazards:
This equipment must be installed by trained service personnel, as defined by the NEC and IEC 60950-1, Second Edition, the standard for Safety of Information Technology Equipment.
Connect the equipment to a reliably grounded Secondary circuit source. A Secondary circuit has no direct connection to a Primary circuit and derives its power from a transformer, converter, or equivalent isolation device.
The branch circuit overcurrent protection must be rated 27 A.
WARNING: When installing a DC power supply, the ground wire must be connected before the positive or negative leads.
WARNING: Remove power from the power supply before performing any installation steps or maintenance on the power supply.
CAUTION: The server equipment connects the earthed conductor of the DC supply circuit to the earthing conductor at the equipment. For more information, see the documentation that ships with the power supply.
CAUTION: If the DC connection exists between the earthed conductor of the DC supply circuit and the earthing conductor at the server equipment, the following conditions must be met:
This equipment must be connected directly to the DC supply system earthing electrode conductor or to a bonding jumper from an earthing terminal bar or bus to which the DC supply system earthing electrode conductor is connected.
This equipment should be located in the same immediate area (such as adjacent cabinets) as any other equipment that has a connection between the earthed conductor of the same DC supply circuit and the earthing conductor, and also the point of earthing of the DC system. The DC system should not be earthed elsewhere.
The DC supply source is to be located within the same premises as the equipment.
Switching or disconnecting devices should not be in the earthed circuit conductor between the DC source and the point of connection of the earthing electrode conductor.
To connect a DC power cable to a DC power source:
1. Cut the DC power cord ends no shorter than 150 cm (59.06 in).
2. If the power source requires ring tongues, use a crimping tool to install the ring tongues on the power
cord wires.
IMPORTANT: The ring terminals must be UL approved and accommodate 12 gauge wires.
IMPORTANT: The minimum nominal thread diameter of a pillar or stud type terminal must be 3.5
mm (0.138 in); the diameter of a screw type terminal must be 4.0 mm (0.157 in).
3. Stack each same-colored pair of wires and then attach them to the same power source. The power
cord consists of three wires (black, red, and green).
For more information, see the documentation that ships with the power supply.

Server warnings and cautions

WARNING: This server is heavy. To reduce the risk of personal injury or damage to the equipment:
Observe local occupational health and safety requirements and guidelines for manual material handling.
Get help to lift and stabilize the product during installation or removal, especially when the product is not fastened to the rails. Hewlett Packard Enterprise recommends that a minimum of two people are required for all rack server installations. If the server is installed higher than chest level, a third person may be required to help align the server.
Use caution when installing the server in or removing the server from the rack; it is unstable when not fastened to the rails.
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal system components to cool before touching them.
WARNING: To reduce the risk of personal injury, electric shock, or damage to the equipment,
remove the power cord to remove power from the server. The front panel Power On/Standby button does not completely shut off system power. Portions of the power supply and some internal circuitry remain active until AC/DC power is removed.
WARNING: To reduce the risk of fire or burns after removing the energy pack:
Do not disassemble, crush, or puncture the energy pack.
Do not short external contacts.
Do not dispose of the energy pack in fire or water.
After power is disconnected, battery voltage might still be present for 1s to 160s.
CAUTION: Protect the server from power fluctuations and temporary interruptions with a regulating uninterruptible power supply. This device protects the hardware from damage caused by power surges and voltage spikes and keeps the system in operation during a power failure.
CAUTION: Do not operate the server for long periods with the access panel open or removed. Operating the server in this manner results in improper airflow and improper cooling that can lead to thermal damage.

Rack warnings

WARNING: To reduce the risk of personal injury or damage to the equipment, be sure that:
The leveling jacks are extended to the floor.
The full weight of the rack rests on the leveling jacks.
The stabilizing feet are attached to the rack if it is a single-rack installation.
The racks are coupled together in multiple-rack installations.
Only one component is extended at a time. A rack may become unstable if more than one component is extended for any reason.
WARNING: To reduce the risk of personal injury or equipment damage when unloading a rack:
At least two people are needed to safely unload the rack from the pallet. An empty 42U rack can weigh as much as 115 kg (253 lb), can stand more than 2.1 m (7 ft) tall, and might become unstable when being moved on its casters.
Never stand in front of the rack when it is rolling down the ramp from the pallet. Always handle the rack from both sides.
WARNING: To reduce the risk of personal injury or damage to the equipment, adequately stabilize the rack before extending a component outside the rack. Extend only one component at a time. A rack may become unstable if more than one component is extended.
WARNING: When installing a server in a telco rack, be sure that the rack frame is adequately secured at the top and bottom to the building structure.

Identifying the contents of the server shipping carton

Unpack the server shipping carton and locate the materials and documentation necessary for installing the server. All the rack mounting hardware necessary for installing the server into the rack is included with the rack or the server.
The contents of the server shipping carton include:
Server
Power cord
Hardware documentation and software products
Rack-mounting hardware and documentation
In addition to the supplied items, you might need:
Operating system or application software
Hardware options
Screwdriver

Installing hardware options

Install any hardware options before initializing the server. For options installation information, refer to the option documentation. For server-specific information, refer to "Hardware options installation."

Installing the server into the rack

To install the server into a rack with square, round, or threaded holes, refer to the instructions that ship with the rack hardware kit.
WARNING: This server is heavy. To reduce the risk of personal injury or damage to the equipment:
Observe local occupational health and safety requirements and guidelines for manual material handling.
Get help to lift and stabilize the product during installation or removal, especially when the product is not fastened to the rails. Hewlett Packard Enterprise recommends that a minimum of two people are required for all rack server installations. A third person may be required to help align the server if the server is installed higher than chest level.
Use caution when installing the server in or removing the server from the rack; it is unstable when not fastened to the rails.
CAUTION: Always plan the rack installation so that the heaviest item is on the bottom of the rack. Install the heaviest item first, and continue to populate the rack from the bottom to the top.
Procedure
1. Install the server and cable management arm into the rack. For more information, see the installation
instructions that ship with the selected rail system.
2. Connect peripheral devices to the server. For more information, see Rear panel components on page 15.
3. Connect the power cord to the rear of the server.
4. Use the hook-and-loop strap to secure the power cord.
5. Connect the power cord to the power source.

Operating system

This ProLiant server does not ship with provisioning media. Everything required to manage and install the system software and firmware is preloaded on the server.
To operate properly, the server must have a supported operating system. Attempting to run an unsupported operating system can cause serious and unpredictable results. For the latest information on operating system support, see the Hewlett Packard Enterprise website.
Failure to observe UEFI requirements for ProLiant Gen10 servers can result in errors installing the operating system, failure to recognize boot media, and other boot failures. For more information on these requirements, see the HPE UEFI Requirements on the Hewlett Packard Enterprise website.
To install an operating system on the server, use one of the following methods:
Intelligent Provisioning—For single-server deployment, updating, and provisioning capabilities. For
more information, see Installing the operating system with Intelligent Provisioning on page 47.
Insight Control server provisioning—For multiserver remote OS deployment, use Insight Control server provisioning for an automated solution. For more information, see the Insight Control documentation on the Hewlett Packard Enterprise website.
For additional system software and firmware updates, download the Service Pack for ProLiant from the Hewlett Packard Enterprise website. Software and firmware must be updated before using the server for the first time, unless any installed software or components require an older version.
For more information, see Keeping the system current on page 162.
For more information on using these installation methods, see the Hewlett Packard Enterprise website.

Installing the operating system with Intelligent Provisioning

Procedure
1. Connect the Ethernet cable between the network connector on the server and a network jack.
2. Press the Power On/Standby button.
3. During server POST, press F10.
4. Complete the initial Preferences and Registration portion of Intelligent Provisioning.
5. At the 1 Start screen, click Configure and Install.
6. To finish the installation, follow the onscreen prompts. An Internet connection is required to update the
firmware and systems software.

Selecting boot options in UEFI Boot Mode

On servers operating in UEFI Boot Mode, the boot controller and boot order are set automatically.
Procedure
1. Press the Power On/Standby button.
2. During the initial boot:
To modify the server configuration ROM default settings, press the F9 key in the ProLiant POST
screen to enter the UEFI System Utilities screen. By default, the System Utilities menus are in the English language.
If you do not need to modify the server configuration and are ready to install the system software,
press the F10 key to access Intelligent Provisioning.
For more information on automatic configuration, see the UEFI documentation on the Hewlett Packard
Enterprise website.

Selecting boot options

This server supports both Legacy BIOS Boot Mode and UEFI Boot Mode. On servers operating in UEFI Boot Mode, the boot controller and boot order are set automatically.
Procedure
1. Press the Power On/Standby button.
2. Do one of the following:
a. To enter the UEFI System Utilities screen and modify the server configuration ROM default
settings, press the F9 key on the ProLiant POST screen. Choose one of the following boot modes:
Legacy BIOS
UEFI (default)
b. If you do not need to modify the server configuration and are ready to install the system software,
press the F10 key to access Intelligent Provisioning.
For more information on automatic configuration, see the UEFI documentation on the Hewlett Packard
Enterprise website.
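The configured boot mode can also be checked out-of-band through the iLO 5 Redfish interface. The following Python sketch is illustrative only and is not taken from this guide: the iLO address and credentials are placeholders, and the BootMode attribute name is an assumption that should be confirmed against the UEFI and iLO RESTful API documentation referenced above.

# Illustrative sketch only: read the configured boot mode from iLO 5 over the
# Redfish API. The iLO address, credentials, and the "BootMode" attribute name
# are assumptions; confirm them against the iLO RESTful API documentation.
import requests

ILO = "https://ilo-hostname"          # placeholder iLO address
AUTH = ("Administrator", "password")  # placeholder credentials

resp = requests.get(f"{ILO}/redfish/v1/Systems/1/Bios/", auth=AUTH, verify=False)
resp.raise_for_status()
attributes = resp.json().get("Attributes", {})
print("Boot mode:", attributes.get("BootMode", "unknown"))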

Registering the server

To experience quicker service and more efficient support, register the product at the Hewlett Packard Enterprise Product Registration website.

Hardware options installation

Hewlett Packard Enterprise product QuickSpecs

For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).

Introduction

Install any hardware options before initializing the server. For options installation information, see the option documentation. For server-specific information, use the procedures in this section.
If multiple options are being installed, read the installation instructions for all the hardware options to identify similar steps and streamline the installation process.
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal system components to cool before touching them.
CAUTION: To prevent damage to electrical components, properly ground the server before beginning any installation procedure. Improper grounding can cause electrostatic discharge.

Installing a redundant hot-plug power supply

Prerequisites
Before installing this option, be sure you have the following:
The components included with the hardware option kit
Procedure
1. Observe the following alerts:
CAUTION: All power supplies installed in the server must have the same output power capacity.
Verify that all power supplies have the same part number and label color. The system becomes unstable and may shut down when it detects mismatched power supplies.
CAUTION: To prevent improper cooling and thermal damage, do not operate the server unless all bays are populated with either a component or a blank.
2. Access the product rear panel (Release the cable management arm on page 39).
3. Remove the blank.
WARNING: To reduce the risk of personal injury from hot surfaces, allow the power supply or
power supply blank to cool before touching it.
4. Insert the power supply into the power supply bay until it clicks into place.
5. Connect the power cord to the power supply.
6. Route the power cord. Use best practices when routing power cords and other cables. A cable
management arm is available to help with routing. To obtain a cable management arm, contact a Hewlett Packard Enterprise authorized reseller.
7. Connect the power cord to the AC power source.
8. Be sure that the power supply LED is green (Rear panel LEDs on page 15).

Memory options

IMPORTANT: This server does not support mixing LRDIMMs and RDIMMs. Attempting to mix any
combination of these DIMMs can cause the server to halt during BIOS initialization. All memory installed in the server must be of the same type.

DIMM and NVDIMM population information

For specific DIMM and NVDIMM population information, see the DIMM population guidelines on the Hewlett Packard Enterprise website (http://www.hpe.com/docs/memory-population-rules).

DIMM-processor compatibility

The installed processor determines the type of DIMM that is supported in the server:
First-generation Intel Xeon Scalable processors support DDR4-2666 DIMMs.
Second-generation Intel Xeon Scalable processors support DDR4-2933 DIMMs.
Mixing DIMM types is not supported. Install only the supported DDR4-2666 or DDR4-2933 DIMMs in the server.

HPE SmartMemory speed information

For more information about memory speed, see the Hewlett Packard Enterprise website (https://www.hpe.com/docs/memory-speed-table).

Installing a DIMM

The server supports up to 24 DIMMs.
Prerequisites
Before installing this option, be sure you have the following:
The components included with the hardware option kit
For more information on specific options, see the server QuickSpecs on the Hewlett Packard Enterprise
website.
Procedure
1. Power down the server (Power down the server on page 32).
2. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
3. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
4. Remove the access panel (Remove the access panel on page 33).
5. Open the DIMM slot latches.
6. Install the DIMM.
7. Install the access panel (Install the access panel on page 33).
8. Install the server in the rack.
9. Connect each power cord to the server.
10. Connect each power cord to the power source.
11. Power up the server (Power up the server on page 32).
Use the BIOS/Platform Configuration (RBSU) in the UEFI System Utilities to configure the memory mode.
For more information about LEDs and troubleshooting failed DIMMs, see Systems Insight Display
combined LED descriptions on page 13.
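As a complement to the LED indicators, the installed memory inventory and its reported health can also be read remotely through the iLO 5 Redfish interface. The sketch below is an assumption-based example rather than part of the documented procedure; the iLO address and credentials are placeholders.

# Illustrative sketch: list installed DIMMs and their reported health through
# the standard Redfish memory collection exposed by iLO 5. Host and
# credentials are placeholders, not values from this guide.
import requests

ILO = "https://ilo-hostname"
AUTH = ("Administrator", "password")

coll = requests.get(f"{ILO}/redfish/v1/Systems/1/Memory/",
                    auth=AUTH, verify=False).json()
for member in coll.get("Members", []):
    dimm = requests.get(f"{ILO}{member['@odata.id']}",
                        auth=AUTH, verify=False).json()
    print(dimm.get("DeviceLocator"),
          dimm.get("CapacityMiB"), "MiB,",
          dimm.get("Status", {}).get("Health"))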

HPE 16GB NVDIMM option

HPE NVDIMMs are flash-backed NVDIMMs used as fast storage and are designed to eliminate smaller storage bottlenecks. The HPE 16GB NVDIMM for HPE ProLiant Gen10 servers is ideal for smaller database storage bottlenecks, write caching tiers, and any workload constrained by storage bottlenecks.
The HPE 16GB NVDIMM is supported on select HPE ProLiant Gen10 servers with first generation Intel Xeon Scalable processors. The server can support up to 12 NVDIMMs in 2 socket servers (up to 192GB) and up to 24 NVDIMMs in 4 socket servers (up to 384GB). The HPE Smart Storage Battery provides backup power to the memory slots allowing data to be moved from the DRAM portion of the NVDIMM to the Flash portion for persistence during a power down event.
For more information on HPE NVDIMMs, see the Hewlett Packard Enterprise website (http://www.hpe.com/info/persistentmemory).
NVDIMM-processor compatibility
HPE 16GB NVDIMMs are only supported in servers with first generation Intel Xeon Scalable processors installed.
Server requirements for NVDIMM support
Before installing an HPE 16GB NVDIMM in a server, make sure that the following components and software are available:
A supported HPE server using Intel Xeon Scalable Processors: For more information, see the NVDIMM QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).
An HPE Smart Storage Battery
A minimum of one regular DIMM: The system cannot have only NVDIMM-Ns installed.
A supported operating system with persistent memory/NVDIMM drivers. For the latest software information, see the Hewlett Packard Enterprise website (http://persistentmemory.hpe.com).
For minimum firmware versions, see the HPE 16GB NVDIMM User Guide on the Hewlett Packard Enterprise website (http://www.hpe.com/info/nvdimm-docs).
To determine NVDIMM support for your server, see the server QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).
Installing an NVDIMM
CAUTION: To avoid damage to the hard drives, memory, and other system components, the air
baffle, drive blanks, and access panel must be installed when the server is powered up.
CAUTION: To avoid damage to the hard drives, memory, and other system components, be sure to install the correct DIMM baffles for your server model.
CAUTION: DIMMs are keyed for proper alignment. Align notches in the DIMM with the corresponding notches in the DIMM slot before inserting the DIMM. Do not force the DIMM into the slot. When installed properly, not all DIMMs will face in the same direction.
CAUTION: Electrostatic discharge can damage electronic components. Be sure you are properly grounded before beginning this procedure.
CAUTION: Failure to properly handle DIMMs can damage the DIMM components and the system board connector. For more information, see the DIMM handling guidelines in the troubleshooting guide for your product on the Hewlett Packard Enterprise website:
HPE ProLiant Gen10 (http://www.hpe.com/info/gen10-troubleshooting)
HPE Synergy (http://www.hpe.com/info/synergy-troubleshooting)
CAUTION: Unlike traditional storage devices, NVDIMMs are fully integrated with the ProLiant server. Data loss can occur when system components, such as the processor or HPE Smart Storage Battery, fail. The HPE Smart Storage Battery is a critical component required to perform the backup functionality of NVDIMMs. It is important to act immediately when an HPE Smart Storage Battery failure occurs. Always follow best practices for ensuring data protection.
Prerequisites
Before installing an NVDIMM, be sure the server meets the Server requirements for NVDIMM support on page 52.
Procedure
1. Power down the server (Power down the server on page 32).
2. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
3. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
4. Remove the access panel (Remove the access panel on page 33).
5. Locate any NVDIMMs already installed in the server.
6. Verify that all LEDs on any installed NVDIMMs are off.
7. Install the NVDIMM.
8. Install and connect the HPE Smart Storage Battery, if it is not already installed.
Installing an energy pack in 8 SFF and 4 LFF configurations on page 136
Installing an energy pack in the 10 SFF SAS/SATA/NVMe Combo backplane configuration
on page 133
9. Install any components removed to access the DIMM slots and the HPE Smart Storage Battery.
10. Install the access panel.
11. Slide or install the server into the rack.
12. If removed, reconnect all power cables.
13. Power up the server.
14. If required, sanitize the NVDIMM-Ns. For more information, see NVDIMM sanitization on page 55.
Configuring the server for NVDIMMs
After installing NVDIMMs, configure the server for NVDIMMs. For information on configuring settings for NVDIMMs, see the HPE 16GB NVDIMM User Guide on the Hewlett Packard Enterprise website (http://www.hpe.com/info/nvdimm-docs).
The server can be configured for NVDIMMs using either of the following:
UEFI System Utilities—Use System Utilities through the Remote Console to configure the server for NVDIMM memory options by pressing the F9 key during POST. For more information about UEFI System Utilities, see the Hewlett Packard Enterprise website (http://www.hpe.com/info/uefi/docs).
iLO RESTful API for HPE iLO 5—For more information about configuring the system for NVDIMMs, see https://hewlettpackard.github.io/ilo-rest-api-docs/ilo5/.
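For the iLO RESTful API path, the general pattern is to PATCH the pending BIOS settings resource; the change takes effect at the next reboot. The following sketch only illustrates that pattern: the attribute name is a placeholder, and the real NVDIMM option names must be taken from the iLO RESTful API documentation linked above.

# Illustrative sketch of the PATCH-pending-settings pattern used by the iLO 5
# RESTful (Redfish) API. "ExampleNvdimmOption" is a hypothetical attribute
# name; the host and credentials are placeholders.
import requests

ILO = "https://ilo-hostname"
AUTH = ("Administrator", "password")

payload = {"Attributes": {"ExampleNvdimmOption": "Enabled"}}  # hypothetical name
resp = requests.patch(f"{ILO}/redfish/v1/Systems/1/Bios/Settings/",
                      json=payload, auth=AUTH, verify=False)
print(resp.status_code, resp.reason)  # setting is applied at the next server reboot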
NVDIMM sanitization
Media sanitization is defined by NIST SP800-88 Guidelines for Media Sanitization (Rev 1, Dec 2014) as "a general term referring to the actions taken to render data written on media unrecoverable by both ordinary and extraordinary means."
The specification defines the following levels:
Clear: Overwrite user-addressable storage space using standard write commands; might not sanitize data in areas not currently user-addressable (such as bad blocks and overprovisioned areas)
Purge: Overwrite or erase all storage space that might have been used to store data using dedicated device sanitize commands, such that data retrieval is "infeasible using state-of-the-art laboratory techniques"
Destroy: Ensure that data retrieval is "infeasible using state-of-the-art laboratory techniques" and render the media unable to store data (such as disintegrate, pulverize, melt, incinerate, or shred)
The NVDIMM-N Sanitize options are intended to meet the Purge level.
For more information on sanitization for NVDIMMs, see the following sections in the HPE 16GB NVDIMM User Guide on the Hewlett Packard Enterprise website (http://www.hpe.com/info/nvdimm-docs):
NVDIMM sanitization policies
NVDIMM sanitization guidelines
Setting the NVDIMM-N Sanitize/Erase on the Next Reboot Policy
NIST SP800-88 Guidelines for Media Sanitization (Rev 1, Dec 2014) is available for download from the NIST website (http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-88r1.pdf).
NVDIMM relocation guidelines
Requirements for relocating NVDIMMs or a set of NVDIMMs when the data must be preserved
The destination server hardware must match the original server hardware configuration.
All System Utilities settings in the destination server must match the settings in the original server.
If NVDIMM-Ns are used with NVDIMM Interleaving ON mode in the original server, do the following:
Install the NVDIMMs in the same DIMM slots in the destination server.
Install the entire NVDIMM set (all the NVDIMM-Ns on the processor) on the destination server.
This guideline would apply when replacing a system board due to system failure.
If any of the requirements cannot be met during NVDIMM relocation, do the following:
Manually back up the NVDIMM-N data before relocating NVDIMM-Ns to another server.
Relocate the NVDIMM-Ns to another server.
Sanitize all NVDIMM-Ns on the new server before using them.
Requirements for relocating NVDIMMs or a set of NVDIMMs when the data does not have to be preserved
If data on the NVDIMM-N or set of NVDIMM-Ns does not have to be preserved, do the following:
Move the NVDIMM-Ns to the new location and sanitize all NVDIMM-Ns after installing them to the new location. For more information, see NVDIMM sanitization on page 55.
Observe all DIMM and NVDIMM population guidelines. For more information, see DIMM and NVDIMM population information on page 50.
Observe the process for removing an NVDIMM.
Observe the process for installing an NVDIMM.
Review and configure the system settings for NVDIMMs. For more information, see Configuring the
server for NVDIMMs on page 54.

HPE Persistent Memory option

HPE Persistent Memory, which offers the flexibility to deploy as dense memory or fast storage and features Intel Optane DC Persistent Memory, enables per-socket memory capacity of up to 3.0 TB. HPE Persistent Memory, together with traditional volatile DRAM DIMMs, provides fast, high-capacity, cost-effective memory and storage to transform big data workloads and analytics by enabling data to be stored, moved, and processed quickly.
HPE Persistent Memory modules use the standard DIMM form factor and are installed alongside DIMMs in a server memory slot. HPE Persistent Memory modules are designed for use only with second-generation Intel Xeon Scalable processors, and are available in the following capacities:
128 GB
256 GB
512 GB
HPE Persistent Memory module-processor compatibility
HPE Persistent Memory modules are supported only in servers with second-generation Intel Xeon Scalable processors installed.
HPE Persistent Memory population information
For specific population and configuration information, see the memory population guidelines on the Hewlett Packard Enterprise website (http://www.hpe.com/docs/memory-population-rules).
System requirements for HPE Persistent Memory module support
IMPORTANT: Hewlett Packard Enterprise recommends that you implement best practice
configurations for high availability (HA) such as clustered configurations.
Before installing HPE Persistent Memory modules, make sure that the following components and software are available:
A supported HPE ProLiant Gen10 server or Synergy compute module using second-generation Intel Xeon Scalable processors. For more information, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/support/persistentmemoryQS).
HPE DDR4 Standard Memory RDIMMs or LRDIMMs (the number will vary based on your chosen configuration).
Supported firmware and drivers:
System ROM version 2.10 or later
Server Platform Services (SPS) Firmware version 04.01.02.296
HPE iLO 5 Firmware version 1.43
HPE Innovation Engine Firmware version 2.1.x or later
Download the required firmware and drivers from the Hewlett Packard Enterprise website (http://www.hpe.com/info/persistentmemory).
A supported operating system:
Windows Server 2012 R2 with persistent memory drivers from Hewlett Packard Enterprise
Windows Server 2016 with persistent memory drivers from Hewlett Packard Enterprise
Windows Server 2019
Red Hat Enterprise Linux 7.6
SUSE Linux Enterprise Server 15 SP1
VMware vSphere 6.7 U1
Hardware and licensing requirements for optional encryption of the HPE Persistent Memory modules:
HPE TPM 2.0 (local key encryption)
HPE iLO Advanced Pack license (remote key encryption)
Key management server (remote key encryption)
For more information, see the HPE Persistent Memory User Guide on the Hewlett Packard Enterprise website (http://www.hpe.com/info/persistentmemory-docs).
Installing HPE Persistent Memory modules
Use this procedure only for new HPE Persistent Memory module installations. If you are migrating this HPE Persistent Memory module from another server, see the HPE Persistent Memory User Guide on the Hewlett Packard Enterprise website (http://www.hpe.com/info/persistentmemory-docs).
Prerequisites
Before you perform this procedure, make sure that you have the following items available:
The components included with the hardware option kit
A T-10 Torx screwdriver might be needed to unlock the access panel.
Procedure
1. Observe the following alerts:
CAUTION: DIMMs and HPE Persistent Memory modules are keyed for proper alignment. Align
notches on the DIMM or HPE Persistent Memory module with the corresponding notches in the slot before installing the component. Do not force the DIMM or HPE Persistent Memory module into the slot. When installed properly, not all DIMMs or HPE Persistent Memory modules will face in the same direction.
CAUTION: Electrostatic discharge can damage electronic components. Be sure you are properly grounded before beginning this procedure.
CAUTION: Failure to properly handle HPE Persistent Memory modules can damage the component and the system board connector.
IMPORTANT: Hewlett Packard Enterprise recommends that you implement best practice configurations for high availability (HA) such as clustered configurations.
2. Power down the server (Power down the server on page 32).
3. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
4. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
5. Place the server on a flat, level work surface.
6. Remove the access panel (Remove the access panel on page 33).
7. Open the DIMM slot latches.
8. Install the HPE Persistent Memory module.
9. Install the access panel (Install the access panel on page 33).
10. Slide or install the server into the rack.
11. If removed, reconnect all power cables.
12. Power up the server.
13. Configure the server for HPE Persistent Memory.
For more information, see Configuring the server for HPE Persistent Memory on page 59.
Configuring the server for HPE Persistent Memory
After installing HPE Persistent Memory modules, configure the server for HPE Persistent Memory.
IMPORTANT: Always follow recommendations from your software application provider for high-availability best practices to ensure maximum uptime and data protection.
A number of configuration tools are available, including:
UEFI System Utilities—Access System Utilities through the Remote Console to configure the server by pressing the F9 key during POST.
iLO RESTful API—Use the iLO RESTful API through tools such as the RESTful Interface Tool (ilorest) or other third-party tools.
HPE Persistent Memory Management Utility—The HPE Persistent Memory Management Utility is a desktop application used to configure the server for HPE Persistent Memory, as well as evaluate and monitor the server memory configuration layout.
For more information, see the HPE Persistent Memory User Guide on the Hewlett Packard Enterprise website (http://www.hpe.com/info/persistentmemory-docs).
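As one illustration of the iLO RESTful API route, the RESTful Interface Tool (ilorest) can be scripted to review the current BIOS attribute set before making persistent memory changes. The commands below are a minimal sketch; the iLO address and credentials are placeholders, and the specific persistent memory commands and attribute names should be taken from the HPE Persistent Memory User Guide rather than from this example.

# Minimal sketch: drive the RESTful Interface Tool (ilorest) from Python to
# log in to iLO and list the current BIOS attributes before configuring
# persistent memory. Address and credentials are placeholders.
import subprocess

ILO = "ilo-hostname"
USER, PASSWORD = "Administrator", "password"

subprocess.run(["ilorest", "login", ILO, "-u", USER, "-p", PASSWORD], check=True)
subprocess.run(["ilorest", "select", "Bios."], check=True)  # work with BIOS attributes
subprocess.run(["ilorest", "get"], check=True)              # list current values
subprocess.run(["ilorest", "logout"], check=True)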

Installing a high-performance fan

This kit is available to support extended ambient operating temperatures above 35°C. For more information about the qualifications for extended ambient configurations, see the Hewlett Packard Enterprise website.
The high-performance fans are used in 8 SFF and 10 SFF drive configurations. They are also required for the 10 SFF SAS/SATA/NVMe Combo backplane option and for ASHRAE compliant configurations. For more information on ASHRAE compliant configurations, see the Hewlett Packard Enterprise website.
Prerequisites
Before installing this option, be sure you have the following:
The components included with the hardware option kit
Procedure
1. Observe the following alert:
IMPORTANT: After removing a high-performance (dual-rotor) fan, install or replace the fan
within 60 seconds. Otherwise, the server will shut down gracefully.
2. Power down the server (Power down the server on page 32).
3. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
4. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
5. Remove the access panel (Remove the access panel on page 33).
6. Remove all standard fans from the fan bays.
7. Remove fan blanks from the fan bays, if installed.
8. Install high-performance fans in each of the seven fan bays.
If needed, ensure each fan is securely installed by pressing the tab. Do not press on other areas of the fan.
9. Install the access panel (Install the access panel on page 33).
10. Slide the server into the rack.
11. Connect each power cord to the server.
12. Connect each power cord to the power source.
13. Power up the server (Power up the server on page 32).

Drive options

Depending on the configuration, this server supports SAS, SATA, NVMe, and uFF M.2 drives. For more information on drive support, see Device numbers on page 23.
When adding hard drives to the server, observe the following general guidelines:
The system automatically sets all device numbers.
If only one hard drive is used, install it in the bay with the lowest device number.
Drives should be the same capacity to provide the greatest storage space efficiency when drives are grouped together into the same drive array.

Hot-plug drive guidelines

When adding drives to the server, observe the following general guidelines:
The system automatically sets all device numbers.
If only one drive is used, install it in the bay with the lowest device number.
Drives should be the same capacity to provide the greatest storage space efficiency when drives are grouped together into the same drive array.

Removing the hard drive blank

Remove the component as indicated.

Installing a hot-plug SAS or SATA drive

Prerequisites
Before installing this option, be sure that you have the following:
The components included with the hardware option kit
Procedure
1. Remove the drive blank.
2. Prepare the drive.
3. Install the drive.
4. Determine the status of the drive from the drive LED definitions (Hot-plug drive LED definitions on
page 25).

Removing a hot-plug SAS or SATA hard drive

CAUTION: For proper cooling, do not operate the server without the access panel, baffles,
expansion slot covers, or blanks installed. If the server supports hot-plug components, minimize the amount of time the access panel is open.
1. Determine the status of the drive from the hot-plug drive LED definitions.
2. Back up all server data on the drive.
3. Remove the drive.

Installing the NVMe drives

NVMe drives are supported in 8 SFF and 10 SFF server configurations when the 10 SFF SAS/SATA/NVMe Combo backplane option or the 2 SFF NVMe backplane option is installed. When either backplane is installed, NVMe drives are required in bays 9 and 10. For more information, see Device numbers on page 23.
Prerequisites
NVMe drives are supported in the 8 SFF and 10 SFF server configurations.
Before installing this option, be sure you have the following:
The components included with the hardware option kit
Procedure
1. Observe the following alert:
CAUTION: To prevent improper cooling and thermal damage, do not operate the server unless
all bays are populated with either a component or a blank.
2. Remove the drive blank, if installed.
3. Press the Do Not Remove button to open the release handle.
4. Install the drives.
5. Install an SFF drive blank in any unused drive bays.

Removing and replacing an NVMe drive

An NVMe SSD is a PCIe bus device. A device attached to a PCIe bus cannot be removed until the device and the bus have completed and ceased signal/traffic flow.
Procedure
1. Back up all server data.
2. Observe the LED status of the drive and determine if it can be removed.
3. Remove the drive:
a. Push the Power button.
The Do Not Remove button illuminates and flashes.
b. Wait until the flashing stops and the Do Not Remove button is no longer illuminated.
c. Push the Do Not Remove button and then remove the drive.

Installing a uFF drive and SCM drive carrier

IMPORTANT: Not all drive bays support the drive carrier. To find supported bays, see the server
QuickSpecs.
Procedure
1. If needed, install the uFF drive into the drive carrier.
2. Remove the drive blank.
3. Install the drives.
Push firmly near the ejection handle until the latching spring engages with the drive bay.
4. Power on the server.
To configure the drive, use HPE Smart Storage Administrator.
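HPE Smart Storage Administrator is also available as a command line tool (ssacli). The sketch below only illustrates the general flow and is not part of the documented procedure; the controller slot and drive ID are placeholders that must be read from the show config output for your system.

# Illustrative sketch: use the Smart Storage Administrator CLI (ssacli) from
# Python to confirm the controller sees the new drive and, as an example only,
# build a single-drive logical drive. Slot and drive ID are placeholders.
import subprocess

# List controllers, arrays, and physical drives to locate the new drive.
subprocess.run(["ssacli", "ctrl", "all", "show", "config"], check=True)

# Placeholder example: create a RAID 0 logical drive on one physical drive.
subprocess.run(["ssacli", "ctrl", "slot=0", "create", "type=ld",
                "drives=1I:1:1", "raid=0"], check=True)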

Removing and replacing a uFF drive

Procedure
1. Back up all server data.
2. Observe the LED status of the drive and determine if it can be removed.
3. Remove the drive.
To remove the drive carrier:
To replace the component, reverse the removal procedure.

Installing an 8 SFF optical drive

Prerequisites
Before installing an optical drive, be sure the 8 SFF display port/USB/optical blank option is installed. For more information, see Installing an 8 SFF display port/USB/optical blank option on page 76.
Procedure
1. Remove the optical drive blank.
2. Install the optical drive.
3. Connect the optical drive cable.

Universal media bay options

Installing a 2 SFF SAS/SATA drive cage

Prerequisites
Universal media bay options are compatible only with the 8 SFF chassis.
Hewlett Packard Enterprise recommends installing the P816i-a controller to support 10 SAS/SATA drives. For more information, see Installing an HPE Smart Array P816i-a SR Gen10 Controller option on page 108.
Additional controller options are available. For more information, see the HPE DL360 Gen10 Server cabling matrix on the Hewlett Packard Enterprise website (http://www.hpe.com/info/CablingMatrixGen10).
In addition, be sure that you have the following:
The components included with the hardware option kit
T-10 Torx screwdriver
Additional cables, as needed. For more information, see SFF cables on page 144.
2 SFF SAS or SATA drives or blanks
For more information, contact a Hewlett Packard Enterprise authorized reseller.
Procedure
1. Back up all server data.
2. Power down the server (Power down the server on page 32).
3. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
4. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
5. Remove the access panel (Remove the access panel on page 33).
6. Remove the universal media bay blank.
7. Install the 2 SFF SAS/SATA drive cage.
8. Observe the following:
NOTE: The following information describes the standard cable routing for this component. For more
information on optional cable routing, see the HPE ProLiant DL360 Gen10 Server cabling matrix on the Hewlett Packard Enterprise website (http://www.hpe.com/info/CablingMatrixGen10).
9. Route and connect the data cable.
10. Route and connect the power cable.
11. Install the access panel (Install the access panel on page 33).
12. Install the server in the rack.
13. Connect each power cord to the server.
14. Connect each power cord to the power source.
15. Power up the server (Power up the server on page 32).
16. Install drives.

Installing a 2 SFF NVMe drive cage option

Prerequisites
Before installing this option, be sure that you have the following:
The components included with the hardware option kit
T-10 Torx screwdriver
T-15 Torx screwdriver
Additional cables, as needed. For more information, see SFF cables on page 144.
NVMe drives
For more information, contact a Hewlett Packard Enterprise authorized reseller.
Procedure
1. Back up all server data.
2. Power down the server (Power down the server on page 32).
3. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
4. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
5. Remove the access panel (Remove the access panel on page 33).
6. Use a T-10 Torx screwdriver to remove the universal media bay blank.
7. Use a T-10 Torx screwdriver to install the 2 SFF NVMe drive cage.
8. Remove the primary PCI riser cage (Remove the primary PCI riser cage on page 35).
9. Use a T-15 Torx screwdriver to remove the existing riser board.
10. Use a T-15 Torx screwdriver to install the riser provided in the kit in the primary PCI riser cage.
11. Observe the following:
NOTE: The following information describes the standard cable routing for this component. For more
information on optional cable routing, see the HPE ProLiant DL360 Gen10 Server cabling matrix on the Hewlett Packard Enterprise website (http://www.hpe.com/info/CablingMatrixGen10).
12. Route and connect the data cable.
13. Install the primary PCI riser cage.
14. Install the access panel (Install the access panel on page 33).
15. Install the server in the rack.
16. Connect each power cord to the server.
17. Connect each power cord to the power source.
18. Power up the server (Power up the server on page 32).
19. Install drives.

Installing a 2 SFF HPE Smart Carrier M.2 (SCM) drive cage

Prerequisites
Hewlett Packard Enterprise recommends installing the P816i-a controller to support more than eight SAS/SATA drives. Additional controller options are available. For more information, see the HPE DL360 Gen10 Server cabling matrix on the Hewlett Packard Enterprise website (http://www.hpe.com/info/CablingMatrixGen10).
Before installing this option, be sure that you have the following:
The components included with the hardware option kit
T-10 Torx screwdriver
Additional cables, as needed. For more information, see SFF cables on page 144.
2 SFF SAS/SATA drives, 4 uFF M.2 drives, or blanks
For more information, contact a Hewlett Packard Enterprise authorized reseller.
Procedure
1. Back up all server data.
2. Power down the server (Power down the server on page 32).
3. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
4. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
5. Remove the access panel (Remove the access panel on page 33).
6. Remove the universal media bay blank.
7. Install the drive cage.
8. Observe the following:
NOTE: The following information describes the standard cable routing for this component. For more
information on optional cable routing, see the HPE ProLiant DL360 Gen10 Server cabling matrix on the Hewlett Packard Enterprise website (http://www.hpe.com/info/CablingMatrixGen10).
9. Route and connect the data cable.
10. Route and connect the power cable.
11. Install the access panel (Install the access panel on page 33).
12. Install the server in the rack.
13. Connect each power cord to the server.
14. Connect each power cord to the power source.
15. Power up the server (Power up the server on page 32).
16. Install drives.

Installing an 8 SFF display port/USB/optical blank option

Prerequisites
Before installing this option, be sure that you have the following:
The components included with the hardware option kit
T-10 Torx screwdriver
An optical drive, if installing
For more information, contact a Hewlett Packard Enterprise authorized reseller.
Procedure
1. Back up all server data.
2. Power down the server (Power down the server on page 32).
3. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
4. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
5. Remove the access panel (Remove the access panel on page 33).
6. Remove the universal media bay blank.
7. Install the 8 SFF display port/USB/optical blank option.
8. Route and connect the data cable.
9. If needed, install an optical drive (Installing an 8 SFF optical drive on page 67).
10. Install the access panel (Install the access panel on page 33).
11. Install the server in the rack.
12. Connect each power cord to the server.
13. Connect each power cord to the power source.
14. Power up the server (Power up the server on page 32).

Installing the 4 LFF optical drive option

Prerequisites
Before installing this option, be sure that you have the following:
The components included with the hardware option kit
T-10 Torx screwdriver
LFF optical cable option kit
An optical drive
For more information, contact a Hewlett Packard Enterprise authorized reseller.
Procedure
1. Back up all server data.
2. Power down the server (Power down the server on page 32).
3. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
4. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
5. Remove the access panel (Remove the access panel on page 33).
6. Remove the LFF optical drive bay blank.
7. Install the optical drive.
8. Observe the following:
NOTE: The following information describes the standard cable routing for this component. For more
information on optional cable routing, see the HPE ProLiant DL360 Gen10 Server cabling matrix on the Hewlett Packard Enterprise website (http://www.hpe.com/info/CablingMatrixGen10).
9. Connect the optical drive cable to the optical drive backplane and to the SATA optical/storage drive
connector.
10. Install the access panel (Install the access panel on page 33).
11. Install the server in the rack.
12. Connect each power cord to the server.
13. Connect each power cord to the power source.
14. Power up the server (Power up the server on page 32).

Installing the rear drive riser cage option

The rear drive riser cage option supports low-profile PCI riser options in slot 2.
Prerequisites
Before installing this option, be sure you have the following:
The components included with the hardware option kit
T-10 and T-15 Torx screwdrivers
1 SFF drive, 2 uFF M.2 drives, or blanks
Procedure
1. Power down the server (Power down the server on page 32).
2. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
3. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
4. Remove the access panel (Remove the access panel on page 33).
5. Remove the primary PCI riser cage (Remove the primary PCI riser cage on page 35).
6. If installed, remove any expansion boards installed in the riser cage.
7. Remove the riser board. Set aside for later use.
8. If installed, remove the slot 2 bracket from the primary riser cage.
9. If needed, install the riser board removed in step 7 on the rear drive riser cage bracket.
10. Install the drive cage on the riser cage.
11. If needed, install the riser cage bracket on the rear drive riser cage.
12. If needed, install an expansion board (Installing an expansion board in the primary riser cage on
page 88).
13. Install the rear drive riser cage in the primary riser cage position.
14. Route and connect the data and power cables.
Hewlett Packard Enterprise recommends using embedded SATA solutions when connecting the cable. Other options exist. For more information, see the HPE DL360 Gen10 Server cabling matrix on the Hewlett Packard Enterprise website (http://www.hpe.com/info/CablingMatrixGen10).
15. Install drives or blanks (Drive options on page 61).
16. Install the access panel (Install the access panel on page 33).
17. Install the server in the rack.
18. Connect each power cord to the server.
19. Connect each power cord to the power source.
20. Power up the server (Power up the server on page 32).

Primary PCI riser cage options

The primary PCI riser cage supports the following:
Slot 1: Full-height, 3/4-length expansion boards (up to 9.5")
Slot 2:
Half-length, half-height expansion boards
3/4-length expansion boards when either a low-profile type-a controller or no controller is installed.

Installing an optional primary PCI riser board

Prerequisites
Before installing this option, be sure you have the following:
The components included with the hardware option kit
T-15 Torx screwdriver
Procedure
1. Back up all server data.
2. Power down the server (Power down the server on page 32).
3. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
4. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
5. Remove the access panel (Remove the access panel on page 33).
6. Remove the primary PCI riser cage (Remove the primary PCI riser cage on page 35).
7. If needed, remove any expansion boards installed in the riser cage.
8. Remove the existing riser board from the PCI riser cage.
9. Install the optional riser board in the riser cage.
10. Install the following, as needed:
Expansion boards (Installing an expansion board in the primary riser cage on page 88)
GPU (Installing an accelerator or GPU in the primary riser cage on page 90)
Controllers (Controller options on page 101)
11. Install the riser cage (Install the primary PCI riser cage on page 36).
12. Install the access panel (Install the access panel on page 33).
13. Install the server in the rack.
14. Connect each power cord to the server.
15. Connect each power cord to the power source.
16. Power up the server (Power up the server on page 32).

Installing the SATA M.2 2280 riser option

Prerequisites
Before installing this option, be sure that you have the following:
The components included with the hardware option kit
Up to two 2280 form factor M.2 drives
T-15 Torx screwdriver
Procedure
1. Power down the server (Power down the server on page 32).
2. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
3. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
4. Remove the access panel (Remove the access panel on page 33).
5. Remove the primary PCI riser cage (Remove the primary PCI riser cage on page 35).
6. If needed, remove any expansion boards installed in the riser cage.
7. Remove the existing riser board from the riser cage.
8. Install the M.2 riser board.
9. Remove the screw securing the standoff on the riser.
10. Install the M.2 drives.
11. Install the following, as needed:
Expansion boards (Installing an expansion board in the primary riser cage on page 88)
Controllers (Controller options on page 101)
12. Install the primary PCI riser cage (Install the primary PCI riser cage on page 36).
13. Install the access panel (Install the access panel on page 33).
14. Install the server in the rack.
15. Connect each power cord to the server.
16. Connect each power cord to the power source.
17. Power up the server (Power up the server on page 32).

Installing an expansion board in the primary riser cage

Prerequisites
Before installing this option, be sure you have the following:
The components included with the hardware option kit
T-10 Torx screwdriver
Procedure
1. Observe the following alerts:
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the
internal system components to cool before touching them.
2. Back up all server data.
3. Power down the server (Power down the server on page 32).
4. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
5. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
6. Remove the access panel (Remove the access panel on page 33).
7. Remove the primary PCI riser cage (Remove the primary PCI riser cage on page 35).
8. Remove the expansion slot blank.
9. Use a T-10 Torx screwdriver to install the expansion board.
10. Connect any required internal or external cables to the expansion board.
11. Install the primary PCI riser cage (Install the primary PCI riser cage on page 36).
12. Install the access panel (Install the access panel on page 33).
13. Install the server in the rack.
14. Connect each power cord to the server.
15. Connect each power cord to the power source.
16. Power up the server (Power up the server on page 32).
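To confirm that the new expansion board is enumerated after power-up, one approach is to capture the PCI device list before the installation and compare it with the list afterward. The following Python sketch illustrates the idea; it assumes a Linux host with the pciutils lspci utility installed, and the baseline handling is left as a placeholder.

# Minimal sketch: diff the PCI device list against a baseline captured before
# the installation. Assumes a Linux host with lspci (pciutils).
import subprocess

def pci_devices():
    """Return the set of PCI devices reported by lspci, one entry per device."""
    out = subprocess.run(["lspci", "-nn"], capture_output=True, text=True, check=True)
    return set(out.stdout.splitlines())

# Run pci_devices() before the installation and save the result; after
# power-up, load that baseline and diff it against the current list.
baseline = set()            # placeholder: load the saved pre-installation list
for line in sorted(pci_devices() - baseline):
    print("new device:", line)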

Installing an accelerator or GPU in the primary riser cage

Use these instructions to install accelerator options, including GPUs, in the server.
Prerequisites
This option requires the standard primary PCI riser cage.
Before installing this option, be sure that the power supplies support it. For more information, see the Hewlett Packard Enterprise Configurator website. One way to review the installed power supplies is shown after this list.
In addition, be sure that you have the following items:
The components included with the hardware option kit
HPE DL360 Gen10 CPU1 Cable Kit (if installing a high-powered GPU kit)
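The following Python sketch is one way to review the installed power supplies before fitting a high-powered GPU. It reads the standard Redfish Power resource from iLO 5; the hostname and credentials are placeholders, and the wattage required depends on the GPU option being installed.

# Minimal sketch: list the installed power supplies and their capacity
# through the iLO 5 Redfish API. Hostname and credentials are placeholders.
import requests

ILO_HOST = "https://ilo-example.local"
ILO_AUTH = ("admin", "password")

resp = requests.get(
    ILO_HOST + "/redfish/v1/Chassis/1/Power",
    auth=ILO_AUTH,
    verify=False,   # replace with the iLO CA bundle in production
)
resp.raise_for_status()
for psu in resp.json().get("PowerSupplies", []):
    print(psu.get("Name"), psu.get("Model"),
          "capacity:", psu.get("PowerCapacityWatts"), "W")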
Procedure
1. Observe the following alerts:
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the
internal system components to cool before touching them.
2. Back up all server data.
3. Power down the server (Power down the server on page 32).
4. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
5. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
6. Remove the access panel (Remove the access panel on page 33).
7. Remove the primary PCI riser cage (Remove the primary PCI riser cage on page 35).
8. Install the card in the x16 slot in the primary PCI riser cage position.
9. If installing a GPU requiring greater than 75 W, connect the power cable to the primary riser power
connector.
10. If the card requires rear support, install the GPU support bracket.
11. Install the riser cage (Install the primary PCI riser cage on page 36).
12. Install the access panel (Install the access panel on page 33).
13. Install the server in the rack.
14. Connect each power cord to the server.
15. Connect each power cord to the power source.
16. Power up the server (Power up the server on page 32).
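After power-up, you can confirm that the accelerator or GPU is enumerated on the PCIe bus. The following Python sketch is one way to do this on a Linux host; it assumes the pciutils lspci utility is installed and filters on the device classes that GPUs and accelerators commonly report.

# Minimal sketch: list PCIe devices whose class suggests a GPU or accelerator.
# Assumes a Linux host with lspci (pciutils).
import subprocess

out = subprocess.run(["lspci", "-nn"], capture_output=True, text=True, check=True)
for line in out.stdout.splitlines():
    if any(cls in line for cls in ("VGA compatible controller",
                                   "3D controller",
                                   "Processing accelerators")):
        print(line)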

Secondary PCI riser options

Installing a secondary full-height PCI riser cage option

When installed, this riser cage supports full-height, 3/4-length expansion boards up to 9.5". PCIe3 slot 2 is no longer available.
Prerequisites
This option requires a dual processor configuration. One way to verify the populated processor sockets is shown after this list.
Before installing this option, be sure you have the following:
The components included with the hardware option kit
Any expansion boards or controllers you plan to install
T-10 Torx screwdriver
T-15 Torx screwdriver
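The following Python sketch is one way to confirm the dual processor prerequisite on a running server before the upgrade. It parses the socket count reported by the util-linux lscpu utility on a Linux host; similar information is also available through iLO.

# Minimal sketch: report the number of populated processor sockets.
# Assumes a Linux host with lscpu (util-linux).
import subprocess

out = subprocess.run(["lscpu"], capture_output=True, text=True, check=True)
sockets = 0
for line in out.stdout.splitlines():
    if line.startswith("Socket(s):"):
        sockets = int(line.split(":", 1)[1].strip())

print("populated processor sockets:", sockets)
if sockets < 2:
    print("Install a second processor before adding this riser cage option.")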
Procedure
1. Power down the server (Power down the server on page 32).
2. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
3. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
4. Remove the access panel (Remove the access panel on page 33).
5. Remove the primary PCI riser cage (Remove the primary PCI riser cage on page 35).
6. Use a T-10 Torx screwdriver to remove the slot 2 bracket from the primary riser cage.
7. If installed, remove the low-profile riser cage.
8. Lift and remove the secondary riser cage latch.
Use a T-15 Torx screwdriver to remove the riser cage screw.
9. Install the full-height PCIe x16 riser cage latch.
Use a T-15 Torx screwdriver to install the riser cage screw.
10. Install the riser cage.
11. Install one of the following, as needed:
Expansion boards (Installing an expansion board in the secondary riser cage on page 96)
GPU (Installing an accelerator or GPU in the secondary riser cage on page 99)
Controllers (Controller options on page 101)
12. Install the access panel (Install the access panel on page 33).
13. Install the server in the rack.
14. Connect each power cord to the server.
15. Connect each power cord to the power source.
16. Power up the server (Power up the server on page 32).

Installing a secondary low-profile PCIe slot riser cage option

When installed, this riser cage provides an additional low-profile slot and supports half-length, half-height expansion boards.
Prerequisites
This option requires a dual processor configuration.
Before installing this option, be sure that you have the following:
The components included with the hardware option kit
Procedure
1. Observe the following alerts:
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the
internal system components to cool before touching them.
2. Back up all server data.
3. Power down the server (Power down the server on page 32).
4. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
5. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
6. Remove the access panel (Remove the access panel on page 33).
7. Install the secondary low-profile PCIe slot riser cage.
8. Install one of the following, as needed:
Expansion boards (Installing an expansion board in the secondary riser cage on page 96)
Controllers (Controller options on page 101)
9. Install the access panel (Install the access panel on page 33).
10. Install the server into the rack (Installing the server into the rack on page 46).
11. Connect each power cord to the server.
12. Connect each power cord to the power source.
13. Power up the server (Power up the server on page 32).

Installing an expansion board in the secondary riser cage

Prerequisites
Before installing this option, be sure that you have the following:
The components included with the hardware option kit
T-10 Torx screwdriver
Procedure
1. Observe the following alerts:
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the
internal system components to cool before touching them.
2. Back up all server data.
3. Power down the server (Power down the server on page 32).
4. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
5. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
6. Remove the access panel (Remove the access panel on page 33).
7. Install a secondary riser cage:
Low-profile (Installing a secondary low-profile PCIe slot riser cage option on page 95)
Full-height (Installing a secondary full-height PCI riser cage option on page 91)
8. Remove the expansion slot blank:
A T-10 Torx screwdriver is required to remove the expansion slot blank.
Half-length
Full-length
9. Install the expansion board.
10. Connect any required internal or external cables to the expansion board.
11. Install the access panel (Install the access panel on page 33).
12. Install the server in the rack.
13. Connect each power cord to the server.
14. Connect each power cord to the power source.
15. Power up the server (Power up the server on page 32).

Installing an accelerator or GPU in the secondary riser cage

Use these instructions to install accelerator options, including GPUs, in the server.
Prerequisites
When installing a 3/4-length GPU, a low-profile type-a controller must be installed.
Before installing this option, do the following:
Be sure that the power supplies support the installation of this option. For more information, see the Hewlett Packard Enterprise Configurator website.
Be sure that you have the following items:
The components included with the GPU enablement option kit
T-15 Torx screwdriver
Procedure
1. Observe the following alerts:
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the
internal system components to cool before touching them.
2. Back up all server data.
3. Power down the server (Power down the server on page 32).
4. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
5. Do one of the following:
a. Extend the server from the rack (Extend the server from the rack on page 32).
b. Remove the server from the rack (Remove the server from the rack on page 33).
6. Remove the access panel (Remove the access panel on page 33).
7. If installed, remove the low-profile riser cage.
8. Install the secondary full-height PCI riser cage (Installing a secondary full-height PCI riser cage
option on page 91).
9. Remove the existing rear guide bracket from the card, if installed.
10. If installing a 3/4-length GPU, install the bracket supplied in the kit.
A T-15 Torx screwdriver is required to install the bracket.