This document is for the person who installs, administers, and troubleshoots servers and storage
systems. Hewlett Packard Enterprise assumes you are qualified in the servicing of computer
equipment and trained in recognizing hazards in products with hazardous energy levels.
Part Number: 868990-007
Published: August 2019
Edition: 7
Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard
Enterprise products and services are set forth in the express warranty statements accompanying such
products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett
Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use,
or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software
Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under
vendor's standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard
Enterprise has no control over and is not responsible for information outside the Hewlett Packard Enterprise
website.
Acknowledgments
Intel® and Xeon® are trademarks of Intel Corporation in the United States and other countries.
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the
United States and/or other countries.
4     Serial label pull tab or optional Systems Insight Display
5     iLO service port
6     USB 3.0 port
Universal media bay components
Item  Description
1     USB 2.0 port
2     Video display port
3     Optical disk drive (optional)
4     Drives (optional)
12-drive LFF front panel components
Item  Description
1     Drive bays
8-drive LFF model front panel components
Item  Description
1     Drives (optional)
2     LFF power switch module
3     Drive bays
LFF power switch module components
Item  Description
1     Optical disk drive
2     Serial label pull tab
3     USB 3.0 port
4     iLO service port
5     Video display port
Front panel LEDs and buttons
SFF front panel LEDs and button
1. Power On/Standby button and system power LED*
   • Solid green = System on
   • Flashing green (1 Hz/cycle per sec) = Performing power on sequence
   • Solid amber = System in standby
   • Off = No power present†

2. Health LED*
   • Solid green = Normal
   • Flashing green (1 Hz/cycle per sec) = iLO is rebooting
   • Flashing amber = System degraded
   • Flashing red (1 Hz/cycle per sec) = System critical**
3. NIC status LED*
   • Solid green = Link to network
   • Flashing green (1 Hz/cycle per sec) = Network active
   • Off = No network activity

4. UID button/LED*
   • Solid blue = Activated
   • Flashing blue:
     ◦ 1 Hz/cycle per sec = Remote management or firmware upgrade in progress
     ◦ 4 Hz/cycle per sec = iLO manual reboot sequence initiated
     ◦ 8 Hz/cycle per sec = iLO manual reboot sequence in progress
   • Off = Deactivated
*When all four LEDs described in this table flash simultaneously, a power fault has occurred. For more
information, see "Power fault LEDs."
**If the health LED indicates a degraded or critical state, review the system IML or use iLO to review the
system health status.
†Facility power is not present, power cord is not attached, no power supplies are installed, power supply
failure has occurred, or the power button cable is disconnected.
LFF 12-drive model front panel LEDs and button
1. Health LED*
   • Solid green = Normal
   • Flashing green (1 Hz/cycle per sec) = iLO is rebooting
   • Flashing amber = System degraded
   • Flashing red (1 Hz/cycle per sec) = System critical**

2. Power On/Standby button and system power LED*
   • Solid green = System on
   • Flashing green (1 Hz/cycle per sec) = Performing power on sequence
   • Solid amber = System in standby
   • Off = No power present†

3. NIC status LED*
   • Solid green = Link to network
   • Flashing green (1 Hz/cycle per sec) = Network active
   • Off = No network activity

4. UID button/LED*
   • Solid blue = Activated
   • Flashing blue:
     ◦ 1 Hz/cycle per sec = Remote management or firmware upgrade in progress
     ◦ 4 Hz/cycle per sec = iLO manual reboot sequence initiated
     ◦ 8 Hz/cycle per sec = iLO manual reboot sequence in progress
   • Off = Deactivated
*When all four LEDs described in this table flash simultaneously, a power fault has occurred. For more
information, see "Power fault LEDs."
**If the health LED indicates a degraded or critical state, review the system IML or use iLO to review the
system health status.
†Facility power is not present, power cord is not attached, no power supplies are installed, power supply
failure has occurred, or the power button cable is disconnected.
LFF power switch module LEDs and button
1. UID button/LED*
   • Solid blue = Activated
   • Flashing blue:
     ◦ 1 Hz/cycle per sec = Remote management or firmware upgrade in progress
     ◦ 4 Hz/cycle per sec = iLO manual reboot sequence initiated
     ◦ 8 Hz/cycle per sec = iLO manual reboot sequence in progress
   • Off = Deactivated

2. Health LED*
   • Solid green = Normal
   • Flashing green (1 Hz/cycle per sec) = iLO is rebooting
   • Flashing amber = System degraded
   • Flashing red (1 Hz/cycle per sec) = System critical**

3. NIC status LED*
   • Solid green = Link to network
   • Flashing green (1 Hz/cycle per sec) = Network active
   • Off = No network activity

4. Power On/Standby button and system power LED*
   • Solid green = System on
   • Flashing green (1 Hz/cycle per sec) = Performing power on sequence
   • Solid amber = System in standby
   • Off = No power present†
*When all four LEDs described in this table flash simultaneously, a power fault has occurred. For more
information, see "Power fault LEDs."
**If the health LED indicates a degraded or critical state, review the system IML or use iLO to review the
system health status.
†Facility power is not present, power cord is not attached, no power supplies are installed, power supply
failure has occurred, or the power button cable is disconnected.
UID button functionality
The UID button can be used to display the Server Health Summary when the server will not power on. For
more information, see the latest HPE iLO 5 User Guide on the Hewlett Packard Enterprise website.
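The UID LED can also be driven remotely. The following is a minimal sketch only, not an HPE-supplied tool: it uses the standard Redfish IndicatorLED property exposed by iLO 5, and the host name and credentials are hypothetical placeholders.

# Sketch: toggle the server UID LED through the iLO 5 Redfish API.
# ILO_HOST and AUTH are hypothetical placeholders; verify=False is for
# lab use only.
import requests

ILO_HOST = "https://ilo.example.net"
AUTH = ("admin", "password")

def set_uid(state):
    # Valid Redfish states include "Lit", "Off", and "Blinking".
    resp = requests.patch(
        ILO_HOST + "/redfish/v1/Systems/1/",
        json={"IndicatorLED": state},
        auth=AUTH,
        verify=False,
    )
    resp.raise_for_status()

set_uid("Lit")  # light the UID LED so the server is easy to find in the rack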
Front panel LED power fault codes
The following table lists the power fault codes and the subsystems that are affected. Not all power faults are used by all servers.

Subsystem                                       LED behavior
System board                                    1 flash
Processor                                       2 flashes
Memory                                          3 flashes
Riser board PCIe slots                          4 flashes
FlexibleLOM                                     5 flashes
Removable HPE Smart Array SR Gen10 controller   6 flashes
System board PCIe slots                         7 flashes
Power backplane or storage backplane            8 flashes
Power supply                                    9 flashes
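For scripted diagnostics or runbooks, the table maps directly to a lookup. A minimal sketch (the function name is ours, not an HPE tool):

# Sketch: translate front panel power fault flash counts to subsystems,
# per the table above.
POWER_FAULT_SUBSYSTEM = {
    1: "System board",
    2: "Processor",
    3: "Memory",
    4: "Riser board PCIe slots",
    5: "FlexibleLOM",
    6: "Removable HPE Smart Array SR Gen10 controller",
    7: "System board PCIe slots",
    8: "Power backplane or storage backplane",
    9: "Power supply",
}

def describe_power_fault(flashes):
    return POWER_FAULT_SUBSYSTEM.get(flashes, "Unknown fault code")

print(describe_power_fault(3))  # -> Memory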
Systems Insight Display LEDs
The Systems Insight Display LEDs represent the system board layout. The display enables diagnosis with the
access panel installed.
Processor LEDs
   • Off = Normal
   • Amber = Failed processor

DIMM LEDs
   • Off = Normal
   • Amber = Failed DIMM or configuration issue

Fan LEDs
   • Off = Normal
   • Amber = Failed fan or missing fan

NIC LEDs¹
   • Off = No link to network
   • Solid green = Network link
   • Flashing green = Network link with activity

Power supply LEDs
   If power is off, the front panel LED is not active. For status, see Rear panel LEDs on page 18.
   • Off = Normal
   • Solid amber = Power subsystem degraded, power supply failure, or input power lost

PCI riser LED
   • Off = Normal
   • Amber = Incorrectly installed PCI riser cage

Over temp LED
   • Off = Normal
   • Amber = High system temperature detected

Amp Status LED
   • Off = AMP modes disabled
   • Solid green = AMP mode enabled
   • Solid amber = Failover
   • Flashing amber = Invalid configuration

Power cap LED
   • Off = System is in standby, or no cap is set
   • Solid green = Power cap applied

¹ For Networking Choice server models, the server is not equipped with embedded NIC ports. Therefore, the NIC LEDs on the Systems Insight Display will flash based on the FlexibleLOM network port activity. In the case of a dual-port FlexibleLOM, only NIC LEDs 1 and 2 will illuminate to correspond with the activity of the respective network ports.

When the health LED on the front panel illuminates either amber or red, the server is experiencing a health event. For more information on the combination of these LEDs, see Systems Insight Display combined LED descriptions on page 16.
Systems Insight Display combined LED descriptions
The combined illumination of the following LEDs indicates a system condition:
•Systems Insight Display LEDs
•System power LED
•Health LED
Systems Insight Display LED and color | Health LED | System power LED | Status

Processor (amber) | Red | Amber
   One or more of the following conditions may exist:
   • Processor in socket X has failed.
   • Processor X is not installed in the socket.
   • Processor X is unsupported.
   • ROM detects a failed processor during POST.

Processor (amber) | Amber | Green
   Processor in socket X is in a pre-failure condition.

DIMM (amber) | Red | Green
   One or more DIMMs have failed.

DIMM (amber) | Amber | Green
   DIMM in slot X is in a pre-failure condition.

Over temp (amber) | Amber | Green
   The Health Driver has detected a cautionary temperature level.

Over temp (amber) | Red | Amber
   The server has detected a hardware critical temperature level.

PCI riser (amber) | Red | Green
   The PCI riser cage is not seated properly.

Fan (amber) | Amber | Green
   One fan has failed or has been removed.

Fan (amber) | Red | Green
   Two or more fans have failed or been removed.

Power supply (amber) | Red | Amber
   One or more of the following conditions may exist:
   • Only one power supply is installed and that power supply is in standby.
   • Power supply fault
   • System board fault

Power supply (amber) | Amber | Green
   One or more of the following conditions may exist:
   • Redundant power supply is installed and only one power supply is functional.
   • AC power cord is not plugged into redundant power supply.
   • Redundant power supply fault
   • Power supply mismatch at POST or power supply mismatch through hot-plug addition

Power cap (off) | - | Amber
   Standby

Power cap (green) | - | Flashing green
   Waiting for power

Power cap (green) | - | Green
   Power is available.

Power cap (flashing amber) | - | Amber
   Power is not available.

IMPORTANT: If more than one DIMM slot LED is illuminated, further troubleshooting is required. Test each bank of DIMMs by removing all other DIMMs. Isolate the failed DIMM by replacing each DIMM in a bank with a known working DIMM.
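For troubleshooting automation, the combined-LED table can be encoded as a lookup keyed on the three LEDs. A partial sketch (only a few rows shown; the names are ours, not an HPE tool):

# Sketch: a subset of the combined-LED table above, keyed by
# (SID LED and color, health LED, system power LED).
COMBINED_LED_STATUS = {
    ("Processor (amber)", "Red", "Amber"):
        "Processor failed, missing, unsupported, or failed during POST",
    ("Processor (amber)", "Amber", "Green"):
        "Processor in socket X is in a pre-failure condition",
    ("DIMM (amber)", "Red", "Green"): "One or more DIMMs have failed",
    ("DIMM (amber)", "Amber", "Green"):
        "DIMM in slot X is in a pre-failure condition",
    ("Fan (amber)", "Amber", "Green"): "One fan has failed or has been removed",
    ("Fan (amber)", "Red", "Green"): "Two or more fans have failed or been removed",
}

def combined_led_status(sid_led, health, power):
    return COMBINED_LED_STATUS.get((sid_led, health, power), "See full table")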
Rear panel components
Item  Description
1     Primary riser slots 1-3 (Optional drive cage)
2     Optional riser slots 4-6 (Optional drive cage)
3     Optional riser slots 7-8 (Optional drive cage)
4     Power supply 1
5     Power supply 2
6     Video port
7     Serial port (optional)*
8     1Gb RJ-45 ports 1-4 (if equipped)
9     iLO management port
10    USB 3.0 ports
11    FlexibleLOM slot
*When a tertiary riser cage is installed as shown, the serial port can be installed in riser slot 6.
Rear panel LEDs
1. UID LED
   • Off = Deactivated
   • Solid blue = Activated
   • Flashing blue = System being managed remotely

2. Link LED
   • Off = No network link
   • Green = Network link

3. Activity LED
   • Off = No network activity
   • Solid green = Link to network
   • Flashing green = Network activity

4. Power supply LEDs
   • Off = System is off or power supply has failed.
   • Solid green = Normal
System board components
Item  Description
1     FlexibleLOM connector
2     System maintenance switch
3     Primary PCIe riser connector
4     Front display port/USB 2.0 connector
5     x4 SATA port 1
6     x4 SATA port 2
7     x2 SATA port 3
8     x1 SATA port 4
9     Optical disk drive/SATA port 5
10    Power switch/SID module connector
11    Drive backplane power connectors
12    Energy pack connector
13    Chassis intrusion detection connector
14    Drive backplane power connector
15    MicroSD card slot
16    Dual internal USB 3.0 ports
17    Type-a Smart Array connector
18    Secondary PCIe riser connector*
19    System battery
20    Tertiary PCIe riser connector*
21    TPM connector
22    Serial port connector (optional)
* Requires a second processor
System maintenance switch descriptions
Position   Default   Function
S1¹        Off       Off = iLO security is enabled. On = iLO security is disabled.
S2         Off       Reserved
S3         Off       Reserved
S4         Off       Reserved
S5¹        Off       Off = Power-on password is enabled. On = Power-on password is disabled.
S6¹ ² ³    Off       Off = No function. On = Restore default manufacturing settings.
S7         Off       Reserved
S8         -         Reserved
S9         -         Reserved
S10        -         Reserved
S11        -         Reserved
S12        -         Reserved

¹ To access the redundant ROM, set S1, S5, and S6 to On.
² When the system maintenance switch position 6 is set to the On position, the system is prepared to restore all configuration settings to their manufacturing defaults.
³ When the system maintenance switch position 6 is set to the On position and Secure Boot is enabled, some configurations cannot be restored. For more information, see Secure Boot on page 179.
DIMM label identification
To determine DIMM characteristics, see the label attached to the DIMM. The information in this section helps
you to use the label to locate specific information about the DIMM.
Item  Description            Example
1     Capacity               8 GB, 16 GB, 32 GB, 64 GB, 128 GB
2     Rank                   1R = Single rank, 2R = Dual rank, 4R = Quad rank, 8R = Octal rank
3     Data width on DRAM     x4 = 4-bit, x8 = 8-bit, x16 = 16-bit
4     Memory generation      PC4 = DDR4
5     Maximum memory speed   2133 MT/s, 2400 MT/s, 2666 MT/s, 2933 MT/s
6     CAS latency            P = CAS 15-15-15
                             T = CAS 17-17-17
                             U = CAS 20-18-18
                             V = CAS 19-19-19 (for RDIMM, LRDIMM)
                             V = CAS 22-19-19 (for 3DS TSV LRDIMM)
                             Y = CAS 21-21-21 (for RDIMM, LRDIMM)
                             Y = CAS 24-21-21 (for 3DS TSV LRDIMM)
7     DIMM type              R = RDIMM (registered)
                             L = LRDIMM (load reduced)
                             E = Unbuffered ECC (UDIMM)

For more information about product features, specifications, options, configurations, and compatibility, see the HPE DDR4 SmartMemory QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/support/DDR4SmartMemoryQS).
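The label fields decode mechanically. As an illustrative sketch only (the regular expression reflects the field order described above and is not an official HPE parsing rule), a label such as 16GB 2Rx4 PC4-2933Y-R can be split into its parts:

# Sketch: decode the DIMM label fields described in the table above.
import re

LABEL_RE = re.compile(
    r"(?P<capacity>\d+GB)\s+"        # item 1: capacity
    r"(?P<rank>\d)Rx(?P<width>\d+)\s+"  # items 2-3: rank and data width
    r"PC4-(?P<speed>\d+)"            # items 4-5: generation and speed
    r"(?P<cas>[PTUVY])-(?P<type>[RLE])"  # items 6-7: CAS letter and DIMM type
)

def decode_dimm_label(label):
    m = LABEL_RE.match(label)
    if not m:
        raise ValueError("unrecognized label: " + label)
    return m.groupdict()

print(decode_dimm_label("16GB 2Rx4 PC4-2933Y-R"))
# {'capacity': '16GB', 'rank': '2', 'width': '4', 'speed': '2933',
#  'cas': 'Y', 'type': 'R'}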
DIMM slot locations
DIMM slots are numbered sequentially (1 through 12) for each processor.
NVDIMM identification
NVDIMM boards are blue instead of green. This change to the color makes it easier to distinguish NVDIMMs
from DIMMs.
To determine NVDIMM characteristics, see the full product description as shown in the following example:
Item  Description                Definition
1     Capacity                   16 GiB
2     Rank                       1R (single rank)
3     Data width per DRAM chip   x4 (4-bit)
4     Memory type                NN4 = DDR4 NVDIMM-N
5     Maximum memory speed       2667 MT/s
6     Speed grade                V (latency 19-19-19)
7     DIMM type                  RDIMM (registered)
8     Other                      -
For more information about NVDIMMs, see the product QuickSpecs on the Hewlett Packard Enterprise
website (http://www.hpe.com/info/qs).
NVDIMM 2D Data Matrix barcode
The 2D Data Matrix barcode is on the right side of the NVDIMM label and can be scanned by a cell phone or
other device.
When scanned, the information from the label can be copied to your cell phone or device.

The NVDIMM-N Power LED (blue) indicates that the 12V rail is on and the NVDIMM-N is active (backup and restore).
NVDIMM Function LED patterns
For the purpose of this table, the NVDIMM-N LED operates as follows:

• Solid indicates that the LED remains in the on state.
• Flashing indicates that the LED is on for 2 seconds and off for 1 second.
• Fast-flashing indicates that the LED is on for 300 ms and off for 300 ms.

State  Definition                                                NVDIMM-N Function LED
0      The restore operation is in progress.                     Flashing
1      The restore operation is successful.                      Solid or On
2      Erase is in progress.                                     Flashing
3      The erase operation is successful.                        Solid or On
4      The NVDIMM-N is armed, and the NVDIMM-N is in normal      Solid or On
       operation.
5      The save operation is in progress.                        Flashing
6      The NVDIMM-N finished saving and battery is still turned  Solid or On
       on (12 V still powered).
7      The NVDIMM-N has an internal error or a firmware update   Fast-flashing
       is in progress. For more information about an NVDIMM-N
       internal error, see the IML.

HPE Persistent Memory module label identification
Item  Description        Example
1     Unique ID number   8089-A2-1802-1234567
2     Model number       NMA1XBD512G2S
3     Capacity           128 GB, 256 GB, 512 GB
4     QR code            Includes part number and serial number

For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/support/persistentmemoryQS).
Processor, heatsink, and socket components
Item  Description
1     Heatsink nuts
2     Processor carrier
3     Pin 1 indicator¹
4     Heatsink latch
5     Alignment post

¹ Symbol also on the processor and frame.

Drives
SAS/SATA drive components and LEDs
1. Locate
   • Solid blue = The drive is being identified by a host application.
   • Flashing blue = The drive carrier firmware is being updated or requires an update.

2. Activity ring LED
   • Rotating green = Drive activity.
   • Off = No drive activity.

3. Do not remove LED
   • Solid white = Do not remove the drive. Removing the drive causes one or more of the logical drives to fail.
   • Off = Removing the drive does not cause a logical drive to fail.

4. Drive status LED
   • Solid green = The drive is a member of one or more logical drives.
   • Flashing green = The drive is rebuilding or performing a RAID migration, stripe size migration, capacity expansion, or logical drive extension, or is erasing.
   • Flashing amber/green = The drive is a member of one or more logical drives and predicts the drive will fail.
   • Flashing amber = The drive is not configured and predicts the drive will fail.
   • Solid amber = The drive has failed.
   • Off = The drive is not configured by a RAID controller.
NVMe SSD LED definitions
The NVMe SSD is a PCIe bus device. A device attached to a PCIe bus must not be removed until the device and bus have completed and ceased signal/traffic flow.
CAUTION: Do not remove an NVMe SSD from the drive bay while the Do not remove LED is flashing.
The Do not remove LED flashes to indicate that the device is still in use. Removing the NVMe SSD
before the device has completed and ceased signal/traffic flow can cause loss of data.
1. Locate
   • Solid blue = The drive is being identified by a host application.
   • Flashing blue = The drive carrier firmware is being updated or requires an update.

2. Activity ring
   • Rotating green = Drive activity
   • Off = No drive activity

3. Drive status
   • Solid green = The drive is a member of one or more logical drives.
   • Flashing green = The drive is doing one of the following: rebuilding, performing a RAID migration, performing a stripe size migration, performing a capacity expansion, performing a logical drive extension, or erasing.
   • Flashing amber/green = The drive is a member of one or more logical drives and predicts the drive will fail.
   • Flashing amber = The drive is not configured and predicts the drive will fail.
   • Solid amber = The drive has failed.
   • Off = The drive is not configured by a RAID controller.

4. Do not remove
   • Solid white = Do not remove the drive. The drive must be ejected from the PCIe bus prior to removal.
   • Flashing white = The drive ejection request is pending.
   • Off = The drive has been ejected.

5. Power
   • Solid green = Do not remove the drive. The drive must be ejected from the PCIe bus prior to removal.
   • Flashing green = The drive ejection request is pending.
   • Off = The drive has been ejected.
uFF drive components and LEDs
1. Locate
   • Off = Normal
   • Solid blue = The drive is being identified by a host application.
   • Flashing blue = The drive firmware is being updated or requires an update.

2. uFF drive ejection latch
   Removes the uFF drive when released.

3. Do not remove LED
   • Off = OK to remove the drive. Removing the drive does not cause a logical drive to fail.
   • Solid white = Do not remove the drive. Removing the drive causes one or more of the logical drives to fail.

4. Drive status LED
   • Off = The drive is not configured by a RAID controller.
   • Solid green = The drive is a member of one or more logical drives.
   • Flashing green (4 Hz) = The drive is operating normally and has activity.
   • Flashing green (1 Hz) = The drive is rebuilding or performing a RAID migration, stripe size migration, capacity expansion, or logical drive extension, or is erasing.
   • Flashing amber/green (1 Hz) = The drive is a member of one or more logical drives that predicts the drive will fail.
   • Solid amber = The drive has failed.
   • Flashing amber (1 Hz) = The drive is not configured and predicts the drive will fail.

5. Adapter ejection release latch and handle
   Removes the SFF flash adapter when released.

Fan bay numbering
Drive box identification
Front boxes

Item  Description
1     Box 1
2     Box 2
3     Box 3

Rear boxes

Item  Description
1     Box 4
2     Box 5
3     Box 6

Item  Description
1     Box 4
2     Box 6

Midplane box (LFF only)

Item  Description
1     Box 7
Drive bay numbering
Drive bay numbering depends on how the drive backplanes are connected:

• To a controller:
  ◦ Embedded controllers use the onboard SATA ports.
  ◦ Type-a controllers install to the type-a Smart Array connector.
  ◦ Type-p controllers install to a PCIe riser.

• To a SAS expander, which installs in the primary or secondary PCIe riser.
Drive bay numbering: Smart Array controller
When the drive backplane is connected directly to a storage controller, drive numbering in each drive box starts at 1.
Drive bay numbering: SAS expander
Drive numbering through a SAS Expander is continuous.
• SAS expander port 1 always connects to port 1 of the controller.
• SAS expander port 2 always connects to port 2 of the controller.
• SAS expander port 3 = drive numbers 1-4.
• SAS expander port 4 = drive numbers 5-8.
• SAS expander port 5 = drive numbers 9-12.
• SAS expander port 6 = drive numbers 13-16.
• SAS expander port 7 = drive numbers 17-20.
• SAS expander port 8 = drive numbers 21-24.
• SAS expander port 9 = drive numbers 25-28.
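The port-to-drive mapping above is simple arithmetic: ports 3 through 9 each carry four consecutive drive numbers starting at 1. A small sketch (the function name is ours):

# Sketch of the rule above: expander ports 3-9 map to four consecutive
# drive numbers each, starting at drive 1 on port 3.
def expander_port_drives(port):
    if not 3 <= port <= 9:
        raise ValueError("drive bays hang off SAS expander ports 3-9")
    first = (port - 3) * 4 + 1
    return range(first, first + 4)

print(list(expander_port_drives(3)))  # [1, 2, 3, 4]
print(list(expander_port_drives(9)))  # [25, 26, 27, 28]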
Common configuration examples follow.

When any stacked 2SFF drive configuration is connected to the SAS expander, the drive numbering skips the second number to allow for uFF drive bay numbering (see uFF drive bay numbering on page 37).
• Front 2SFF to SAS expander port 3
• Rear 2SFF to SAS expander port 9
• Front 2SFF side-by-side (unstacked) to SAS expander port 3
• Rear 3LFF to SAS expander port 9
• Mid 4LFF to SAS expander port 6
• Front 12LFF + Midplane 4LFF + All rear 2SFF
Drive bay numbering: NVMe drives
If the server is populated with NVMe drives and NVMe risers:
uFF drive bay numbering
There are two uFF drives in each drive carrier.

If the drives are connected to a controller:

• The left bay = the default bay number of the server
• The right bay = the default bay number of the server + 100

If the drives are connected to a SAS expander, the numbering follows the expander port. For example:

• If the drives are connected to port 3 of the SAS expander, then the uFF drives are 1-4.
• If the drives are connected to port 9 of the SAS expander, then the uFF drives are 25-28.
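A sketch of the controller-attached rule above (the function name is ours, not an HPE tool):

# Sketch: uFF bay numbers for a carrier attached to a controller.
# The left bay keeps the server default bay number; the right bay
# adds 100, per the rules above.
def uff_bay_numbers(default_bay):
    return (default_bay, default_bay + 100)

print(uff_bay_numbers(1))  # -> (1, 101)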
Riser components
4-port NVMe Slimline riser
Item  Description
1-4   x8 Slimline NVMe connectors
Three-slot with NVMe Slimline riser
Item  Description
1     x8 Slimline NVMe connector
2     Controller backup power connectors (3)
3-5   x8 PCIe slots
Three-slot with M.2 riser
Item  Description
1     GPU power cable connector
2     Controller backup power connectors (3)
3     M.2 SSD drive connectors¹
4     x8 PCIe slot
5     x16 PCIe slot
6     x8 PCIe slot

¹ The riser supports installation of a second M.2 SSD drive on the reverse side.
Three-slot GPU riser
Item  Description
1     GPU power cable connector
2     Controller backup power connectors (3)
3     x8 PCIe slot
4     x16 PCIe slot
5     x8 PCIe slot
Two-slot GPU riser
Item  Description
1     GPU power cable connector
2     Controller backup power connectors (2)
3     x16 PCIe slot
4     x16 PCIe slot
Two-slot x8 riser (tertiary)
Item  Description
1     x8 PCIe slot
2     x8 PCIe slot
3     Controller backup power connectors (2)
x8 riser (tertiary)
Item  Description
1     x8 PCIe slot
2     x8 Slimline NVMe connector
3     Controller backup power connector
Dual Slimline riser (tertiary)
Item  Description
1     x8 Slimline NVMe connector
2     x8 Slimline NVMe connector
HPE Flex Slot Power Supply with Integrated Battery Backup Unit components and LED
1. Battery check button
2. Power LED
For more information about the HPE Flex Slot Power Supply with Integrated Battery Backup Unit, see the
document that ships with the component.
The label on the component indicates that the flex slot power supply has an integrated battery backup module.
Figure 1: HPE Flex Slot Power Supply with Integrated Battery Backup Unit label
Checking the battery backup charge level
Procedure
1. Using a ball tip pen, press and release the battery check button.
After releasing the button, you might have to wait up to seven seconds before the LED starts flashing.
2. Note the number of LED flashes and reference the following table.
Flashes  Battery state (RSOC¹)
0        Battery bad/failed
1        RSOC <= 29%
2        30% <= RSOC <= 62%
3        63% <= RSOC <= 94%
4        95% <= RSOC

¹ Relative State of Charge
The battery will fully charge within one hour of being installed into the server.
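The flash counts map to RSOC bands; a minimal sketch for a monitoring note or runbook (names are ours):

# Sketch: translate battery check button flashes into the RSOC bands
# from the table above.
RSOC_BANDS = {
    0: "Battery bad/failed",
    1: "RSOC <= 29%",
    2: "30% <= RSOC <= 62%",
    3: "63% <= RSOC <= 94%",
    4: "95% <= RSOC",
}

def battery_state(flashes):
    return RSOC_BANDS.get(flashes, "Unexpected flash count")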
Status¹          Description
Blinking amber   4 Hz blinking amber indicates a problem with the physical link.
Solid green      A valid logical (data activity) link exists with no active traffic.
Blinking green   A valid logical link exists with active traffic.

¹ 2-port adapter LEDs are shown. The 1-port adapters have only a single LED.
Operations
Power up the server
To power up the server, use one of the following methods:
•Press the Power On/Standby button.
•Use the virtual power button through iLO.
Power down the server
Before powering down the server for any upgrade or maintenance procedures, perform a backup of critical
server data and programs.
IMPORTANT: When the server is in standby mode, auxiliary power is still being provided to the system.
To power down the server, use one of the following methods:
•Press and release the Power On/Standby button.
This method initiates a controlled shutdown of applications and the OS before the server enters standby
mode.
•Press and hold the Power On/Standby button for more than 4 seconds to force the server to enter standby
mode.
This method forces the server to enter standby mode without properly exiting applications and the OS. If
an application stops responding, you can use this method to force a shutdown.
•Use a virtual power button selection through iLO.
This method initiates a controlled remote shutdown of applications and the OS before the server enters
standby mode.
Before proceeding, verify that the server is in standby mode by observing that the system power LED is
amber.
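The iLO virtual power button is also scriptable. As a sketch only (same hypothetical ILO_HOST and AUTH placeholders as the UID example, and the account needs power control rights), the standard Redfish ComputerSystem.Reset action covers both power-up and controlled shutdown:

# Sketch: press the iLO virtual power button via Redfish.
# "GracefulShutdown" asks the OS to shut down before standby;
# "On" powers the server back up; "ForceOff" matches the long press.
import requests

ILO_HOST = "https://ilo.example.net"
AUTH = ("admin", "password")

def virtual_power(reset_type):
    resp = requests.post(
        ILO_HOST + "/redfish/v1/Systems/1/Actions/ComputerSystem.Reset/",
        json={"ResetType": reset_type},
        auth=AUTH,
        verify=False,  # lab use only
    )
    resp.raise_for_status()

virtual_power("GracefulShutdown")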
Extending the server from the rack
WARNING: To reduce the risk of personal injury or equipment damage, be sure that the rack is
adequately stabilized before extending anything from the rack.
Procedure
Pull down the quick release levers on each side of the server, and then extend the server from the rack.
Removing the server from the rack
To remove the server from a Hewlett Packard Enterprise, Compaq-branded, Telco, or third-party rack:
Procedure
1. Power down the server.
2. Extend the server from the rack.
3. Disconnect the cabling and remove the server from the rack.
For more information, see the documentation that ships with the rack mounting option.
4. Place the server on a sturdy, level surface.
Secure cables using the cable management arm
For rack rail installation instructions, see the documentation that ships with the rack rails.
WARNING: To reduce the risk of electric shock, fire, or damage to the equipment:
•Do not insert the wrong connector into a port.
•Do not disable the power cord grounding plug. The grounding plug is an important safety feature.
•Plug the power cord into a grounded (earthed) electrical outlet that is easily accessible at all times.
•Unplug the power cord from the power supply to disconnect power to the equipment.
•Do not route the power cord where it can be walked on or pinched by items placed against it. Pay
particular attention to the plug, electrical outlet, and the point where the cord extends from the
server.
Procedure
1. After the server is racked, connect any peripheral devices to the server.
To identify components, see Rear panel components on page 17.
2. At the rear of the server, plug in the power cord to the power supply.
3. Install the power cord anchors.
4. Secure the cables to the cable management arm.
IMPORTANT: Leave enough slack in each of the cables to prevent damage to the cables when the
server is extended from the rack.
5. Connect the power cord to the AC power source.
Releasing the cable management arm
Release the cable management arm and then swing the arm away from the rack.
Remove the access panel
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal
system components to cool before touching them.
CAUTION: Do not operate the server for long periods with the access panel open or removed.
Operating the server in this manner results in improper airflow and improper cooling that can lead to
thermal damage.
Procedure
1. Power down the server.
2. Extend the server from the rack.
3. Open or unlock the locking latch, slide the access panel to the rear of the chassis, and remove the access
panel.
Install the access panel
Procedure
1. Place the access panel on top of the server with the latch open.
Allow the panel to extend past the rear of the server approximately 1.25 cm (0.5 in).
2. Push down on the latch.
The access panel slides to a closed position.
3. Tighten the security screw on the latch, if needed.
Removing the fan cage
CAUTION: Do not operate the server for long periods with the access panel open or removed.
Operating the server in this manner results in improper airflow and improper cooling that can lead to
thermal damage.
IMPORTANT: For optimum cooling, install fans in all primary fan locations.
Procedure
1. Power down the server.
2. Do one of the following:
•Disconnect each power cord from the power source.
•Disconnect each power cord from the server.
3. Do one of the following:
•Extend the server from the rack.
•Remove the server from the rack.
4. Remove the access panel.
5. Remove the air baffle.
6. Remove the fan cage.
Installing the fan cage
CAUTION: Do not operate the server for long periods with the access panel open or removed.
Operating the server in this manner results in improper airflow and improper cooling that can lead to
thermal damage.
IMPORTANT: For optimum cooling, install fans in all primary fan locations.
Removing the air baffle or midplane drive cage
CAUTION: Do not detach the cable that connects the battery pack to the cache module. Detaching the
cable causes any unsaved data in the cache module to be lost.
CAUTION: For proper cooling, do not operate the server without the access panel, baffles, expansion
slot covers, or blanks installed. If the server supports hot-plug components, minimize the amount of time
the access panel is open.
Procedure
1. Power down the server.
2. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
3. Do one of the following:
•Extend the server from the rack.
•Remove the server from the rack.
4. Remove the access panel.
5. Do one of the following:
•Remove the air baffle.
•Remove the 4LFF midplane drive cage:
a. Disconnect all cables.
b. Remove all drives.
Be sure to note the location of each drive.
c. Remove the drive cage.
CAUTION: Do not drop the drive cage on the system board. Dropping the drive cage on the
system board might damage the system or components. Remove all drives and use two hands
when installing or removing the drive cage.
Installing the air baffle
Procedure
1. Observe the following alerts.
CAUTION: For proper cooling, do not operate the server without the access panel, baffles,
expansion slot covers, or blanks installed. If the server supports hot-plug components, minimize the
amount of time the access panel is open.
CAUTION: Do not detach the cable that connects the battery pack to the cache module. Detaching
the cable causes any unsaved data in the cache module to be lost.
2. Install the air baffle.
Removing a riser cage
CAUTION: To prevent damage to the server or expansion boards, power down the server and remove
all AC power cords before removing or installing the PCI riser cage.
Procedure
1. Power down the server.
2. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
3. Do one of the following:
•Extend the server from the rack.
•Remove the server from the rack.
4. Remove the access panel.
5. Remove the riser cage:
•Primary and secondary riser cages
•Tertiary riser cage
Removing a riser slot blank
CAUTION: To prevent improper cooling and thermal damage, do not operate the server unless all PCI
slots have either an expansion slot cover or an expansion board installed.
Procedure
1. Power down the server.
2. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
3. Do one of the following:
•Extend the server from the rack.
•Remove the server from the rack.
4. Remove the access panel.
5. Remove the riser cage.
6. Remove the blank.
Removing the hard drive blank
Remove the component as indicated.
Accessing the Systems Insight Display
The Systems Insight Display is supported only on SFF models.
Procedure
1. Press and release the panel.
2. After the display fully ejects, rotate the display to view the LEDs.
Setup
HPE support services
Delivered by experienced, certified engineers, HPE support services help you keep your servers up and
running with support packages tailored specifically for HPE ProLiant systems. HPE support services let you
integrate both hardware and software support into a single package. A number of service level options are
available to meet your business and IT needs.
HPE support services offer upgraded service levels to expand the standard product warranty with easy-to-buy, easy-to-use support packages that will help you make the most of your server investments. Some of the HPE support services for hardware, software, or both are:
•Foundation Care – Keep systems running.
◦6-Hour Call-to-Repair
◦4-Hour 24x7
◦Next Business Day
•Proactive Care – Help prevent service incidents and get access to technical experts when an incident occurs.
◦6-Hour Call-to-Repair
◦4-Hour 24x7
◦Next Business Day
•Startup and implementation services for both hardware and software
•HPE Education Services – Help train your IT staff.
For more information on HPE support services, see the Hewlett Packard Enterprise website.
Set up the server
Prerequisites
Before setting up the server:
•Download the latest SPP (support validation required):
http://www.hpe.com/servers/spp/download
•Verify that your OS or virtualization software is supported:
http://www.hpe.com/info/ossupport
•Read the operational requirements for the server:
Operational requirements on page 60
•Read the safety and compliance information on the HPE website:
◦For smart array SR controllers, use HPE Smart Storage Administrator to create arrays:
a. From the boot screen, press F10 to run Intelligent Provisioning.
b. From Intelligent Provisioning, run HPE Smart Storage Administrator.
◦For smart array MR controllers, use the UEFI System Configuration to create arrays.
For procedures on creating arrays with MR controllers, see the following guide in the information library:
HPE Smart Array P824i-p MR Gen10 User Guide
IMPORTANT: Smart array MR controllers are not supported by Intelligent Provisioning or
Smart Storage Administrator.
NOTE: Before installing an OS with a smart array MR controller, configure the drives. If the drives
are not configured, the OS will not detect the drives during installation. For more information, see
the Smart Array MR user guide for your controller.
•If no controller is installed, do one of the following:
  ◦ AHCI is enabled by default. You can deploy an OS or virtualization software.
  ◦ Disable AHCI, enable software RAID, and then create an array:
    a. From the boot screen, press F9 to run UEFI System Utilities.
    b. From the UEFI System Utilities screen, select System Configuration > BIOS/Platform Configuration (RBSU) > Storage Options > SATA Controller Options > Embedded SATA Configuration > Smart Array SW RAID Support.
    c. Enable SW RAID.
    d. Save the configuration and reboot the server.
    e. Create an array:
       I. From the boot screen, press F9 to run UEFI System Utilities.
       II. From the UEFI System Utilities screen, select System Configuration > Embedded
Press F10 at the boot screen to run Intelligent Provisioning.
IMPORTANT: Smart array MR controllers are not supported by Intelligent Provisioning or Smart
Storage Administrator.
•Manually deploy an OS.
a. Insert the installation media.
For remote management, click Virtual Drives in the iLO remote console to mount images, drivers,
or files to a virtual folder. If a storage driver is required to install the OS, use the virtual folder to store
the driver.
b. Press F11 at the boot screen to select the boot device.
c. After the OS is installed, update the drivers.
9. Register the server (http://www.hpe.com/info/register).
Operational requirements
Space and airflow requirements
To allow for servicing and adequate airflow, observe the following space and airflow requirements when
deciding where to install a rack:
•Leave a minimum clearance of 63.5 cm (25 in) in front of the rack.
•Leave a minimum clearance of 76.2 cm (30 in) behind the rack.
•Leave a minimum clearance of 121.9 cm (48 in) from the back of the rack to the back of another rack or
row of racks.
Hewlett Packard Enterprise servers draw in cool air through the front door and expel warm air through the
rear door. Therefore, the front and rear rack doors must be adequately ventilated to allow ambient room air to
enter the cabinet, and the rear door must be adequately ventilated to allow the warm air to escape from the
cabinet.
CAUTION: To prevent improper cooling and damage to the equipment, do not block the ventilation
openings.
When vertical space in the rack is not filled by a server or rack component, the gaps between the components
cause changes in airflow through the rack and across the servers. Cover all gaps with blanking panels to
maintain proper airflow.
CAUTION: Always use blanking panels to fill empty vertical spaces in the rack. This arrangement
ensures proper airflow. Using a rack without blanking panels results in improper cooling that can lead to
thermal damage.
The 9000 and 10000 Series Racks provide proper server cooling from flow-through perforations in the front
and rear doors that provide 64 percent open area for ventilation.
CAUTION: When using a Compaq branded 7000 series rack, install the high airflow rack door insert
(PN 327281-B21 for 42U rack, PN 157847-B21 for 22U rack) to provide proper front-to-back airflow and
cooling.
CAUTION: If a third-party rack is used, observe the following additional requirements to ensure
adequate airflow and to prevent damage to the equipment:
•Front and rear doors—If the 42U rack includes closing front and rear doors, you must allow 5,350 sq
cm (830 sq in) of holes evenly distributed from top to bottom to permit adequate airflow (equivalent to
the required 64 percent open area for ventilation).
•Side—The clearance between the installed rack component and the side panels of the rack must be
a minimum of 7 cm (2.75 in).
Temperature requirements
To ensure continued safe and reliable equipment operation, install or position the system in a well-ventilated,
climate-controlled environment.
The maximum recommended ambient operating temperature (TMRA) for most server products is 35°C
(95°F). The temperature in the room where the rack is located must not exceed 35°C (95°F).
CAUTION: To reduce the risk of damage to the equipment when installing third-party options:
•Do not permit optional equipment to impede airflow around the server or to increase the internal rack
temperature beyond the maximum allowable limits.
•Do not exceed the manufacturer’s TMRA.
Power requirements
Installation of this equipment must comply with local and regional electrical regulations governing the
installation of information technology equipment by licensed electricians. This equipment is designed to
operate in installations covered by NFPA 70, 1999 Edition (National Electric Code) and NFPA-75, 1992 (code
for Protection of Electronic Computer/Data Processing Equipment). For electrical power ratings on options,
refer to the product rating label or the user documentation supplied with that option.
WARNING: To reduce the risk of personal injury, fire, or damage to the equipment, do not overload the
AC supply branch circuit that provides power to the rack. Consult the electrical authority having
jurisdiction over wiring and installation requirements of your facility.
CAUTION: Protect the server from power fluctuations and temporary interruptions with a regulating
uninterruptible power supply. This device protects the hardware from damage caused by power surges
and voltage spikes and keeps the system in operation during a power failure.
Electrical grounding requirements
The server must be grounded properly for proper operation and safety. In the United States, you must install
the equipment in accordance with NFPA 70, 1999 Edition (National Electric Code), Article 250, as well as any
local and regional building codes. In Canada, you must install the equipment in accordance with Canadian
Standards Association, CSA C22.1, Canadian Electrical Code. In all other countries, you must install the
equipment in accordance with any regional or national electrical wiring codes, such as the International
Electrotechnical Commission (IEC) Code 364, parts 1 through 7. Furthermore, you must be sure that all
power distribution devices used in the installation, such as branch wiring and receptacles, are listed or
certified grounding-type devices.
Because of the high ground-leakage currents associated with multiple servers connected to the same power
source, Hewlett Packard Enterprise recommends the use of a PDU that is either permanently wired to the
building’s branch circuit or includes a nondetachable cord that is wired to an industrial-style plug. NEMA
locking-style plugs or those complying with IEC 60309 are considered suitable for this purpose. Using
common power outlet strips for the server is not recommended.
Connecting a DC power cable to a DC power source
WARNING: To reduce the risk of electric shock or energy hazards:
•This equipment must be installed by trained service personnel, as defined by the NEC and IEC
60950-1, Second Edition, the standard for Safety of Information Technology Equipment.
•Connect the equipment to a reliably grounded Secondary circuit source. A Secondary circuit has no
direct connection to a Primary circuit and derives its power from a transformer, converter, or
equivalent isolation device.
•The branch circuit overcurrent protection must be rated 27 A.
WARNING: When installing a DC power supply, the ground wire must be connected before the positive
or negative leads.
WARNING: Remove power from the power supply before performing any installation steps or
maintenance on the power supply.
CAUTION: The server equipment connects the earthed conductor of the DC supply circuit to the
earthing conductor at the equipment. For more information, see the documentation that ships with the
power supply.
CAUTION: If the DC connection exists between the earthed conductor of the DC supply circuit and the
earthing conductor at the server equipment, the following conditions must be met:
•This equipment must be connected directly to the DC supply system earthing electrode conductor or
to a bonding jumper from an earthing terminal bar or bus to which the DC supply system earthing
electrode conductor is connected.
•This equipment should be located in the same immediate area (such as adjacent cabinets) as any
other equipment that has a connection between the earthed conductor of the same DC supply circuit
and the earthing conductor, and also the point of earthing of the DC system. The DC system should
be earthed elsewhere.
•The DC supply source is to be located within the same premises as the equipment.
•Switching or disconnecting devices should not be in the earthed circuit conductor between the DC
source and the point of connection of the earthing electrode conductor.
To connect a DC power cable to a DC power source:
1. Cut the DC power cord ends no shorter than 150 cm (59.06 in).
2. If the power source requires ring tongues, use a crimping tool to install the ring tongues on the power cord
wires.
IMPORTANT: The ring terminals must be UL approved and accommodate 12 gauge wires.
IMPORTANT: The minimum nominal thread diameter of a pillar or stud type terminal must be 3.5
mm (0.138 in); the diameter of a screw type terminal must be 4.0 mm (0.157 in).
3. Stack each same-colored pair of wires and then attach them to the same power source. The power cord
consists of three wires (black, red, and green).
For more information, see the documentation that ships with the power supply.
Server warnings and cautions
WARNING: This server is heavy. To reduce the risk of personal injury or damage to the equipment:
•Observe local occupational health and safety requirements and guidelines for manual material
handling.
•Get help to lift and stabilize the product during installation or removal, especially when the product is
not fastened to the rails. Hewlett Packard Enterprise recommends a minimum of two people for all rack server installations. If the server is installed higher than chest level, a third person may be required to help align the server.
•Use caution when installing the server in or removing the server from the rack; it is unstable when
not fastened to the rails.
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal
system components to cool before touching them.
WARNING: To reduce the risk of personal injury, electric shock, or damage to the equipment, remove
the power cord to remove power from the server. The front panel Power On/Standby button does not
completely shut off system power. Portions of the power supply and some internal circuitry remain active
until AC/DC power is removed.
WARNING: To reduce the risk of fire or burns after removing the energy pack:
•Do not disassemble, crush, or puncture the energy pack.
•Do not short external contacts.
•Do not dispose of the energy pack in fire or water.
After power is disconnected, battery voltage might still be present for 1s to 160s.
AVERTISSEMENT: Pour réduire les risques d'incendie ou de brûlures après le retrait du module
batterie :
•N'essayez pas de démonter, d'écraser ou de percer le module batterie.
•Ne court-circuitez pas ses contacts externes.
•Ne jetez pas le module batterie dans le feu ou dans l'eau.
Après avoir déconnecté l'alimentation, une tension peut subsister dans la batterie durant 1 à 160
secondes.
CAUTION: Protect the server from power fluctuations and temporary interruptions with a regulating
uninterruptible power supply. This device protects the hardware from damage caused by power surges
and voltage spikes and keeps the system in operation during a power failure.
CAUTION: Do not operate the server for long periods with the access panel open or removed.
Operating the server in this manner results in improper airflow and improper cooling that can lead to
thermal damage.
Rack warnings
WARNING: To reduce the risk of personal injury or damage to the equipment, be sure that:
•The leveling jacks are extended to the floor.
•The full weight of the rack rests on the leveling jacks.
•The stabilizing feet are attached to the rack if it is a single-rack installation.
•The racks are coupled together in multiple-rack installations.
•Only one component is extended at a time. A rack may become unstable if more than one
component is extended for any reason.
WARNING: To reduce the risk of personal injury or equipment damage when unloading a rack:
•At least two people are needed to safely unload the rack from the pallet. An empty 42U rack can
weigh as much as 115 kg (253 lb), can stand more than 2.1 m (7 ft) tall, and might become unstable
when being moved on its casters.
•Never stand in front of the rack when it is rolling down the ramp from the pallet. Always handle the
rack from both sides.
WARNING: To reduce the risk of personal injury or damage to the equipment, adequately stabilize the
rack before extending a component outside the rack. Extend only one component at a time. A rack may
become unstable if more than one component is extended.
WARNING: When installing a server in a telco rack, be sure that the rack frame is adequately secured
at the top and bottom to the building structure.
Electrostatic discharge
Be aware of the precautions you must follow when setting up the system or handling components. A
discharge of static electricity from a finger or other conductor may damage system boards or other static-sensitive devices. This type of damage may reduce the life expectancy of the system or component.
To prevent electrostatic damage:
•Avoid hand contact by transporting and storing products in static-safe containers.
•Keep electrostatic-sensitive parts in their containers until they arrive at static-free workstations.
•Place parts on a grounded surface before removing them from their containers.
•Avoid touching pins, leads, or circuitry.
•Always be properly grounded when touching a static-sensitive component or assembly. Use one or more
of the following methods when handling or installing electrostatic-sensitive parts:
◦Use a wrist strap connected by a ground cord to a grounded workstation or computer chassis. Wrist
straps are flexible straps with a minimum of 1 megohm ±10 percent resistance in the ground cords. To
provide proper ground, wear the strap snug against the skin.
◦Use heel straps, toe straps, or boot straps at standing workstations. Wear the straps on both feet when
standing on conductive floors or dissipating floor mats.
◦Use conductive field service tools.
◦Use a portable field service kit with a folding static-dissipating work mat.
If you do not have any of the suggested equipment for proper grounding, have an authorized reseller
install the part.
For more information on static electricity or assistance with product installation, contact an authorized reseller.
Server box contents
The server shipping box contains the following contents:
•A server
•A power cord
•Rack-mounting hardware (optional)
•Documentation
Installing hardware options
Install any hardware options before initializing the server. For options installation information, refer to the
option documentation. For server-specific information, refer to "Hardware options installation."
POST screen options
When the server is powered on, the POST screen is displayed. The following options are displayed:
•System Utilities (F9)
Use this option to configure the system BIOS.
•Intelligent Provisioning (F10)
Use this option to deploy an operating system or configure storage.
•Boot order (F11)
Use this option to make a one-time boot selection.
•Network boot (F12)
Use this option to boot the server from the network.
Installing or deploying an operating system
Before installing an operating system, observe the following:
•Be sure to read the HPE UEFI requirements for ProLiant servers on the Hewlett Packard Enterprise website. If UEFI requirements are not met, you might experience boot failures or other errors when installing the operating system.
•Update firmware before using the server for the first time, unless software or components require an older
version. For more information, see "Keeping the system current" on page 182.
•For the latest information on supported operating systems, see the Hewlett Packard Enterprise website.
•The server does not ship with OS media. All system software and firmware is preloaded on the server.
Registering the server
To experience quicker service and more efficient support, register the product at the Hewlett Packard
Enterprise Product Registration website.
Hardware options installation
Product QuickSpecs
For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).
Introduction
Install any hardware options before initializing the server. For options installation information, see the option
documentation. For server-specific information, use the procedures in this section.
If multiple options are being installed, read the installation instructions for all the hardware options to identify
similar steps and streamline the installation process.
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal
system components to cool before touching them.
CAUTION: To prevent damage to electrical components, properly ground the server before beginning
any installation procedure. Improper grounding can cause electrostatic discharge.
Installing a fan filter into the security bezel
The fan filter installs into the security bezel. To add a fan filter, the server must have a security bezel.
Installing the bezel and bezel lock
Power supply options
Hot-plug power supply calculations
For hot-plug power supply specifications and calculators to determine electrical and heat loading for the
server, see the Hewlett Packard Enterprise Power Advisor website (http://www.hpe.com/info/poweradvisor/online).
Installing a redundant hot-plug power supply
CAUTION: All power supplies installed in the server must have the same output power capacity. Verify
that all power supplies have the same part number and label color. The system becomes unstable and
might shut down if it detects different power supplies.
CAUTION: To prevent improper cooling and thermal damage, do not operate the server unless all bays
are populated with either a component or a blank.
Procedure
1. Release the cable management arm to access the rear panel.
2. Remove the blank.
WARNING: To reduce the risk of personal injury from hot surfaces, allow the power supply or power
supply blank to cool before touching it.
3. Insert the power supply into the power supply bay until it clicks into place.
4. Connect the power cord to the power supply.
5. Route the power cord.
Use the cable management arm and best practices when routing cords and cables.
6. Connect the power cord to the power source.
7. Observe the power supply LED.
Energy pack options
Hewlett Packard Enterprise offers two centralized backup power source options to back up write cache
content on P-class Smart Array controllers in case of an unplanned server power outage.
•HPE Smart Storage Battery
•HPE Smart Storage Hybrid Capacitor
IMPORTANT: The HPE Smart Storage Hybrid Capacitor is only supported on Gen10 and later
servers that support the 96W HPE Smart Storage Battery.
One energy pack option can support multiple devices. An energy pack option is required for P-class Smart
Array controllers. Once installed, the status of the energy pack displays in HPE iLO. For more information,
see the HPE iLO user guide on the Hewlett Packard Enterprise website (http://www.hpe.com/support/ilo-docs).
HPE Smart Storage Battery
The HPE Smart Storage Battery supports the following devices:
•HPE Smart Array SR controllers
•HPE Smart Array MR controllers
•NVDIMMs
IMPORTANT: To support NVDIMMs, the HPE Smart Storage Battery must be installed.
A single 96W battery can support up to 24 devices.
After the battery is installed, it might take up to two hours to charge. Controller features requiring backup
power are not re-enabled until the battery is capable of supporting the backup power.
This server supports the 96W HPE Smart Storage Battery with the 145mm cable.
Installing a Smart Storage Battery
Prerequisites
Before you perform this procedure, make sure that you have the following items available:
The components included with the hardware option kit
NOTE: System ROM and firmware messages might display "energy pack" in place of "Smart Storage
Battery." Energy pack refers to both HPE Smart Storage batteries and HPE Smart Storage Hybrid capacitors.
Procedure
1. Power down the server.
2. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
3. Do one of the following:
•Extend the server from the rack.
•Remove the server from the rack.
4. Remove the access panel.
5. Do one of the following:
•Remove the air baffle.
•If installed on LFF models, remove the midplane drive cage.
6. Install the Smart Storage battery.
7. Install the cable.
8. Install the fan cage.
9. Install the air baffle.
10. Install the access panel.
11. Slide the server into the rack.
12. Connect each power cord to the server.
13. Connect each power cord to the power source.
14. Power up the server.
The installation is complete.
HPE Smart Storage Hybrid Capacitor
The HPE Smart Storage Hybrid Capacitor supports the following devices:
•HPE Smart Array SR controllers
•HPE Smart Array MR controllers
IMPORTANT: NVDIMMs are only supported by the HPE Smart Storage Battery.
The capacitor pack can support up to three devices.
This server supports the HPE Smart Storage Hybrid Capacitor with the 145mm cable.
Before installing the HPE Smart Storage Hybrid Capacitor, verify that the system BIOS meets the minimum
firmware requirements to support the capacitor pack.
IMPORTANT: If the system BIOS or controller firmware is older than the minimum recommended
firmware versions, the capacitor pack will only support one device.
The capacitor pack is fully charged after the system boots.
Minimum firmware versions
Product                                        Minimum firmware version
HPE ProLiant DL380 Gen10 Server system ROM     2.00
HPE Smart Array SR controllers                 1.90
HPE Smart Array MR controllers                 24.23.0-0041
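A pre-install check against this table can be scripted. In the sketch below, the installed versions are illustrative placeholders (read the real values from iLO, the SPP report, or your inventory tooling); the comparison simply splits version strings into numeric chunks.

```python
# Sketch of a pre-install check against the minimum firmware table above.
# Installed versions are placeholder values for illustration.
import re

def version_key(v: str):
    """Split a version string into comparable integer chunks."""
    return [int(x) for x in re.findall(r"\d+", v)]

minimums = {
    "system ROM": "2.00",
    "Smart Array SR": "1.90",
    "Smart Array MR": "24.23.0-0041",
}

installed = {  # placeholder inventory
    "system ROM": "2.10",
    "Smart Array SR": "1.90",
    "Smart Array MR": "24.22.0-0090",
}

for name, minimum in minimums.items():
    ok = version_key(installed[name]) >= version_key(minimum)
    print(f"{name}: installed {installed[name]}, minimum {minimum} -> "
          f"{'OK' if ok else 'UPGRADE REQUIRED'}")
```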
Installing an energy pack option for HPE Smart Storage
Prerequisites
Before you perform this procedure, make sure that you have the following items available:
•T-10 Torx screwdriver
•The components included with the hardware option kit
Procedure
1. Power down the server.
2. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
3. Do one of the following:
•Extend the server from the rack.
•Remove the server from the rack.
4. Remove the access panel.
5. Remove the air baffle.
6. Remove the fan cage.
7. Install the energy pack option.
8. Route and connect the cable.
CAUTION: Do not connect the energy pack while the server is operating. Verify that all power to
the server has been removed.
9. Install the fan cage.
10. Install the access panel.
11. Install the server into the rack.
12. Connect each power cord to the server.
13. Connect each power cord to the power source.
14. Power up the server.
Drive options
Drive guidelines
Depending on the configuration, the server supports SAS, SATA, and NVMe drives.
Observe the following general guidelines:
•The system automatically sets all drive numbers.
•If only one hard drive is used, install it in the bay with the lowest drive number.
For drive numbering, see "Drive bay numbering".
•The NVMe SSD is a PCIe bus device. Devices attached to a PCIe bus cannot be removed until the device and bus have completed and ceased signal/traffic flow.
Do not remove an NVMe SSD from the drive bay while the Do Not Remove button LED is flashing. The Do
Not Remove button LED flashes to indicate that the device is still in use. Removal of the NVMe SSD
before the device has completed and ceased signal/traffic flow can cause loss of data.
•Drives with the same capacity provide the greatest storage space efficiency when grouped into the same
drive array.
Supported drive carriers
Depending on the drive cage, the server supports the following drive carriers:
•SFF Smart Carrier (SC)
•SFF Smart Carrier NVMe (SCN)
•SFF Smart Carrier M.2 (SCM)
•LFF Smart Carrier (SC)
•LFF to SFF Smart Carrier Converter
Installing a hot-plug SAS or SATA drive
Procedure
1. Remove the drive blank.
2. Prepare the drive.
3. Install the drive.
4. Observe the LED status of the drive.
Installing an NVMe drive
CAUTION: To prevent improper cooling and thermal damage, do not operate the server unless all drive
and device bays are populated with either a component or a blank.
Procedure
1. Remove the drive blank.
2. Prepare the drive.
3. Install the drive.
4. Observe the LED status of the drive.
Installing a uFF drive and SCM drive carrier
IMPORTANT: Not all drive bays support the drive carrier. To find supported bays, see the server
QuickSpecs.
Procedure
1. If needed, install the uFF drive into the drive carrier.
2. Remove the drive blank.
3. Install the drives.
Push firmly near the ejection handle until the latching spring engages with the drive bay.
4. Power on the server.
To configure the drive, use HPE Smart Storage Administrator.
Installing an M.2 drive
This procedure is for replacing M.2 drives located on an expansion card, riser, or the system board only. Do
not use this procedure to replace uFF drives.
Prerequisites
Before you perform this procedure, make sure that you have the following items available:
•The components included with the hardware option kit
•T-10 Torx screwdriver
Procedure
1. Power down the server.
2. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
3. Do one of the following:
•Extend the server from the rack.
•Remove the server from the rack.
4. Remove the access panel.
5. Remove the riser cage.
6. Install the drive.
The installation is complete.
Fan options
CAUTION: To avoid damage to server components, fan blanks must be installed in fan bays 1 and 2 in
a single-processor configuration.
CAUTION: To avoid damage to the equipment, do not operate the server for extended periods of time if
the server does not have the optimal number of fans installed. Although the server might boot, Hewlett
Packard Enterprise does not recommend operating the server without the required fans installed and
operating.
Valid fan configurations are listed in the following table.
Configuration                                                             Fan bay 1   Fan bay 2   Fan bay 3   Fan bay 4   Fan bay 5   Fan bay 6
1 processor                                                               Fan blank   Fan blank   Fan         Fan         Fan         Fan
1 processor, 24-SFF or 12-LFF configuration with high-performance fans    Fan         Fan         Fan         Fan         Fan         Fan
2 processors                                                              Fan         Fan         Fan         Fan         Fan         Fan
For a single-processor configuration, excluding 24-SFF and 12-LFF configurations, four fans and two blanks
are required in specific fan bays for redundancy. A fan failure or missing fan causes a loss of redundancy. A
second fan failure or missing fan causes an orderly shutdown of the server.
For a dual-processor configuration or single-processor 24-SFF or 12-LFF configurations, six fans are required
for redundancy. A fan failure or missing fan causes a loss of redundancy. A second fan failure or missing fan
causes an orderly shutdown of the server.
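The valid layouts in the table above are easy to encode as data. The following sketch is one way to flag a miswired bay before closing the chassis; the bay contents are supplied by hand, since nothing here reads the actual hardware.

```python
# Sketch validating a fan bay layout against the configuration table above.
# Bay contents are given as "fan" or "blank" for bays 1-6.

VALID_LAYOUTS = {
    "1 processor": ["blank", "blank", "fan", "fan", "fan", "fan"],
    "1 processor, 24-SFF/12-LFF (high-performance fans)": ["fan"] * 6,
    "2 processors": ["fan"] * 6,
}

def check_layout(config, bays):
    expected = VALID_LAYOUTS[config]
    for bay, (have, want) in enumerate(zip(bays, expected), start=1):
        if have != want:
            print(f"Bay {bay}: found {have!r}, expected {want!r}")
            return False
    return True

# Example: a single-processor server missing the bay 2 blank.
print(check_layout("1 processor", ["blank", "fan", "fan", "fan", "fan", "fan"]))
```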
High-performance fans might be necessary in 24-SFF and 12-LFF configurations for the following
installations:
•Optional GPU riser installations
•ASHRAE compliant configurations
For more information, see the Hewlett Packard Enterprise website.
The server supports variable fan speeds. The fans operate at minimum speed until a temperature change requires a fan speed increase to cool the server. The server shuts down during the following temperature-related scenarios:
•At POST and in the OS, iLO performs an orderly shutdown if a cautionary temperature level is detected. If
the server hardware detects a critical temperature level before an orderly shutdown occurs, the server
performs an immediate shutdown.
•When the Thermal Shutdown feature is disabled in the BIOS/Platform Configuration (RBSU), iLO does not
perform an orderly shutdown when a cautionary temperature level is detected. Disabling this feature does
not disable the server hardware from performing an immediate shutdown when a critical temperature level
is detected.
CAUTION: A thermal event can damage server components when the Thermal Shutdown feature is
disabled in the BIOS/Platform Configuration (RBSU).
Installing high-performance fans
CAUTION: To prevent damage to the server, ensure that all DIMM latches are closed and locked before installing the fans.
CAUTION: Do not operate the server for long periods with the access panel open or removed.
Operating the server in this manner results in improper airflow and improper cooling that can lead to
thermal damage.
Procedure
1. Extend the server from the rack.
2. Remove the access panel.
3. If installed, remove all fan blanks.
4. Remove the air baffle.
5. Remove all standard fans.
IMPORTANT: Do not mix standard fans and high-performance fans in the same server.
6. Install high-performance fans in all fan bays.
7. Install the air baffle.
8. Install the access panel.
9. Install the server into the rack.
Memory options
IMPORTANT: This server does not support mixing LRDIMMs and RDIMMs. Attempting to mix any
combination of these DIMMs can cause the server to halt during BIOS initialization. All memory installed
in the server must be of the same type.
DIMM-processor compatibility
The installed processor determines the type of DIMM that is supported in the server:
•First-generation Intel Xeon Scalable processors support DDR4-2666 DIMMs.
•Second-generation Intel Xeon Scalable processors support DDR4-2933 DIMMs.
Mixing DIMM types is not supported. Install only the supported DDR4-2666 or DDR4-2933 DIMMs in the
server.
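The rule reduces to a one-line lookup from processor generation to supported DIMM speed, as in this minimal sketch (processor-generation detection is assumed to happen elsewhere):

```python
# Minimal lookup expressing the DIMM-processor rule above.

SUPPORTED_DIMM = {
    1: "DDR4-2666",  # first-generation Intel Xeon Scalable
    2: "DDR4-2933",  # second-generation Intel Xeon Scalable
}

def supported_dimm(processor_generation):
    return SUPPORTED_DIMM[processor_generation]

print(supported_dimm(2))  # -> DDR4-2933; mixing DIMM types is unsupported
```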
DIMM and NVDIMM population information
For specific DIMM and NVDIMM population information, see the DIMM population guidelines on the Hewlett
Packard Enterprise website (http://www.hpe.com/docs/memory-population-rules).
HPE SmartMemory speed information
For more information about memory speed, see the Hewlett Packard Enterprise website (https://www.hpe.com/docs/memory-speed-table).
Installing a DIMM
The server supports up to 24 DIMMs.
Prerequisites
Before installing this option, be sure you have the following:
The components included with the hardware option kit
For more information on specific options, see the server QuickSpecs on the Hewlett Packard Enterprise
website.
Procedure
1. Power down the server.
2. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
3. Do one of the following:
a. Extend the server from the rack.
b. Remove the server from the rack.
4. Remove the access panel.
5. Open the DIMM slot latches.
6. Install the DIMM.
7. Install the access panel.
8. Install the server in the rack.
9. Connect each power cord to the server.
10. Connect each power cord to the power source.
11. Power up the server.
Use the BIOS/Platform Configuration (RBSU) in the UEFI System Utilities to configure the memory mode.
For more information about LEDs and troubleshooting failed DIMMs, see "Systems Insight Display
combined LED descriptions."
HPE 16GB NVDIMM option
HPE NVDIMMs are flash-backed NVDIMMs used as fast storage and are designed to eliminate smaller
storage bottlenecks. The HPE 16GB NVDIMM for HPE ProLiant Gen10 servers is ideal for smaller database
storage bottlenecks, write caching tiers, and any workload constrained by storage bottlenecks.
The HPE 16GB NVDIMM is supported on select HPE ProLiant Gen10 servers with first generation Intel Xeon
Scalable processors. The server can support up to 12 NVDIMMs in 2 socket servers (up to 192GB) and up to
24 NVDIMMs in 4 socket servers (up to 384GB). The HPE Smart Storage Battery provides backup power to
the memory slots allowing data to be moved from the DRAM portion of the NVDIMM to the Flash portion for
persistence during a power down event.
For more information on HPE NVDIMMs, see the Hewlett Packard Enterprise website (http://www.hpe.com/
info/persistentmemory).
NVDIMM-processor compatibility
HPE 16GB NVDIMMs are only supported in servers with first generation Intel Xeon Scalable processors
installed.
Server requirements for NVDIMM support
Before installing an HPE 16GB NVDIMM in a server, make sure that the following components and software
are available:
•A supported HPE server using Intel Xeon Scalable Processors: For more information, see the NVDIMM
QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).
•An HPE Smart Storage Battery
•A minimum of one regular DIMM: The system cannot have only NVDIMM-Ns installed.
•A supported operating system with persistent memory/NVDIMM drivers. For the latest software
information, see the Hewlett Packard Enterprise website (http://persistentmemory.hpe.com).
•For minimum firmware versions, see the HPE 16GB NVDIMM User Guide on the Hewlett Packard
Enterprise website (http://www.hpe.com/info/nvdimm-docs).
To determine NVDIMM support for your server, see the server QuickSpecs on the Hewlett Packard Enterprise
website (http://www.hpe.com/info/qs).
Installing an NVDIMM
CAUTION: To avoid damage to the hard drives, memory, and other system components, the air baffle,
drive blanks, and access panel must be installed when the server is powered up.
CAUTION: To avoid damage to the hard drives, memory, and other system components, be sure to
install the correct DIMM baffles for your server model.
CAUTION: DIMMs are keyed for proper alignment. Align notches in the DIMM with the corresponding
notches in the DIMM slot before inserting the DIMM. Do not force the DIMM into the slot. When installed
properly, not all DIMMs will face in the same direction.
CAUTION: Electrostatic discharge can damage electronic components. Be sure you are properly
grounded before beginning this procedure.
CAUTION: Failure to properly handle DIMMs can damage the DIMM components and the system board
connector. For more information, see the DIMM handling guidelines in the troubleshooting guide for your
product on the Hewlett Packard Enterprise website:
CAUTION: Unlike traditional storage devices, NVDIMMs are fully integrated with the ProLiant server.
Data loss can occur when system components, such as the processor or HPE Smart Storage Battery,
fail. The HPE Smart Storage Battery is a critical component required to perform the backup functionality
of NVDIMMs. It is important to act when HPE Smart Storage Battery related failures occur. Always
follow best practices for ensuring data protection.
Prerequisites
Before installing an NVDIMM, be sure the server meets the Server requirements for NVDIMM support on
page 83.
Procedure
1. Power down the server.
2. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
3. Do one of the following:
a. Extend the server from the rack.
b. Remove the server from the rack.
4. Remove the access panel.
5. If the Smart Storage battery is not installed, do one of the following:
•Remove the air baffle.
•If installed on LFF models, remove the midplane drive cage.
6. Locate any NVDIMMs already installed in the server.
7. Verify that all LEDs on any installed NVDIMMs are off.
8. Install the NVDIMM.
9. If it is not already installed, install the Smart Storage battery.
10. Install the access panel.
11. Slide or install the server into the rack.
12. Connect each power cord to the server.
13. Power up the server.
14. If required, sanitize the NVDIMM-Ns. For more information, see NVDIMM sanitization on page 85.
Configuring the server for NVDIMMs
After installing NVDIMMs, configure the server for NVDIMMs. For information on configuring settings for
NVDIMMs, see the HPE 16GB NVDIMM User Guide on the Hewlett Packard Enterprise website
(http://www.hpe.com/info/nvdimm-docs).
The server can be configured for NVDIMMs using either of the following:
•UEFI System Utilities—Use System Utilities through the Remote Console to configure the server for
NVDIMM memory options by pressing the F9 key during POST. For more information about UEFI System
Utilities, see the Hewlett Packard Enterprise website (http://www.hpe.com/info/uefi/docs).
•iLO RESTful API for HPE iLO 5—For more information about configuring the system for NVDIMMs, see
the iLO RESTful API documentation on the Hewlett Packard Enterprise website.
NVDIMM sanitization
Media sanitization is defined by NIST SP800-88 Guidelines for Media Sanitization (Rev 1, Dec 2014) as "a
general term referring to the actions taken to render data written on media unrecoverable by both ordinary
and extraordinary means."
The specification defines the following levels:
•Clear: Overwrite user-addressable storage space using standard write commands; might not sanitize data
in areas not currently user-addressable (such as bad blocks and overprovisioned areas)
•Purge: Overwrite or erase all storage space that might have been used to store data using dedicated
device sanitize commands, such that data retrieval is "infeasible using state-of-the-art laboratory
techniques"
•Destroy: Ensure that data retrieval is "infeasible using state-of-the-art laboratory techniques" and render
the media unable to store data (such as disintegrate, pulverize, melt, incinerate, or shred)
The NVDIMM-N Sanitize options are intended to meet the Purge level.
For more information on sanitization for NVDIMMs, see the following sections in the HPE 16GB NVDIMM User Guide on the Hewlett Packard Enterprise website (http://www.hpe.com/info/nvdimm-docs):
•NVDIMM sanitization policies
•NVDIMM sanitization guidelines
•Setting the NVDIMM-N Sanitize/Erase on the Next Reboot Policy
NIST SP800-88 Guidelines for Media Sanitization (Rev 1, Dec 2014) is available for download from the NIST
website (http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-88r1.pdf).
NVDIMM relocation guidelines
Requirements for relocating NVDIMMs or a set of NVDIMMs when the data must be preserved
•The destination server hardware must match the original server hardware configuration.
•All System Utilities settings in the destination server must match the original System Utilities settings in the
original server.
•If NVDIMM-Ns are used with NVDIMM Interleaving ON mode in the original server, do the following:
◦Install the NVDIMMs in the same DIMM slots in the destination server.
◦Install the entire NVDIMM set (all the NVDIMM-Ns on the processor) on the destination server.
This guideline would apply when replacing a system board due to system failure.
If any of the requirements cannot be met during NVDIMM relocation, do the following:
◦Manually back up the NVDIMM-N data before relocating NVDIMM-Ns to another server.
◦Relocate the NVDIMM-Ns to another server.
◦Sanitize all NVDIMM-Ns on the new server before using them.
Requirements for relocating NVDIMMs or a set of NVDIMMs when the data does not have to be
preserved
If data on the NVDIMM-N or set of NVDIMM-Ns does not have to be preserved, do the following:
•Move the NVDIMM-Ns to the new location and sanitize all NVDIMM-Ns after installing them to the new
location. For more information, see NVDIMM sanitization on page 85.
•Observe all DIMM and NVDIMM population guidelines. For more information, see DIMM and NVDIMM population information on page 81.
•Observe the process for removing an NVDIMM.
•Observe the process for installing an NVDIMM.
•Review and configure the system settings for NVDIMMs. For more information, see Configuring the
server for NVDIMMs on page 85.
HPE Scalable Persistent Memory (CTO only)
HPE Scalable Persistent Memory is an integrated storage solution that runs at memory speeds with terabyte
capacity unlocking new levels of performance for your business workloads. It provides a complete hardware
and software solution utilizing the following components:
•DRAM for application performance
•A tier of flash for persistence
•A backup power source to move data from DRAM to flash
HPE Scalable Persistent Memory is ideal for enabling in-memory compute with persistence and any workload
that could benefit from low-latency DRAM-level performance. This option is available as HPE Factory
Configure To Order (CTO) SKUs only.
For configuration details for HPE Scalable Persistent Memory, see the HPE Scalable Persistent Memory User Guide at http://www.hpe.com/info/nvdimm-docs.
For more information about HPE Scalable Persistent Memory, see http://www.hpe.com/info/persistentmemory.
HPE Persistent Memory option
HPE Persistent Memory, which offers the flexibility to deploy as dense memory or fast storage and features
Intel Optane DC Persistent Memory, enables per-socket memory capacity of up to 3.0 TB. HPE Persistent
Memory, together with traditional volatile DRAM DIMMs, provide fast, high-capacity, cost-effective memory
and storage to transform big data workloads and analytics by enabling data to be stored, moved, and
processed quickly.
HPE Persistent Memory modules use the standard DIMM form factor and are installed alongside DIMMs in a
server memory slot. HPE Persistent Memory modules are designed for use only with second-generation Intel
Xeon Scalable processors, and are available in the following capacities:
•128 GB
•256 GB
•512 GB
HPE Persistent Memory modules are supported only in servers with second-generation Intel Xeon Scalable
processors installed.
HPE Persistent Memory population information
For specific population and configuration information, see the memory population guidelines on the Hewlett
Packard Enterprise website (http://www.hpe.com/docs/memory-population-rules).
System requirements for HPE Persistent Memory module support
IMPORTANT: Hewlett Packard Enterprise recommends that you implement best practice configurations
for high availability (HA) such as clustered configurations.
Before installing HPE Persistent Memory modules, make sure that the following components and software are
available:
•A supported HPE ProLiant Gen10 server or Synergy compute module using second-generation Intel Xeon
Scalable processors. For more information, see the product QuickSpecs on the Hewlett Packard
Enterprise website (http://www.hpe.com/support/persistentmemoryQS).
•HPE DDR4 Standard Memory RDIMMs or LRDIMMs (the number will vary based on your chosen
configuration).
•Supported firmware and drivers:
◦System ROM version 2.10 or later
◦Server Platform Services (SPS) Firmware version 04.01.04.296
◦HPE iLO 5 Firmware version 1.43
◦HPE Innovation Engine Firmware version 2.1.x or later
Download the required firmware and drivers from the Hewlett Packard Enterprise website (http://www.hpe.com/info/persistentmemory).
•A supported operating system:
◦Windows Server 2012 R2 with persistent memory drivers from Hewlett Packard Enterprise
◦Windows Server 2016 with persistent memory drivers from Hewlett Packard Enterprise
◦Windows Server 2019
◦Red Hat Enterprise Linux 7.6
◦Red Hat Enterprise Linux 8.0
◦SUSE Linux Enterprise Server 12 SP4
◦SUSE Linux Enterprise Server 15 with SUSE-SU-2019:0224-1 kernel update
◦SUSE Linux Enterprise Server 15 SP1 with SUSE-SU-2019:1550-1 kernel update
◦VMware vSphere 6.7 U2 + Express Patch 10 (ESXi670-201906002) or later (supports App Direct and
Memory modes)
◦VMware vSphere 6.5 U3 or later (supports Memory mode)
•Hardware and licensing requirements for optional encryption of the HPE Persistent Memory modules:
◦HPE TPM 2.0 (local key encryption)
◦HPE iLO Advanced License (remote key encryption)
◦Key management server (remote key encryption)
For more information, see the HPE Persistent Memory User Guide on the Hewlett Packard Enterprise website
(http://www.hpe.com/info/persistentmemory-docs).
Installing an HPE Persistent Memory module
Use this procedure only for new HPE Persistent Memory module installations. If you are migrating this HPE
Persistent Memory module from another server, see the HPE Persistent Memory User Guide on the Hewlett
Packard Enterprise website (http://www.hpe.com/info/persistentmemory-docs).
Prerequisites
Before you perform this procedure, make sure that you have the following items available:
•The components included with the hardware option kit
•A T-10 Torx screwdriver might be needed to unlock the access panel.
Procedure
1. Observe the following alerts:
CAUTION: DIMMs and HPE Persistent Memory modules are keyed for proper alignment. Align
notches on the DIMM or HPE Persistent Memory module with the corresponding notches in the slot
before installing the component. Do not force the DIMM or HPE Persistent Memory module into the
slot. When installed properly, not all DIMMs or HPE Persistent Memory modules will face in the
same direction.
CAUTION: Electrostatic discharge can damage electronic components. Be sure you are properly
grounded before beginning this procedure.
CAUTION: Failure to properly handle HPE Persistent Memory modules can damage the
component and the system board connector.
IMPORTANT: Hewlett Packard Enterprise recommends that you implement best practice
configurations for high availability (HA) such as clustered configurations.
2. Power down the server.
3. Do one of the following:
•Extend the server from the rack.
•Remove the server from the rack, if necessary.
4. Place the server on a flat, level work surface.
5. Remove the access panel on page 49.
6. Remove all components necessary to access the DIMM slots.
7. Install the HPE Persistent Memory module.
8. Install any components removed to access the DIMM slots.
9.Install the access panel.
10. Slide or install the server into the rack.
11. If removed, reconnect all power cables.
12. Power up the server.
13. Configure the server for HPE Persistent Memory.
For more information, see Configuring the server for HPE Persistent Memory on page 90.
Configuring the server for HPE Persistent Memory
After installing HPE Persistent Memory modules, configure the server for HPE Persistent Memory.
IMPORTANT: Always follow recommendations from your software application provider for high-availability best practices to ensure maximum uptime and data protection.
A number of configuration tools are available, including:
•UEFI System Utilities—Access System Utilities through the Remote Console to configure the server by
pressing the F9 key during POST.
•iLO RESTful API—Use the iLO RESTful API through tools such as the RESTful Interface Tool (ilorest) or
other third-party tools.
•HPE Persistent Memory Management Utility—The HPE Persistent Memory Management Utility is a
desktop application used to configure the server for HPE Persistent Memory, as well as evaluate and
monitor the server memory configuration layout.
For more information, see the HPE Persistent Memory User Guide on the Hewlett Packard Enterprise website
(http://www.hpe.com/info/persistentmemory-docs).
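For a scripted view of the resulting configuration, the hedged sketch below walks the standard Redfish Memory collection that iLO 5 exposes and prints each module's type and capacity. The endpoint layout follows the common Redfish pattern; verify paths and property names against your firmware's API documentation.

```python
# Hedged sketch: list memory modules through the iLO Redfish Memory
# collection. Paths follow the standard Redfish pattern and are
# assumptions about this particular firmware.
import requests

ILO = "https://ilo-hostname"   # placeholder address
AUTH = ("admin", "password")   # placeholder credentials

base = f"{ILO}/redfish/v1/Systems/1/Memory"
members = requests.get(base, auth=AUTH, verify=False).json()["Members"]

for member in members:
    dimm = requests.get(f"{ILO}{member['@odata.id']}",
                        auth=AUTH, verify=False).json()
    # MemoryType distinguishes DRAM DIMMs from persistent memory modules.
    print(dimm.get("DeviceLocator"), dimm.get("MemoryType"),
          dimm.get("CapacityMiB"))
```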
Controller options
The server supports the following storage controllers:
•Embedded controllers
Enabled through System Utilities and configured through HPE Smart Storage Administrator (Intelligent
Provisioning)
•Type-a controllers
Type-a controllers install in the type-a smart array connector.
•Type-p controllers
Type-p controllers install in a PCIe expansion slot.
Installing a storage controller
Prerequisites
Before you perform this procedure, make sure that you have the following items available:
The components included with the hardware option kit
Procedure
1. Power down the server.
2. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
3. Do one of the following:
•Extend the server from the rack.
•Remove the server from the rack.
4. Remove the access panel.
5. Do one of the following:
•Remove the air baffle.
•If installed, remove the 4LFF midplane drive cage.
6. Do one of the following:
•For Type-a Smart Array controllers, install the controller into the Smart Array connector.
•For Type-p Smart Array controllers, install the controller into an expansion slot.
7. Cable the controller.
The installation is complete.
Installing an HPE Smart Array P824i-p MR Gen10 controller in a configured
server
Procedure
1. Back up data on the system.
2. Close all applications.
3. Update the server firmware if it is not the latest revision.
4. Do one of the following:
•If the new Smart Array is the new boot device, install the device drivers.
•If the new Smart Array is not the new boot device, go to the next step.
NOTE: If a logical drive was created on a Smart Array SR controller, you cannot boot from that drive
when it is attached to a Smart Array MR controller.
5. Ensure that users are logged off and all tasks are completed on the server.
6. Power down the server.
CAUTION: In systems that use external data storage, be sure that the server is the first unit to be
powered down and the last to be powered back up. Taking this precaution ensures that the system
does not erroneously mark the drives as failed when the server is powered up.
7. Power down all peripheral devices that are attached to the server.
8. Disconnect the power cord from the power source.
9. Disconnect the power cord from the server.
10. Remove or open the access panel.
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal
system components to cool before touching them.
11. Remove the riser.
12. Select an available x8 or larger PCIe expansion slot.
A x8 physical size slot is required, even though the slot width may be electrically x4 or x1. Hewlett
Packard Enterprise recommends using a slot that is electrically x8.
13. Remove the slot cover.
Save the retaining screw, if one is present.
14. Slide the controller along the slot alignment guide, if one is present, and then press the board firmly into
the expansion slot so that the contacts on the board edge are seated properly in the slot.
15. Secure the controller in place with the retaining screw. If the slot alignment guide has a latch (near the
rear of the board), close the latch.
16. Connect the controller backup power cable.
IMPORTANT: To enable SmartCache or CacheCade in a P-class type-p Smart Array controller, you
must:
•Connect the controller backup power cable to the controller backup power connector on the
system or riser board.
•Connect the energy pack cable to the energy pack connector on the system board.
17. Connect storage devices to the controller.
For cabling information, see the server user guide.
18. Install the HPE Smart Storage Battery or HPE Smart Storage Hybrid Capacitor.
19. Reinstall the riser.
20. Connect peripheral devices to the server.
21. Connect the power cord to the server.
22. Connect the power cord to the power source.
23. Power up all peripheral devices.
24. Power up the server.
Array and controller configuration
During the initial provisioning of the server, you must configure the controller using the Smart Array
configuration utility in UEFI System Utilities.
After the initial provisioning of the server, you can use any of the following options to configure the arrays and
controllers:
•UEFI System Utilities
•HPE MR Storage Administrator
•StorCLI
HPE MR Storage Administrator and StorCLI are available in the Service Pack for ProLiant (SPP).
For more information about using each configuration utility, see the documentation for the configuration utility.
NOTE:
•Any RAID configuration created for the HPE Smart Array MR controller is not available to HPE Smart
Array SR controllers.
•The message "Data Protection disabled" in the logical drive properties can be ignored as it refers to a
feature not currently supported by the HPE MR Storage Administrator product.
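Since StorCLI is a command-line tool, it is easy to drive from a script. The sketch below shells out to a common StorCLI summary command ("/c0 show" for the first controller); treat the exact syntax as an assumption and check the StorCLI reference shipped with your SPP version.

```python
# Hedged sketch: query an HPE Smart Array MR controller with StorCLI.
# "storcli" must be on PATH (installed with the SPP); "/c0 show" is a
# common summary command, but verify syntax against the StorCLI reference.
import subprocess

result = subprocess.run(["storcli", "/c0", "show"],
                        capture_output=True, text=True, check=True)
print(result.stdout)
```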
Installing a Universal Media Bay
Prerequisites
Before you perform this procedure, make sure that you have the following items available:
•The components included with the hardware option kit
•T-10 Torx screwdriver
Procedure
1. Power down the server.
2. Remove all power:
a. Disconnect each power cord from the power source.
b. Disconnect each power cord from the server.
3. Do one of the following:
•Extend the server from the rack.
•Remove the server from the rack.
4. Remove the access panel.
5. Remove the air baffle.
6. Remove the fan cage.
7. Remove the bay blank.
8. Route the USB and video cables through the opening.
9. If installing a two-bay SFF front drive cage, install the drive cage.
10. Install the universal media bay.
11. (Optional) Install the optical disk drive.
12. Connect the cables.
13. Install the fan cage.
14. Install the air baffle.
15. Install the access panel.
16. Slide the server into the rack.
17. Connect each power cord to the server.
18. Connect each power cord to the power source.
19. Power up the server.
The installation is complete.
Drive cage options
Installing a front 8NVMe SSD Express Bay drive cage
Observe the following:
•The drive cage can be installed in any box. This procedure covers installing the drive cage in box 1.
•When installing in box 1, the NVMe riser must be installed in the tertiary PCIe slot.
•When installing in box 2, the NVMe riser must be installed in the secondary PCIe slot.
•When installing in box 3, the NVMe riser must be installed in the primary PCIe slot.
Prerequisites
An associated NVMe riser and high-performance fans are required when installing this option.
Procedure
1. Observe the following alerts.
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal
system components to cool before touching them.
CAUTION: To prevent damage to electrical components, properly ground the server before
beginning any installation procedure. Improper grounding can cause ESD.
2. Power down the server.
3. Do one of the following:
•Extend the server from the rack.
•Remove the server from the rack.
4. Remove the access panel.
5. Remove the air baffle.
6. Remove the fan cage.
7. Remove the blank.
8. Install the drive cage:
a. Remove all drives and drive blanks.
b. Install the drive cage.
9. Install the associated NVMe riser.
10. Connect the power cable to the drive backplane power connector.
11. Connect the data cables from the drive backplane to the NVMe riser.
12. Install drives or drive blanks.
The installation is complete.
Installing a front 6SFF SAS/SATA + 2NVMe Premium drive cage
The drive cage can be installed in any box. This procedure covers installing the drive cage in box 1.
Prerequisites
A storage controller and high-performance fans are required when installing this drive cage.
Procedure
1. Observe the following alerts.
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal
system components to cool before touching them.
CAUTION: To prevent damage to electrical components, properly ground the server before
beginning any installation procedure. Improper grounding can cause ESD.
2. Power down the server.
3. Do one of the following:
•Extend the server from the rack.
•Remove the server from the rack.
4. Remove the access panel.
5. Remove the air baffle.
6. Remove the fan cage.
7. Remove the blank.
8. Install the drive cage:
a. If drive blanks are installed in the drive cage assembly, remove the drive blanks. Retain the drive
blanks for use in empty drive bays.
b. Install the drive cage.
9. Connect the power cable.
10. Install a storage controller.
11. Connect the data cables from the drive backplane to the controller.
12. Install drives or drive blanks.
The installation is complete.
Installing airflow labels
When an Express Bay drive cage is installed, airflow labels might be required:
Prerequisites
Before you perform this procedure, make sure that you have the following items available:
The components included with the hardware option kit
Procedure
•If an eight-bay SFF drive cage is installed in box 1, then airflow labels are not required.
•If a blank is installed in box 1, replace it with the blank that comes with the kit.
•If a Universal Media Bay is installed in box 1, do one of the following:
◦If the 2SFF drive cage is not installed, then install airflow labels as shown.