This document contains setup, installation, and configuration information for the HPE Apollo
2000 Chassis. This document is for the person who installs, administers, and troubleshoots
the system. Hewlett Packard Enterprise assumes that you are qualified in the servicing of
computer equipment and trained in using safe practices when dealing with hazardous energy
levels.
Part Number: 879112-003
Published: June 2018
Edition: 3
Copyright 2017, 2018 Hewlett Packard Enterprise Development LP
Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett
Packard Enterprise products and services are set forth in the express warranty statements accompanying
such products and services. Nothing herein should be construed as constituting an additional warranty.
Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained
herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession,
use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer
Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government
under vendor's standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard
Enterprise has no control over and is not responsible for information outside the Hewlett Packard
Enterprise website.
Acknowledgments
Intel®, Itanium®, Pentium®, Intel Inside®, and the Intel Inside logo are trademarks of Intel Corporation in
the United States and other countries.
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the
United States and/or other countries.
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
Java® and Oracle® are registered trademarks of Oracle and/or its affiliates.
UNIX® is a registered trademark of The Open Group.
Contents
HPE Apollo 2000 Gen10 System
Planning the installation
Acronyms and abbreviations
HPE Apollo 2000 Gen10 System
Introduction
The HPE Apollo 2000 Gen10 System consists of a chassis and servers. There are four chassis options
with different storage configurations. To ensure proper thermal cooling, all server tray slots on the chassis
must be populated with servers or server tray blanks.
Chassis
• HPE Apollo r2200 Gen10 Chassis (12 low-profile LFF model)
• HPE Apollo r2600 Gen10 Chassis (24 SFF model, supports a maximum of 24 SFF SmartDrives or a mix of 16 SFF SmartDrives and 8 NVMe drives)
• HPE Apollo r2800 Gen10 Chassis with 16 NVMe
• HPE Apollo r2800 Gen10 Chassis (24 SFF model with storage expander backplane)
Servers
•HPE ProLiant XL170r Gen10 Server (1U)
•HPE ProLiant XL190r Gen10 Server (2U)
The chassis supports combinations of 1U and 2U servers. One chassis can support a maximum of the following:
• Four 1U servers
• Two 1U servers and one 2U server
• Two 2U servers
For more information about product features, specifications, options, configurations, and compatibility, see the product QuickSpecs on the Hewlett Packard Enterprise website.
Planning the installation
Safety and regulatory compliance
For important safety, environmental, and regulatory information, see Safety and Compliance Information for Server, Storage, Power, Networking, and Rack Products, available at the Hewlett Packard Enterprise website (http://www.hpe.com/support/Safety-Compliance-EnterpriseProducts).
Product QuickSpecs
For more information about product features, specifications, options, configurations, and compatibility, see
the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).
Configuration guidelines
Operate the chassis only when a device or blank is installed in all device bays. Before powering up the
chassis, be sure to do the following:
•Install a drive or drive blank into all drive bays.
•Install a server or server tray blank into all server bays.
•Install a power supply or power supply blank into all power supply bays.
Validate power and cooling requirements based on location and installed components.
Power requirements
Installation of this equipment must comply with local and regional electrical regulations governing the installation of IT equipment by licensed electricians. This equipment is designed to operate in installations covered by NFPA 70, 1999 Edition (National Electrical Code) and NFPA 75, 1992 (Standard for the Protection of Electronic Computer/Data Processing Equipment). For the electrical power ratings of options, see the product rating label or the user documentation supplied with that option.
WARNING: To reduce the risk of personal injury, fire, or damage to the equipment, do not overload
the AC supply branch circuit that provides power to the rack. Consult the electrical authority having
jurisdiction over wiring and installation requirements of your facility.
CAUTION: Protect the server from power fluctuations and temporary interruptions with a regulating
UPS. This device protects the hardware from damage caused by power surges and voltage spikes
and keeps the server in operation during a power failure.
HPE Apollo Platform Manager
HPE Apollo Platform Manager, formerly named HPE Advanced Power Manager, is a point of contact for
system administration.
To install, configure, and access HPE APM, see the HPE Apollo Platform Manager User Guide on the
Hewlett Packard Enterprise website (http://www.hpe.com/support/APM_UG_en).
Hot-plug power supply calculations
For more information on the hot-plug power supply and calculators to determine server power
consumption in various system configurations, see the Hewlett Packard Enterprise Power Advisor website
(http://www.hpe.com/info/poweradvisor/online).
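The Power Advisor performs this modeling for you. As a rough illustration of the underlying arithmetic only, the following Python sketch sums assumed per-component draws and checks them against a single power supply rating; every wattage figure here is a hypothetical placeholder, not an HPE specification.

# Rough chassis power estimate. All wattage figures are hypothetical
# placeholders -- use HPE Power Advisor for actual planning numbers.

NODE_W = 350            # assumed draw per fully loaded server node
FAN_W = 15              # assumed draw per fan module
DRIVE_W = 10            # assumed draw per hot-plug drive
PSU_RATING_W = 1400     # assumed rating of a single power supply

def chassis_draw(nodes: int, fans: int, drives: int) -> int:
    """Sum the estimated draw of the powered components in the chassis."""
    return nodes * NODE_W + fans * FAN_W + drives * DRIVE_W

total = chassis_draw(nodes=4, fans=8, drives=24)
print(f"Estimated draw: {total} W")

# With two power supplies installed, redundancy holds only if one supply
# can carry the entire load by itself.
print("Survives single PSU failure:", total <= PSU_RATING_W)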
Compiling the documentation
The documentation, while delivered individually and in various formats, works as a system. Consult these
documents before attempting installation. These documents provide the required important safety
information and decision-making steps for the configuration. To access these documents, see the Hewlett
Packard Enterprise website.
Warnings and cautions
WARNING: To reduce the risk of personal injury or damage to equipment, heed all warnings and
cautions throughout the installation instructions.
WARNING: To reduce the risk of personal injury or damage to the equipment, be sure that:
•The rack is bolted to the floor using the concrete anchor kit.
•The leveling feet extend to the floor.
•The full weight of the rack rests on the leveling feet.
•The racks are coupled together in multiple rack installations.
•Only one component is extended at a time. If more than one component is extended, a rack
might become unstable.
WARNING: The chassis is very heavy. To reduce the risk of personal injury or damage to the
equipment:
•Observe local occupational health and safety requirements and guidelines for manual material
handling.
•Remove all servers from the chassis before installing or moving the chassis.
•Use caution and get help to lift and stabilize the chassis during installation or removal, especially
when the chassis is not fastened to the rack.
WARNING: To reduce the risk of personal injury or damage to the equipment, you must adequately
support the chassis during installation and removal.
WARNING: Install the chassis starting from the bottom of the rack and work your way up the rack.
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal
system components to cool before touching them.
WARNING: To reduce the risk of electric shock or damage to the equipment:
•Never reach inside the chassis while the system is powered up.
•Perform service on system components only as instructed in the user documentation.
CAUTION: Always be sure that equipment is properly grounded and that you follow proper grounding procedures before beginning any installation procedure. Improper grounding can result in ESD damage to electronic components. For more information, see "Electrostatic discharge."
CAUTION: When performing non-hot-plug operations, you must power down the server and/or the
system. However, it may be necessary to leave the server powered up when performing other
operations, such as hot-plug installations or troubleshooting.
CAUTION: Do not operate the server for long periods with the access panel open or removed.
Operating the server in this manner results in improper airflow and improper cooling that can lead to
thermal damage.
Space and airflow requirements
Installation of the chassis is supported in 1200 mm Gen10 racks.
To allow for servicing and adequate airflow, observe the following space and airflow requirements when
deciding where to install a rack:
•Leave a minimum clearance of 63.5 cm (25 in) in front of the rack.
•Leave a minimum clearance of 76.2 cm (30 in) behind the rack.
•Leave a minimum clearance of 121.9 cm (48 in) from the back of the rack to the back of another rack
or row of racks.
Front and rear rack doors must be adequately ventilated to allow ambient room air to enter the cabinet,
and the rear door must be adequately ventilated to allow the warm air to escape from the cabinet.
CAUTION: To prevent improper cooling and damage to the equipment, do not block the ventilation
openings.
When vertical space in the rack is not filled by a chassis or rack component, the gaps between the
components cause changes in airflow through the rack and across the components. Cover all gaps with
blanking panels to maintain proper airflow.
CAUTION: Always use blanking panels to fill empty vertical spaces in the rack. This arrangement
ensures proper airflow. Using a rack without blanking panels results in improper cooling that can
lead to thermal damage.
CAUTION: If a third-party rack is used, observe the following additional requirements to ensure
adequate airflow and to prevent damage to the equipment:
•Front and rear doors—If the 42U rack includes closing front and rear doors, you must allow
5,350 sq cm (830 sq in) of holes evenly distributed from top to bottom to permit adequate airflow
(equivalent to the required 64 percent open area for ventilation).
•Side—The clearance between the installed rack component and the side panels of the rack must
be a minimum of 7 cm (2.75 in).
Temperature requirements
To ensure continued safe and reliable equipment operation, install or position the rack in a well-ventilated,
climate-controlled environment.
The operating temperature inside the rack is always higher than the room temperature and is dependent
on the configuration of equipment in the rack. Check the TMRA for each piece of equipment before
installation.
CAUTION: To reduce the risk of damage to the equipment when installing third-party options:
•Do not permit optional equipment to impede airflow around the chassis or to increase the internal
rack temperature beyond the maximum allowable limits.
•Do not exceed the manufacturer’s TMRA.
Grounding requirements
•The building installation must provide a means of connection to protective earth.
•The equipment must be connected to that means of connection.
•A service person must check whether the socket-outlet from which the equipment is to be powered
provides a connection to the building protective earth. If the outlet does not provide a connection, the
service person must arrange for the installation of a protective earthing conductor from the separate
protective earthing terminal to the protective earth wire in the building.
Connecting a DC power cable to a DC power source
WARNING: To reduce the risk of electric shock or energy hazards:
•This equipment must be installed by trained service personnel, as defined by the NEC and IEC
60950-1, Second Edition, the standard for Safety of Information Technology Equipment.
•Connect the equipment to a reliably grounded Secondary circuit source. A Secondary circuit has
no direct connection to a Primary circuit and derives its power from a transformer, converter, or
equivalent isolation device.
•The branch circuit overcurrent protection must be rated 20 A.
WARNING: When installing a DC power supply, the ground wire must be connected before the
positive or negative leads.
WARNING: Remove power from the power supply before performing any installation steps or
maintenance on the power supply.
CAUTION: The server equipment connects the earthed conductor of the DC supply circuit to the
earthing conductor at the equipment. For more information, see the documentation that ships with
the power supply.
CAUTION: If the DC connection exists between the earthed conductor of the DC supply circuit and
the earthing conductor at the server equipment, the following conditions must be met:
•This equipment must be connected directly to the DC supply system earthing electrode
conductor or to a bonding jumper from an earthing terminal bar or bus to which the DC supply
system earthing electrode conductor is connected.
•This equipment should be located in the same immediate area (such as adjacent cabinets) as
any other equipment that has a connection between the earthed conductor of the same DC
supply circuit and the earthing conductor, and also the point of earthing of the DC system. The
DC system should be earthed elsewhere.
•The DC supply source is to be located within the same premises as the equipment.
•Switching or disconnecting devices should not be in the earthed circuit conductor between the
DC source and the point of connection of the earthing electrode conductor.
To connect a DC power cable to a DC power source:
1. Cut the DC power cord ends no shorter than 150 cm (59.06 in).
2. If the power source requires ring tongues, use a crimping tool to install the ring tongues on the power
cord wires.
IMPORTANT: The ring terminals must be UL approved and accommodate 12 gauge wires.
IMPORTANT: The minimum nominal thread diameter of a pillar or stud type terminal must be 3.5
mm (0.138 in); the diameter of a screw type terminal must be 4.0 mm (0.157 in).
3. Stack each same-colored pair of wires and then attach them to the same power source. The power
cord consists of three wires (black, red, and green).
For more information, see the documentation that ships with the power supply.
Identifying components and LEDs
System components
Item  Description
1  RCM module (optional)
2  Power supply
3  HPE Apollo 2000 Gen10 Chassis
4  Fan
5  HPE ProLiant XL190r Gen10 Server
6  HPE ProLiant XL170r Gen10 Server
7  Server tray blank
Front panel components
HPE Apollo r2200 Gen10 Chassis
Item  Description
1  Left bezel ear
2  Low-profile LFF hot-plug drives
3  Right bezel ear
4  Chassis serial label pull tab
HPE Apollo r2600 Gen10 Chassis
Item  Description
1  Left bezel ear
2  SFF hot-plug drives
3  Right bezel ear
4  Chassis serial label pull tab
5  Non-removable bezel blank
HPE Apollo r2800 Gen10 Chassis with 16 NVMe
Item  Description
1  Left bezel ear
2  NVMe drives
3  Right bezel ear
4  Chassis serial label pull tab
5  Non-removable bezel blanks
HPE Apollo r2800 Gen10 Chassis (24 SFF model with storage expander backplane)
Item  Description
1  Left bezel ear
2  SFF hot-plug drives
3  Right bezel ear
4  Chassis serial label pull tab
5  Expander daughter module with power LED¹

1. When the LEDs described in this table flash simultaneously, a power fault has occurred. For more information, see Front panel LEDs.
Front panel LEDs
Item  Description
1  Power On/Standby button and system power LED (Server 1)¹
2  Power On/Standby button and system power LED (Server 2)¹
3  Health LED (Server 2)¹
4  Health LED (Server 1)¹
5  Health LED (Server 3)¹
6  Health LED (Server 4)¹
7  Power On/Standby button and system power LED (Server 4)¹
8  UID button/LED¹
9  Power On/Standby button and system power LED (Server 3)¹

Power On/Standby button and system power LED status:
• Solid green = System on
• Flashing green = Performing power on sequence
• Solid amber = System in standby
• Off = No power present²

Health LED status:
• Solid green = Normal
• Flashing amber = System degraded
• Flashing red = System critical³

UID button/LED status:
• Solid blue = Activated
• Flashing blue:
  ◦ 1 flash per second = Remote management or firmware upgrade in progress
  ◦ 4 flashes per second = iLO manual soft reboot sequence initiated
  ◦ 8 flashes per second = iLO manual hard reboot sequence in progress
• Off = Deactivated

1. When the LEDs described in this table flash simultaneously, a power fault has occurred.
2. Facility power is not present, the power cord is not attached, no power supplies are installed, a power supply failure has occurred, or the front I/O cable is disconnected.
3. If the health LED indicates a degraded or critical state, review the system IML or use iLO to review the system health status.
24 SFF with expander daughter board LEDs
Item  Description
1  Expander daughter board power good LED
   • Solid green = Expander daughter board power is good
   • Off = Expander daughter board power fault LED will be on
2  Expander daughter board power fault LED
   • Solid yellow = Expander daughter board power fault has occurred
   • Off = Expander daughter board power good LED will be on

Power fault LEDs
The following table provides a list of power fault LED patterns and the subsystems that are affected. Not all power faults are used by all servers.
Subsystem  LED behavior
System board  1 flash
Processor  2 flashes
Memory  3 flashes
Riser board PCIe slots  4 flashes
FlexibleLOM  5 flashes
Removable HPE Flexible Smart Array controller  6 flashes
System board PCIe slots  7 flashes
Power backplane or storage backplane  8 flashes
Power supply  9 flashes
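For scripted health checks, the flash patterns above can be captured in a simple lookup table. The following minimal Python sketch is transcribed directly from the table; it is an illustration, not an HPE tool.

# Lookup transcribed from the power fault LED table above.
# Not all flash patterns are used by all servers.
POWER_FAULT_CODES = {
    1: "System board",
    2: "Processor",
    3: "Memory",
    4: "Riser board PCIe slots",
    5: "FlexibleLOM",
    6: "Removable HPE Flexible Smart Array controller",
    7: "System board PCIe slots",
    8: "Power backplane or storage backplane",
    9: "Power supply",
}

def subsystem_for(flashes: int) -> str:
    """Map an observed flash count to the affected subsystem."""
    return POWER_FAULT_CODES.get(flashes, "Unknown flash pattern")

print(subsystem_for(6))  # Removable HPE Flexible Smart Array controller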
Rear panel components
Four 1U servers
Item  Description
1  Server 4
2  Server 3
3  Power supply 2
4  RCM module (optional)
5  Power supply 1
6  Server 2
7  Server 1
Two 2U servers
Item  Description
1  Server 3
2  Power supply 2
3  RCM module (optional)
4  Power supply 1
5  Server 1
Power supply LEDs
Item  Description
1  Power supply 1 LED
2  Power supply 2 LED

Status (both LEDs):
• Solid green = Normal
• Off = One or more of the following conditions exists:
  ◦ Power is unavailable
  ◦ Power supply failed
  ◦ Power supply is in standby mode
  ◦ Power supply error

Fan locations
Drive bay numbering
IMPORTANT: Depending on the chassis configuration and the components installed in the servers,
it might be necessary to limit the number of drives installed in the chassis. For more information, see
"Temperature requirements" in the server user guide.
Apollo r2200 Gen10 Chassis (1U servers in AHCI mode)
Item  Description
1  Server 1 drive bays
2  Server 2 drive bays
3  Server 3 drive bays
4  Server 4 drive bays
Apollo r2200 Gen10 Chassis (2U servers in AHCI mode)
Item  Description
1  Server 1 drive bays
3  Server 3 drive bays
Apollo r2200 Gen10 Chassis (1U and 2U servers using embedded SATA S100i or a Smart Array
controller)
One 1U server corresponds to a maximum of three low-profile LFF hot-plug drives.
• Server 1 corresponds to drive bays 1-1 through 1-3.
• Server 2 corresponds to drive bays 2-1 through 2-3.
• Server 3 corresponds to drive bays 3-1 through 3-3.
• Server 4 corresponds to drive bays 4-1 through 4-3.
One 2U server corresponds to a maximum of six low-profile LFF hot-plug drives.
• Server 1 corresponds to drive bays 1-1 through 2-3.
• Server 3 corresponds to drive bays 3-1 through 4-3.
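Because the labels follow a consistent <server>-<bay> pattern, the mapping can be expressed programmatically. The following minimal Python sketch reproduces the r2200 convention described above; the function names are illustrative only.

# Sketch of the r2200 Gen10 drive bay labels described above: each 1U slot
# owns three low-profile LFF bays labeled <server>-<bay>, and a 2U server
# also takes over the bays of the next slot.

def bays_for_1u(server: int) -> list:
    return [f"{server}-{bay}" for bay in range(1, 4)]

def bays_for_2u(server: int) -> list:
    # A 2U server in slot 1 spans bays 1-1 through 2-3;
    # a 2U server in slot 3 spans bays 3-1 through 4-3.
    return bays_for_1u(server) + bays_for_1u(server + 1)

print(bays_for_1u(2))  # ['2-1', '2-2', '2-3']
print(bays_for_2u(3))  # ['3-1', '3-2', '3-3', '4-1', '4-2', '4-3']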
Apollo r2600 Gen10 Chassis (1U servers in AHCI mode)
Item  Description  Supported drives
1  Server 1 drive bays  Drive bays 1, 2, 3, and 4 support SFF SmartDrives only. Drive bays 5 and 6 support both SFF SmartDrives and NVMe drives.
2  Server 2 drive bays  Drive bays 1 and 2 support both SFF SmartDrives and NVMe drives. Drive bays 3, 4, 5, and 6 support SFF SmartDrives only.
3  Server 3 drive bays  Drive bays 1, 2, 3, and 4 support SFF SmartDrives only. Drive bays 5 and 6 support both SFF SmartDrives and NVMe drives.
4  Server 4 drive bays  Drive bays 1 and 2 support both SFF SmartDrives and NVMe drives. Drive bays 3, 4, 5, and 6 support SFF SmartDrives only.
Apollo r2600 Gen10 Chassis (2U servers in AHCI mode)
Item  Description  Supported drives
1  Server 1 drive bays  Drive bays 5, 6, 9, and 10 support both SFF SmartDrives and NVMe drives. All other drive bays support SFF SmartDrives only.
3  Server 3 drive bays  Drive bays 5, 6, 9, and 10 support both SFF SmartDrives and NVMe drives. All other drive bays support SFF SmartDrives only.
Apollo r2600 Gen10 Chassis (1U and 2U servers using the embedded SATA S100i or a Smart Array
controller)
Drive bays 1-5, 1-6, 2-1, 2-2, 3-5, 3-6, 4-1, and 4-2 support both SFF SmartDrives and NVMe drives. All other drive bays support SFF SmartDrives only.
One 1U server corresponds to a maximum of six drives.
• Server 1 corresponds to drive bays 1-1 through 1-6.
• Server 2 corresponds to drive bays 2-1 through 2-6.
• Server 3 corresponds to drive bays 3-1 through 3-6.
• Server 4 corresponds to drive bays 4-1 through 4-6.
One 2U server corresponds to a maximum of twelve drives.
• Server 1 corresponds to drive bays 1-1 through 2-6.
• Server 3 corresponds to drive bays 3-1 through 4-6.
Apollo r2800 Gen10 Chassis with 16 NVMe
IMPORTANT: The HPE Apollo r2800 Gen10 Chassis with 16 NVMe does not support servers using
the embedded SATA HPE Dynamic Smart Array S100i Controller or any type-p plug-in Smart Array
Controller with internal ports and cables.
One 1U server corresponds to a maximum of four NVMe drives.
• Server 1 corresponds to drive bays 1-1 through 1-4.
• Server 2 corresponds to drive bays 2-1 through 2-4.
• Server 3 corresponds to drive bays 3-1 through 3-4.
• Server 4 corresponds to drive bays 4-1 through 4-4.
One 2U server corresponds to a maximum of eight NVMe drives.
• Server 1 corresponds to drive bays 1-1 through 2-4.
• Server 3 corresponds to drive bays 3-1 through 4-4.
Apollo r2800 Gen10 Chassis (24 SFF with storage expander backplane)
The factory default configuration evenly distributes the 24 SFF drive bays among the server nodes in the HPE Apollo r2800 Gen10 Chassis.
For detailed information and examples of drive bay mapping configuration changes in the HPE Apollo r2800 Gen10 Chassis, see the iLO REST APIs on GitHub, available through the Hewlett Packard Enterprise website.
NOTE: Although the drive bays are internally mapped as described below, the Redfish response from the RESTful Interface Tool continues to display a fixed six drive bays per node, regardless of whether a 1U or 2U node is inserted.
The HPE Apollo r2800 Chassis, featuring the storage expander backplane, supports assigning drive bays to specific server nodes. This feature provides secure, remote configuration through the iLO Redfish interface; an illustrative configuration sketch follows the bay allocation lists below. Deploying a drive bay mapping configuration requires an iLO user account with the "Configure iLO Settings" privilege.
Drive bay mapping configuration changes can be made from any server node. They take effect after all server nodes in the HPE Apollo r2800 Chassis are powered off and the chassis firmware resets the storage expander backplane. All nodes must remain powered off for at least 5 seconds after the configuration changes are applied. The server nodes can then be restarted remotely through the iLO remote interface or locally by pressing the power button for each node.
This feature requires the following minimum firmware versions:
• Apollo 2000 System Chassis firmware version 1.2.10 or later
• Storage Expander firmware version 1.0 or later
• iLO firmware version 1.20 or later
Six drive bays are allocated to each 1U node.
• Server 1 corresponds to drive bays 1 through 6
• Server 2 corresponds to drive bays 7 through 12
• Server 3 corresponds to drive bays 13 through 18
• Server 4 corresponds to drive bays 19 through 24
Twelve drive bays are allocated to each 2U node.
• Server 1 corresponds to drive bays 1 through 12
• Server 3 corresponds to drive bays 13 through 24
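As a loose illustration of what such a remote bay-mapping change could look like, the following Python sketch sends a Redfish PATCH. The resource path, the Oem/Hpe "BayMapping" property, and the payload shape are hypothetical placeholders rather than the documented iLO schema; consult the iLO REST API documentation for the actual resource and property names.

# Hypothetical sketch only: the Redfish URI and the "BayMapping" property
# below are illustrative placeholders, not the documented iLO schema.
# Requires an iLO account with the "Configure iLO Settings" privilege.
import requests

ILO_HOST = "https://ilo-node1.example.com"   # hypothetical iLO address
AUTH = ("admin", "password")                  # hypothetical credentials

# Hypothetical payload assigning twelve bays each to servers 1 and 3
# (a two-node 2U configuration).
payload = {
    "Oem": {
        "Hpe": {
            "BayMapping": {"Server1": list(range(1, 13)),
                           "Server3": list(range(13, 25))}
        }
    }
}

resp = requests.patch(
    f"{ILO_HOST}/redfish/v1/Chassis/1",  # placeholder resource path
    json=payload,
    auth=AUTH,
    verify=False,  # lab-only; validate certificates in production
)
resp.raise_for_status()

# Per the guide, the new mapping takes effect only after every node in the
# chassis is powered off for at least 5 seconds, allowing the chassis
# firmware to reset the storage expander backplane.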
Hot-plug drive LED definitions
SmartDrive hot-plug drive LED definitions
Item  Description
1  Locate
   • Solid blue = The drive is being identified by a host application.
   • Flashing blue = The drive carrier firmware is being updated or requires an update.
2  Activity ring LED
   • Rotating green = Drive activity.
   • Off = No drive activity.
3  Do not remove LED
   • Solid white = Do not remove the drive. Removing the drive causes one or more of the logical drives to fail.
   • Off = Removing the drive does not cause a logical drive to fail.
4  Drive status LED
   • Solid green = The drive is a member of one or more logical drives.
   • Flashing green = The drive is rebuilding or performing a RAID migration, stripe size migration, capacity expansion, or logical drive extension, or is erasing.
   • Flashing amber/green = The drive is a member of one or more logical drives and predicts the drive will fail.
   • Flashing amber = The drive is not configured and predicts the drive will fail.
   • Solid amber = The drive has failed.
   • Off = The drive is not configured by a RAID controller.
Low-profile LFF hot-plug drive LED definitions
Item  Definition
1  Fault/UID (amber/blue)
2  Online/Activity (green)
Online/Activity LED (green): On, off, or flashing
Fault/UID LED (amber/blue): Alternating amber and blue
Definition: One or more of the following conditions exist: the drive has failed, a predictive failure alert has been received for this drive, or the drive has been selected by a management application.

Online/Activity LED (green): On, off, or flashing
Fault/UID LED (amber/blue): Solid blue
Definition: One or both of the following conditions exist: the drive is operating normally, or the drive has been selected by a management application.

Online/Activity LED (green): On
Fault/UID LED (amber/blue): Flashing amber
Definition: A predictive failure alert has been received for this drive. Replace the drive as soon as possible.

Online/Activity LED (green): On
Fault/UID LED (amber/blue): Off
Definition: The drive is online but is not currently active.

Online/Activity LED (green): 1 flash per second
Fault/UID LED (amber/blue): Flashing amber
Definition: Do not remove the drive. Removing the drive might terminate the current operation and cause data loss. The drive is part of an array that is undergoing capacity expansion or stripe migration, but a predictive failure alert has been received for this drive. To minimize the risk of data loss, do not remove the drive until the expansion or migration is complete.

Online/Activity LED (green): 1 flash per second
Fault/UID LED (amber/blue): Off
Definition: Do not remove the drive. Removing the drive might terminate the current operation and cause data loss. The drive is rebuilding, erasing, or is part of an array that is undergoing capacity expansion or stripe migration.

Online/Activity LED (green): 4 flashes per second
Fault/UID LED (amber/blue): Flashing amber
Definition: The drive is active but a predictive failure alert has been received for this drive. Replace the drive as soon as possible.

Online/Activity LED (green): 4 flashes per second
Fault/UID LED (amber/blue): Off
Definition: The drive is active and is operating normally.

Online/Activity LED (green): Off
Fault/UID LED (amber/blue): Solid amber
Definition: A critical fault condition has been identified for this drive and the controller has placed it offline. Replace the drive as soon as possible.

Online/Activity LED (green): Off
Fault/UID LED (amber/blue): Flashing amber
Definition: A predictive failure alert has been received for this drive. Replace the drive as soon as possible.

Online/Activity LED (green): Off
Fault/UID LED (amber/blue): Off
Definition: The drive is offline, a spare, or not configured as part of an array.

NVMe SSD components
The NVMe SSD is a PCIe bus device. A device attached to a PCIe bus cannot be removed without allowing the device and bus to complete and cease the signal/traffic flow.
CAUTION: Do not remove an NVMe SSD from the drive bay while the Do not remove LED is
flashing. The Do not remove LED flashes to indicate that the device is still in use. Removing the
NVMe SSD before the device has completed and ceased signal/traffic flow can cause loss of data.
Item  Description
1  Locate LED
   • Solid blue = The drive is being identified by a host application.
   • Flashing blue = The drive carrier firmware is being updated or requires an update.
2  Activity ring LED
   • Rotating green = Drive activity
   • Off = No drive activity
3  Drive status LED
   • Solid green = The drive is a member of one or more logical drives.
   • Flashing green = The drive is rebuilding or performing a RAID migration, stripe size migration, capacity expansion, or logical drive extension, or is erasing.
   • Flashing amber/green = The drive is a member of one or more logical drives and predicts the drive will fail.
   • Flashing amber = The drive is not configured and predicts the drive will fail.
   • Solid amber = The drive has failed.
   • Off = The drive is not configured by a RAID controller.
4  Do Not Remove LED
   • Solid white = Do not remove the drive. Drive must be ejected from the PCIe bus prior to removal.
   • Flashing white = Ejection request pending
   • Off = Drive has been ejected
5  Power LED
   • Solid green = Do not remove the drive. Drive must be ejected from the PCIe bus prior to removal.
   • Flashing green = Ejection request pending
   • Off = Drive has been ejected
6  Power button
   Press to request PCIe ejection. The removal request can be denied by the:
   • RAID controller (one or more of the logical drives could fail)
   • Operating system
7  Do not remove button
   Press to open the release lever.
RCM module components
Item  Description
1  iLO connector
2  HPE APM 2.0 connector
3  iLO connector
RCM module LEDs
Item  Description
1  iLO activity LED
   • Green or flashing green = Network activity
   • Off = No network activity
2  iLO link LED
   • Green = Linked to network
   • Off = No network connection
3  iLO link LED
   • Green = Linked to network
   • Off = No network connection
4  iLO activity LED
   • Green or flashing green = Network activity
   • Off = No network activity
Installing the chassis
Installation overview
To set up and install the chassis:
1. Unpack the system.
2. Prepare the chassis for installation.
3. Install the rack rails and chassis into the rack.
4. Install hardware options into the servers.
5. Install the system components.
6. Cable the chassis.
Unpacking the system
Unpack the following hardware and prepare for installation:
•HPE Apollo 2000 Gen10 Chassis
•Rack rail kit
•System components and cabling
The following documents also ship with the HPE Apollo 2000 Gen10 Chassis:
•Start Here for Important Setup Information
•Safety, Compliance, and Warranty Information
Preparing the chassis for installation
If installing the Smart Storage Battery and redundant fan options, install these options before installing the
chassis into the rack.
Prerequisites
Before installing the chassis into the rack, Hewlett Packard Enterprise recommends removing the servers
and drives. Because a fully populated chassis is heavy, removing the servers facilitates moving and
installing the chassis.
Procedure
1. If installed, remove the bezel.
IMPORTANT: Label the drives before removing them. The drives must be returned to their
original locations.
2. Remove the hot-plug drives:
•SFF SmartDrive
•Low-profile LFF hot-plug drive
•NVMe hot-plug drive
3. Remove the server from the chassis:
a. Loosen the thumbscrew.
b. Pull back the handle.
CAUTION: To avoid damage to the server, always support the bottom of the server when
removing it from the chassis.
c. Remove the server:
•1U server
•2U server
Installing the Smart Storage Battery for the regular Power Supply
Procedure
1. If installing a Smart Storage Battery or redundant fan option, remove the access panel.
NOTE: After the Smart Storage Battery is installed, it might take up to two hours to charge. Features
requiring backup power are not enabled until the battery is fully charged.
NOTE: The Smart Storage Battery is supported with P408i-p and P408e-p controllers only. Due to thermal concerns in some configurations, Hewlett Packard Enterprise recommends removing the Smart Storage Battery if neither of these controllers is installed in the chassis.
2. To install the Smart Storage Battery, do the following:
a. Remove the battery holder.
NOTE: Depending on the power supply installed, the design of the Smart Storage battery and the
fan cage and module may be slightly different.
b. Install the Smart Storage Battery into the holder and route the cable.
c. Connect the battery cable to the power distribution board and install the holder into the chassis.
3. Install the access panel.
Installing the Smart Storage Battery for the H-Watt Power Supply
Procedure
1. To install the Smart Storage Battery, remove the bezel, the hot-plug drives, and the server from
the chassis.
2. Install the fan into the fan cage.
3. Connect the fan cage cable to the power distribution board connector and install the fan cage.
4. Install the Smart Storage Battery into the holder. Route the battery cable and connect it to the power
distribution board.
5. Install the battery holder.
6. Install the redundant fan option.
Installing the redundant fan option
Procedure
1. To install the redundant fan option, do the following:
NOTE: Depending on the power supply installed, the design of the Smart Storage Battery and the fan cage and module may be slightly different.
a. Disconnect the fan module cables for fans 1, 2, 3, and 4. Be careful not to remove any existing
adhesive tape.
b. Install the redundant fans.
c. Route the fan 1 and fan 5 cables through the grooves on the top of fan 5. Then secure the cables
on top of fan 5 with two strips of adhesive tape.
d. Repeat the previous step for fans 6, 7, and 8.
e. Connect all fan module cables.
2. Install the access panel.
Installing the chassis into the rack
WARNING: The chassis is very heavy. To reduce the risk of personal injury or damage to the
equipment:
•Observe local occupational health and safety requirements and guidelines for manual material
handling.
•Remove all servers from the chassis before installing or moving the chassis.
•Use caution and get help to lift and stabilize the chassis during installation or removal, especially
when the chassis is not fastened to the rack.
WARNING: To avoid risk of personal injury or damage to the equipment, do not stack anything on
top of rail-mounted equipment or use it as a work surface when extended from the rack.
WARNING: To reduce the risk of personal injury or damage to the equipment, be sure that:
•The rack is bolted to the floor using the concrete anchor kit.
•The leveling feet extend to the floor.
•The full weight of the rack rests on the leveling feet.
•The racks are coupled together in multiple rack installations.
•Only one component is extended at a time. If more than one component is extended, a rack
might become unstable.
WARNING: To reduce the risk of personal injury or equipment damage, be sure that the rack is
adequately stabilized before installing the chassis.
CAUTION: Always plan the rack installation so that the heaviest item is on the bottom of the rack.
Install the heaviest item first, and continue to populate the rack from the bottom to the top.
CAUTION: Be sure to keep the product parallel to the floor when installing the chassis. Tilting the
product up or down could result in damage to the slides.
CAUTION: Hewlett Packard Enterprise has not tested or validated this chassis with any third-party
racks. Before installing the chassis in a third-party rack, be sure to properly scope the limitations of
the rack. Before proceeding with the installation, consider the following:
•You must fully understand the static and dynamic load carrying capacity of the rack and be sure
that it can accommodate the weight of the chassis.
•Be sure sufficient clearance exists for cabling, installation and removal of the chassis, and
actuation of the rack doors.
IMPORTANT: When installing each chassis into the rack, be sure that the HPE Apollo Platform
Manager is at the top of the chassis to ensure proper orientation in the rack.
Installing the 2U rack rail kit
WARNING: To reduce the risk of personal injury or damage to the equipment, be sure that:
•The leveling jacks are extended to the floor.
•The full weight of the rack rests on the leveling jacks.
•The stabilizing feet are attached to the rack if it is a single-rack installation.
•The racks are coupled together in multiple-rack installations.
•Only one component is extended at a time. A rack may become unstable if more than one
component is extended for any reason.
WARNING: If you are going to use a lift, be sure to use a lift that can handle the load of the
component.
Procedure
1. Insert the 2U rack rails into the columns.
2. Secure the 2U rails with two panhead screws at the rear of the rack.
•Square-hole rack
•Round-hole rack
3. Install the chassis into the rack.
4. Tighten the thumbscrews.
5. If the rack is being shipped, install the shipping bracket for the top chassis only.
Installing hardware options into the server
To install hardware options into the server, see the server user guide on the Hewlett Packard Enterprise website.
Installing the system components
If components were removed during the chassis installation or additional components were ordered,
install each device using the procedures in this section. If you perform any of the procedures in this
section after powering on the chassis, ensure proper airflow by ensuring that each bay inside the chassis
and at the rear of the chassis is populated with either a component or a blank. For component-specific
replacement information, see the maintenance guides on the Hewlett Packard Enterprise website.
Installing a server
CAUTION: To ensure proper thermal cooling, all server tray slots must be populated with servers or
server tray blanks.
Procedure
1. Install the server.
•1U server
•2U server
Installing the server tray blank
CAUTION: To ensure proper thermal cooling, all server tray slots must be populated with servers or
server tray blanks.
Procedure
Install the server tray blank.
Installing a hot-plug AC power supply
Procedure
1. Access the product rear panel.
2. Remove the power supply blank.
3. Slide the power supply into the bay until it clicks into place.
4. Connect the power cord to the power supply.
5. To prevent accidental power cord disconnection when sliding the server in and out of the chassis,
secure the power cord in the strain relief strap attached to the power supply handle:
a. Unwrap the strain relief strap from the power supply handle.
CAUTION: Avoid tight bend radii to prevent damaging the internal wires of a power cord or a
server cable. Never bend power cords and server cables tight enough to cause a crease in
the sheathing.
b. Secure the power cord with the strain relief strap. Roll the extra length of the strap around the
power supply handle.
Installing a hot-plug DC power supply
The following input power cord option might be purchased from an authorized reseller:
J6X43A—HPE 12 AWG 48 V DC 3.0 m Power Cord
If you are not using an input power cord option, the power supply cabling should be made in consultation
with a licensed electrician and be compliant with local code.
If you are replacing the factory installed ground lug, use the KST RNB5-5 crimp terminal ring or
equivalent. Use an M5-0.80 x 8 screw to attach the ground lug to the power supply.
WARNING: To reduce the risk of electric shock, fire, and damage to the equipment, you must install
this product in accordance with the following guidelines:
•This power supply is intended only for installation in Hewlett Packard Enterprise servers located
in a restricted access location.
•This power supply is not intended for direct connection to the DC supply branch circuit. Only
connect this power supply to a power distribution unit (PDU) that provides an independent
overcurrent-protected output for each DC power supply. Each output overcurrent-protected
device in the PDU must be suitable for interrupting fault current available from the DC power
source and must be rated no more than 40A.
•The PDU output must have a shut-off switch or a circuit breaker to disconnect power for each
power supply. To completely remove power from the power supply, disconnect power at the
PDU. The end product may have multiple power supplies. To remove all power from the product,
disconnect the power for each power supply.
•In accordance with applicable national requirements for Information Technology Equipment and
Telecommunications Equipment, this power supply only connects to DC power sources that are
classified as SELV or TNV. Generally, these requirements are based on the International
Standard for Information Technology Equipment, IEC 60950-1. In accordance with local and
regional electric codes and regulations, the DC source must have one pole (Neutral/Return)
reliably connected to earth ground.
•You must connect the power supply ground screw located on the front of the power supply to a
suitable ground (earth) terminal. In accordance with local and regional electric codes and
regulations, this terminal must be connected to a suitable building ground (earth) terminal. Do
not rely on the rack or cabinet chassis to provide adequate ground (earth) continuity.
Procedure
1. Access the product rear panel.
2. Remove the power supply blank.
3. Remove the ring tongue.
4. Crimp the ring tongue to the ground cable from the -48 V DC power source.
5. Remove the terminal block connector.
6. Loosen the screws on the terminal block connector.
7. Attach the ground (earthed) wire to the ground screw and washer, and tighten to 1.47 N m (13 lb-in) of torque. The ground wire must be connected before the -48 V wire and the return wire.
8. Insert the -48 V wire into the left side of the terminal block connector, and then tighten the screw to 1.3 N m (10 lb-in) of torque.
9. Insert the return wire into the right side of the connector, and then tighten the screw to 1.3 N m (10 lb-in) of torque.
10. Install the terminal block connector in the power supply.
11. To prevent accidental power cord disconnection when sliding the server in and out of the chassis,
secure the power cord, wires, and/or cables in the strain relief strap attached to the power supply
handle:
a. Unwrap the strain relief strap from the power supply handle.
CAUTION: Avoid tight bend radii to prevent damaging the internal wires of a power cord or
a server cable. Never bend power cords and server cables tight enough to cause a crease
in the sheathing.
b. Secure the wires and cables with the strain relief strap. Roll the extra length of the strap around
the power supply handle.
12. Slide the power supply into the bay until it clicks into place.
Drive options
The different chassis options support SAS, SATA, and NVMe drives. For more information on drive
support, see Drive bay numbering.
IMPORTANT: Depending on the chassis configuration and the components installed in the servers,
it might be necessary to limit the number of drives installed in the chassis. For more information, see
"Temperature requirements" in the server user guide.
Hot-plug drive guidelines
When adding drives to the server, observe the following general guidelines:
•The system automatically sets all device numbers.
•If only one drive is used, install it in the bay with the lowest device number.
•Drives should be the same capacity to provide the greatest storage space efficiency when drives are
grouped together into the same drive array.
Removing the drive blank
Remove the components as indicated.
•SFF drive blank
•Low-profile LFF drive blank
Installing a hot-plug SAS or SATA drive
Prerequisites
Before installing this option, be sure that you have the following:
The components included with the hardware option kit.
Procedure
1. Remove the drive blank.
2. Prepare the drive.
•SFF SmartDrive
•Low-profile LFF hot-plug drive
3. Install the drive.
•SFF SmartDrive
•Low-profile LFF hot-plug drive
4. Determine the status of the drive from the drive LED definitions (Hot-plug drive LED definitions).
Installing the NVMe drives
NVMe drives are supported in the HPE Apollo r2600 Gen10 Chassis and in the HPE Apollo r2800 Gen10
Chassis with 16 NVMe. For more information, see Drive bay numbering.
Prerequisites
Before installing this option, be sure you have the following:
The components included with the hardware option kit.
Procedure
1. Observe the following alert:
CAUTION: To prevent improper cooling and thermal damage, do not operate the server unless
all bays are populated with either a component or a blank.
2. Remove the drive blank, if installed.
3. Press the Do Not Remove button to open the release handle.
4. Install the drives.
5. Install an SFF drive blank in any unused drive bays.
Installing the optional 2U bezel
NOTE: Your chassis may look slightly different than shown.
Procedure
Install the bezel.
Cabling
Cabling guidelines
The cable colors in the cabling diagrams used in this chapter are for illustration purposes only. Most of the
server cables are black.
Observe the following guidelines when working with server cables.
Before connecting cables
•Note the port labels on the PCA components. Not all of these components are used by all servers:
◦System board ports
◦Drive and power supply backplane ports
◦Expansion board ports (controllers, adapters, expanders, risers, and similar boards)
•Note the label near each cable connector. This label indicates the destination port for the cable
connector.
•Some data cables are pre-bent. Do not unbend or manipulate the cables.
•To prevent mechanical damage or depositing oil that is present on your hands, and other
contamination, do not touch the ends of the connectors.
When connecting cables
•Before connecting a cable to a port, lay the cable in place to verify the length of the cable.
•Use the internal cable management features to properly route and secure the cables.
•When routing cables, be sure that the cables are not in a position where they can be pinched or
crimped.
•Avoid tight bend radii to prevent damaging the internal wires of a power cord or a server cable. Never
bend power cords and server cables tight enough to cause a crease in the sheathing.
•Make sure that the excess length of cables are properly secured to avoid excess bends, interference
issues, and airflow restriction.
•To prevent component damage and potential signal interference, make sure that all cables are in their
appropriate routing position before installing a new component and before closing up the server after
hardware installation/maintenance.
When disconnecting cables
•Grip the body of the cable connector. Do not pull on the cable itself because this action can damage
the internal wires of the cable or the pins on the port.
•If a cable does not disconnect easily, check for any release latch that must be pressed to disconnect
the cable.
•Remove cables that are no longer being used. Retaining them inside the server can restrict airflow. If
you intend to use the removed cables later, label and store them for future use.
Cabling the chassis
WARNING: Be sure that all circuit breakers are in the off position before connecting any power
components.
CAUTION: To avoid damaging the fiber cables, do not drape cables from one side of the rack to the
other and do not run cables over a hard corner or edge.
CAUTION: To avoid damaging the cable, squeeze the thermal boot on the cable before
disconnecting from the connector.
CAUTION: To prevent loss of data and damage to the PDU, each power supply must be connected
to a dedicated circuit breaker. Do not connect multiple power supplies to a single circuit breaker.
Front I/O cabling
HPE Apollo r2200 Gen10 Chassis
Cable color  Description
Orange  Left front I/O cable
Blue  Right front I/O cable
HPE Apollo r2600 Gen10 Chassis and HPE Apollo r2800 Gen10 Chassis with 16 NVMe
Cable color  Description
Orange  Left front I/O cable
Blue  Right front I/O cable
Drive backplane power cabling
HPE Apollo r2200 Gen10 Chassis
Cable color  Description
Orange  r2200 Gen10 Chassis power cable for server 1 and server 2
Blue  Chassis power cable for hot-plug drives
Amber  Chassis power cable for server 3 and server 4
Pink  Chassis pass-through power supply cable
HPE Apollo r2600 Gen10 Chassis
Cable color  Description
Orange  r2600/r2800 Gen10 Chassis power cable for server 1 and server 2
Blue  Chassis power cable for hot-plug drives
Amber  Chassis power cable for server 3 and server 4
Pink  Chassis pass-through power supply cable
HPE Apollo r2800 Gen10 Chassis
Cable color  Description
Orange  r2600/r2800 Gen10 Chassis power cable for server 1 and server 2
Blue  Chassis power cable for hot-plug drives
Amber  Chassis power cable for server 3 and server 4
Pink  Chassis pass-through power supply cable
Fan power cabling
Cable color  Description
Orange  Fan power cable assembly for fans 1, 2, 5, and 6
Blue  Fan power cable assembly for fans 3, 4, 7, and 8
Fan module cabling
Smart Storage Battery cabling
RCM 2.0 cabling
Connecting the chassis to the network
The optional RCM module can connect multiple chassis to the same network.
Installing the RCM module
Prerequisites
Observe the following rules and limitations when installing or replacing an RCM module:
•Before installing the RCM module, ensure that all servers in the chassis are powered down. For more
information on powering down the server, see the server user guide.
•If the RCM module is installed on the chassis, the iLO Management Ports in the servers will be
automatically disabled.
•Use either the APM port or an iLO port to connect to a network. Having both ports connected at the
same time results in a loopback condition.
•If using the RCM module iLO ports to connect multiple chassis to the network, the network must
operate at a speed of 1 Gb/s. The servers installed in the chassis cannot connect to the network if the
network is operating at a speed of 10/100 Mb/s or 10 Gb/s.
•If using the RCM module iLO ports to connect multiple chassis to the network, do not connect more
than one iLO port to the network at the same time. Only one iLO port can be connected to the network,
while the other iLO ports can be used to connect multiple chassis together. Having more than one iLO
port connected to the network at the same time results in a loopback condition.
Procedure
1. Remove the cover from the RCM cable connector.
2. Remove the strain relief strap from the bottom power supply handle.
CAUTION: Avoid tight bend radii to prevent damaging the internal wires of a power cord or a
node cable. Never bend power cords and node cables tight enough to cause a crease in the
sheathing.
3. If only one power supply is installed, do the following:
a. Route the strain relief strap through the RCM module and around the handle of the bottom power
supply.
b. Install the RCM module onto the bottom power supply.
c. Secure the power cord in the strain relief strap.
4. If two power supplies are installed, do the following:
a. Install the RCM module onto the bottom power supply.
b. Release the strain relief strap on the top power supply handle.
c. Secure both power cords in the strain relief strap on the top power supply handle.
Connecting multiple chassis to the network with the RCM module iLO ports
Procedure
If using the RCM module iLO ports to connect the chassis to a network, connect all cables to the RCM
module and the network.
NOTE: The arrow indicates the connection to the network.
Connecting the optional HPE APM module
Procedure
1. Connect the APM to the network.
2. Connect the APM to the RCM modules.
Connecting power cables and applying power to the
chassis
Procedure
1. Connect the chassis power supply cables to a PDU.
2. Apply power to the PDUs.
3. Be sure that each power supply LED is green.
Configuring the system
Power capping
The HPE ProLiant XL family of products provides a power capping feature that operates at the server
enclosure level. The capping feature can be activated using the HPE Apollo Platform Manager. After a
power cap is set for the enclosure, all the resident servers in the enclosure will have the same uniform
power cap applied to them until the cap is either modified or canceled.
Using APM, the enclosure-level power capping feature can be expanded, or different caps can be applied
to user-defined groups by using flexible zones within the same rack. A global power cap can also be
applied to all enclosures with one APM command. For more information on using the APM, see the HPE Apollo Platform Manager User Guide on the Hewlett Packard Enterprise website (http://www.hpe.com/support/APM_UG_en).
Power capping modes
The following Power Management modes are standard and are configurable in the power management
controller:
NOTE: Mode 4 is only supported through APM.
•Mode 0: No Redundancy
All power-capping is disabled. This mode can be used to minimize any possible performance impact of power-capping logic.
•Mode 1: Max Performance with Redundancy
This is the default power capping mode. This mode allows the maximum number of nodes to run by
engaging power-capping if the power draw from the chassis attempts to exceed the load supported by
the active power supplies. In this mode, the system is expected to survive (with the possibility of
degraded performance) an unexpected power loss to one or more of the power supplies.
•Mode 2: Not supported
•Mode 3: User Configurable Mode
The user can specify a valid power cap value from a pre-defined range. A cap cannot be set below a
minimum or above a maximum. The cap includes all server nodes, fans, and drives. User configurable
mode requires an iLO Scale Out or iLO Advanced license.
•Mode 4: Rack Level Dynamic Power Capping Mode
In conjunction with APM, the user can specify a maximum power capacity for the entire rack. The APM
dynamically allocates power to the applicable chassis within the rack to maximize performance given
the available power. For more information, see the HPE Apollo Platform Manager User Guide on the
Hewlett Packard Enterprise website (http://www.hpe.com/support/APM_UG_en).
•Mode 5: Power Feed Redundancy Mode
When used with an A+B power feed configuration, Power Feed Redundancy Mode throttles the
system 100%, bringing the nodes to a complete stop if a power feed loss is deduced. Full throttling
continues until the power feed is brought back online. In this mode, the system is expected to survive
an unexpected loss of an entire power feed to half of the power supplies.
Configuring a power cap
To configure power capping, you can use the HPE Apollo Platform Manager, a rack-level device that can control power caps for all enclosures in the rack. For more information, see the HPE Apollo Platform Manager User Guide (http://www.hpe.com/support/APM_UG_en) on the Hewlett Packard Enterprise website.
Setting the chassis power cap mode with HPE APM
Procedure
1. Log in to APM.
a. When the system boots, a Login prompt appears.
b. At the prompt, enter Administrator.
2. Before setting the power cap, enter the following command to review the power baseline:
>show power baseline
The information displayed provides the minimum cap value, the maximum cap value, and the chassis
that meet the requirements for power capping.
3. To set the power cap for eligible chassis connected to the APM, enter the following command at the
prompt:
>SET POWER CAP <wattage>|NONE [zone_name]
The wattage value, if provided, represents the total wattage to be allocated among all the chassis that are part of the baseline, or of the partial baseline of a zone, if one is specified. This value is divided by the total maximum wattage established by the baseline to calculate a percentage cap value. This percentage is then multiplied by each chassis's maximum wattage value to arrive at an appropriate cap value for that individual chassis (see the allocation sketch at the end of this section).
If NONE is specified instead of a cap wattage value, APM removes all power caps, or only the caps in the specified zone.
To remove baseline data from the EEPROM and to remove the power cap setting, enter the following
command:
>SET POWER BASELINE NONE
After this command is issued, the only way to re-establish a power baseline is to issue the SET POWER
BASELINE command. The system returns to the default power cap mode (mode 1).
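As a worked illustration of the allocation rule in step 3, the following Python sketch computes per-chassis caps. This is not APM code; the chassis names and maximum wattages are hypothetical, and only the documented divide-then-multiply rule is implemented.

def allocate_chassis_caps(total_cap_watts, chassis_max_watts):
    """Split a total cap across chassis: the total cap is divided by
    the sum of the chassis maximum wattages to get a percentage, and
    each chassis cap is that percentage of its own maximum."""
    baseline_total = sum(chassis_max_watts.values())
    percentage = total_cap_watts / baseline_total
    return {name: round(percentage * max_w)
            for name, max_w in chassis_max_watts.items()}

# Hypothetical baseline: two chassis with 4400 W and 3600 W maximums.
# A 6000 W total cap is 75% of the 8000 W baseline, so the resulting
# caps are 3300 W and 2700 W respectively.
caps = allocate_chassis_caps(6000, {"chassis1": 4400, "chassis2": 3600})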
Troubleshooting
Troubleshooting resources
The HPE ProLiant Gen10 Troubleshooting Guide, Volume I: Troubleshooting provides procedures for
resolving common problems and comprehensive courses of action for fault isolation and identification,
issue resolution, and software maintenance on ProLiant servers and server blades. To view the guide,
select a language:
•English
•French
•Spanish
•German
•Japanese
•Simplified Chinese
The HPE ProLiant Gen10 Troubleshooting Guide, Volume II: Error Messages provides a list of error
messages and information to assist with interpreting and resolving error messages on ProLiant servers
and server blades. To view the guide, select a language:
•English
•French
•Spanish
•German
•Japanese
•Simplified Chinese
Electrostatic discharge
Preventing electrostatic discharge
To prevent damaging the system, be aware of the precautions you must follow when setting up the
system or handling parts. A discharge of static electricity from a finger or other conductor may damage
system boards or other static-sensitive devices. This type of damage may reduce the life expectancy of
the device.
Procedure
•Avoid hand contact by transporting and storing products in static-safe containers.
•Keep electrostatic-sensitive parts in their containers until they arrive at static-free workstations.
•Place parts on a grounded surface before removing them from their containers.
•Avoid touching pins, leads, or circuitry.
•Always be properly grounded when touching a static-sensitive component or assembly.
Grounding methods to prevent electrostatic discharge
Several methods are used for grounding. Use one or more of the following methods when handling or
installing electrostatic-sensitive parts:
•Use a wrist strap connected by a ground cord to a grounded workstation or computer chassis. Wrist
straps are flexible straps with a minimum of 1 megohm ±10 percent resistance in the ground cords. To
provide proper ground, wear the strap snug against the skin.
•Use heel straps, toe straps, or boot straps at standing workstations. Wear the straps on both feet
when standing on conductive floors or dissipating floor mats.
•Use conductive field service tools.
•Use a portable field service kit with a folding static-dissipating work mat.
If you do not have any of the suggested equipment for proper grounding, have an authorized reseller
install the part.
For more information on static electricity or assistance with product installation, contact the Hewlett Packard Enterprise Support Center.
Specifications
Environmental specifications
Temperature range:
•Operating: 10°C to 35°C (50°F to 95°F)
•Non-operating: -30°C to 60°C (-22°F to 140°F)
Relative humidity (noncondensing):
•Operating: 8% to 90%, with 28°C (82.4°F) maximum wet bulb temperature
•Non-operating: 5% to 95%, with 38.7°C (101.7°F) maximum wet bulb temperature
NOTE: All temperature ratings shown are for sea level. An altitude derating of 1.0°C per 305 m (1.8°F per 1000 ft) to 3050 m (10,000 ft) is applicable (see the worked example after the extended ranges below). No direct sunlight allowed. The maximum rate of change is 20°C per hour (36°F per hour). The upper limit and rate of change might be limited by the type and number of options installed.
For certain approved hardware configurations, the supported system inlet temperature range is extended:
•5°C to 10°C (41°F to 50°F) and 35°C to 40°C (95°F to 104°F) at sea level with an altitude derating of
1.0°C per every 175 m (1.8°F per every 574 ft) above 900 m (2953 ft) to a maximum of 3050 m
(10,000 ft).
•40°C to 45°C (104°F to 113°F) at sea level with an altitude derating of 1.0°C per every 125 m (1.8°F
per every 410 ft) above 900 m (2953 ft) to a maximum of 3050 m (10,000 ft).
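As a worked example of the standard derating note above (illustrative only; this Python function is not an HPE tool, and it covers only the standard 10°C to 35°C range, since the extended ranges use the different derating factors listed):

def derated_max_inlet_temp_c(altitude_m):
    """Apply the standard altitude derating of 1.0 degC per 305 m,
    valid up to 3050 m (10,000 ft), to the 35 degC sea-level maximum."""
    if not (0 <= altitude_m <= 3050):
        raise ValueError("Ratings apply from sea level up to 3050 m.")
    return 35.0 - altitude_m / 305.0

# At 1525 m, the operating maximum drops by 5 degC, to 30 degC.
print(derated_max_inlet_temp_c(1525))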
Mechanical specifications
HPE Apollo r2200 Gen10 Chassis
Dimensions:
•Height: 8.76 cm (3.45 in)
•Depth: 87.93 cm (34.62 in)
•Width: 44.80 cm (17.64 in)
Weight (approximate values):
•Weight (maximum): 41.16 kg (90.75 lb)
•Weight (minimum): 13.10 kg (28.89 lb)
HPE Apollo r2600 Gen10 Chassis
Dimensions:
•Height: 8.76 cm (3.45 in)
•Depth: 83.87 cm (33.02 in)
•Width: 44.80 cm (17.64 in)
Weight (approximate values):
•Weight (maximum): 36.20 kg (79.81 lb)
•Weight (minimum): 12.70 kg (28.00 lb)
HPE Apollo r2800 Gen10 Chassis with 16 NVMe
Dimensions:
•Height: 8.76 cm (3.45 in)
•Depth: 83.87 cm (33.02 in)
•Width: 44.80 cm (17.64 in)
Weight (approximate values):
•Weight (maximum): 36.20 kg (79.81 lb)
•Weight (minimum): 12.70 kg (28.00 lb)
HPE Apollo r2800 Gen10 Chassis with 24 SFF
Dimensions:
•Height: 8.76 cm (3.45 in)
•Depth: 83.87 cm (33.02 in)
•Width: 44.80 cm (17.64 in)
Weight (approximate values):
•Weight (maximum): 36.20 kg (79.81 lb)
•Weight (minimum): 12.70 kg (28.00 lb)
Power supply specifications
CAUTION: Do not mix power supplies with different efficiency and wattage in the chassis. Install
only one type of power supply. Verify that all power supplies have the same part number and label
color. The system becomes unstable and may shut down when it detects mismatched power
supplies.
Depending on installed options, the system is configured with one of the following power supplies:
•HPE 800W Flex Slot Platinum Hot Plug Low Halogen Power Supply
•HPE 800W Flex Slot Universal Hot Plug Low Halogen Power Supply
•HPE 800W Flex Slot -48VDC Hot Plug Low Halogen Power Supply
•HPE 1600W Flex Slot Platinum Hot Plug Low Halogen Power Supply
•HPE 2200W Flex Slot Platinum Hot Plug Low Halogen Power Supply
For more information about the power supply features, specifications, and compatibility, see the Hewlett Packard Enterprise website.
HPE 800W Flex Slot Platinum Hot-plug Low Halogen Power Supply
HPE 800W Flex Slot Universal Hot-plug Low Halogen Power Supply
Input requirements:
•Rated input voltage: 200 VAC to 277 VAC; 380 VDC
•Rated input frequency: 50 Hz to 60 Hz
•Rated input current: 4.4 A at 200 VAC; 3.1 A at 277 VAC; 2.3 A at 380 VDC
•Maximum rated input power: 869 W at 200 VAC; 865 W at 230 VAC; 861 W at 277 VAC; 863 W at 380 VDC
•BTUs per hour: 2964 at 200 VAC; 2951 at 230 VAC; 2936 at 277 VAC; 2943 at 380 VDC
Power supply output:
•Rated steady-state power: 800 W at 200 VAC to 277 VAC input
•Maximum peak power: 800 W at 200 VAC to 277 VAC input
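The BTUs-per-hour figures in these tables track the input wattage rows through the standard conversion 1 W ≈ 3.412 BTU/hr. A quick, purely illustrative sanity check in Python:

WATTS_TO_BTU_PER_HR = 3.412  # approximate conversion factor

def watts_to_btu_per_hr(watts):
    """Convert input power in watts to BTUs per hour."""
    return watts * WATTS_TO_BTU_PER_HR

# 869 W at 200 VAC -> about 2965 BTU/hr, in line with the
# table's 2964 (the small difference is rounding).
print(round(watts_to_btu_per_hr(869)))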
HPE 800W Flex Slot -48VDC Hot-plug Low Halogen Power Supply
Input requirements:
•Rated input voltage: -40 VDC to -72 VDC; -48 VDC nominal input
•Rated input current: 22.1 A at -40 VDC input; 18.2 A at -48 VDC (nominal) input; 12.0 A at -72 VDC input
•Rated input power: 874 W at -40 VDC input; 865 W at -48 VDC (nominal) input; 854 W at -72 VDC input
•Rated input power (BTUs per hour): 2983 at -40 VDC input; 2951 at -48 VDC (nominal) input; 2912 at -72 VDC input
Power supply output:
•Rated steady-state power: 800 W at -40 VDC to -72 VDC input
•Maximum peak power: 800 W at -40 VDC to -72 VDC input
WARNING: To reduce the risk of electric shock or energy hazards:
•This equipment must be installed by trained service personnel, as defined by the NEC and IEC 60950-1, Second Edition, the standard for Safety of Information Technology Equipment.
•Connect the equipment to a reliably grounded secondary circuit source. A secondary circuit has no direct connection to a primary circuit and derives its power from a transformer, converter, or equivalent isolation device.
•The branch circuit overcurrent protection must be rated 27 A.
CAUTION: This equipment is designed to permit the connection of the earthed conductor of the DC
supply circuit to the earthing conductor at the equipment.
If this connection is made, all of the following must be met:
•This equipment must be connected directly to the DC supply system earthing electrode
conductor or to a bonding jumper from an earthing terminal bar or bus to which the DC supply
system earthing electrode conductor is connected.
•This equipment must be located in the same immediate area (such as adjacent cabinets) as any other equipment that has a connection between the earthed conductor of the same DC supply circuit and the earthing conductor, and also the point of earthing of the DC system. The DC system must not be earthed elsewhere.
•The DC supply source is to be located within the same premises as the equipment.
•Switching or disconnecting devices must not be in the earthed circuit conductor between the DC
source and the point of connection of the earthing electrode conductor.
HPE 1600W Flex Slot Platinum Hot Plug Low Halogen Power Supply
Input requirements:
•Rated input voltage: 200 VAC to 240 VAC; 240 VDC (for China only)
•Rated input frequency: 50 Hz to 60 Hz
•Rated input current: 8.7 A at 200 VAC; 7.2 A at 240 VAC
•Maximum rated input power: 1,734 W at 200 VAC; 1,725 W at 240 VAC
•BTUs per hour: 5,918 at 200 VAC; 5,884 at 240 VAC
Power supply output:
•Rated steady-state power: 1,600 W at 200 VAC to 240 VAC input; 1,600 W at 240 VDC input
•Maximum peak power: 2,200 W for 1 ms (turbo mode) at 200 VAC to 240 VAC input
HPE 2200W Flex Slot Platinum Hot Plug Low Halogen Power Supply
Input requirements:
•Rated input voltage: 200 VAC to 240 VAC; 240 VDC (for China only)
•Rated input frequency: 50 Hz to 60 Hz (not applicable to 240 VDC)
•Rated input current: 10.0 A at 240 VAC; 8.2 A at 240 VDC (for China only)
•Maximum rated input power: 1800 W at 200 VAC; 2200 W at 240 VAC; 1800 W at 240 VDC (for China only)
•BTUs per hour: 6590 at 200 VAC; 8096 at 240 VAC; 6606 at 240 VDC (for China only)
Power supply output:
•Rated steady-state power: 1800 W at 200 VAC; 2200 W at 240 VAC; 1800 W at 240 VDC (for China only)
•Maximum peak power: 1800 W at 200 VAC; 2200 W at 240 VAC; 1800 W at 240 VDC (for China only)
Hot-plug power supply calculations
For hot-plug power supply specifications and calculators to determine electrical and heat loading for the server, see the Hewlett Packard Enterprise Power Advisor website (http://www.hpe.com/info/poweradvisor/online).
Websites
•Hewlett Packard Enterprise Information Library
•Hewlett Packard Enterprise Support Center
•Contact Hewlett Packard Enterprise Worldwide
•Subscription Service/Support Alerts
•Software Depot
•Customer Self Repair
•Insight Remote Support
•Serviceguard Solutions for HP-UX
•Single Point of Connectivity Knowledge (SPOCK) Storage compatibility matrix
•Storage white papers and analyst reports
Support and other resources
Accessing Hewlett Packard Enterprise Support
•For live assistance, go to the Contact Hewlett Packard Enterprise Worldwide website:
http://www.hpe.com/assistance
•To access documentation and support services, go to the Hewlett Packard Enterprise Support Center
website:
http://www.hpe.com/support/hpesc
Information to collect
•Technical support registration number (if applicable)
•Product name, model or version, and serial number
•Operating system name and version
•Firmware version
•Error messages
•Product-specific reports and logs
•Add-on products or components
•Third-party products or components
Accessing updates
•Some software products provide a mechanism for accessing software updates through the product
interface. Review your product documentation to identify the recommended software update method.
•To download product updates:
Hewlett Packard Enterprise Support Center
www.hpe.com/support/hpesc
Hewlett Packard Enterprise Support Center: Software downloads
www.hpe.com/support/downloads
Software Depot
www.hpe.com/support/softwaredepot
•To subscribe to eNewsletters and alerts:
www.hpe.com/support/e-updates
•To view and update your entitlements, and to link your contracts and warranties with your profile, go to
the Hewlett Packard Enterprise Support Center More Information on Access to Support Materials
page:
www.hpe.com/support/AccessToSupportMaterials
IMPORTANT: Access to some updates might require product entitlement when accessed through
the Hewlett Packard Enterprise Support Center. You must have an HPE Passport set up with
relevant entitlements.
Customer self repair
Hewlett Packard Enterprise customer self repair (CSR) programs allow you to repair your product. If a
CSR part needs to be replaced, it will be shipped directly to you so that you can install it at your
convenience. Some parts do not qualify for CSR. Your Hewlett Packard Enterprise authorized service
provider will determine whether a repair can be accomplished by CSR.
For more information about CSR, contact your local service provider or go to the CSR website:
http://www.hpe.com/support/selfrepair
Remote support
Remote support is available with supported devices as part of your warranty or contractual support
agreement. It provides intelligent event diagnosis, and automatic, secure submission of hardware event
notifications to Hewlett Packard Enterprise, which will initiate a fast and accurate resolution based on your
product's service level. Hewlett Packard Enterprise strongly recommends that you register your device for
remote support.
If your product includes additional remote support details, use search to locate that information.
Remote support and Proactive Care information
HPE Get Connected
www.hpe.com/services/getconnected
HPE Proactive Care services
www.hpe.com/services/proactivecare
HPE Proactive Care service: Supported products list
Warranty information
To view the warranty for your product or to view the Safety and Compliance Information for Server, Storage, Power, Networking, and Rack Products reference document, go to the Enterprise Safety and Compliance website:
www.hpe.com/support/Safety-Compliance-EnterpriseProducts
Regulatory information
To view the regulatory information for your product, view the Safety and Compliance Information for Server, Storage, Power, Networking, and Rack Products, available at the Hewlett Packard Enterprise Support Center:
www.hpe.com/support/hpesc
Hewlett Packard Enterprise is committed to providing our customers with information about the chemical
substances in our products as needed to comply with legal requirements such as REACH (Regulation EC
No 1907/2006 of the European Parliament and the Council). A chemical information report for this product
can be found at:
www.hpe.com/info/reach
For Hewlett Packard Enterprise product environmental and safety information and compliance data,
including RoHS and REACH, see:
www.hpe.com/info/ecodata
For Hewlett Packard Enterprise environmental information, including company programs, product
recycling, and energy efficiency, see:
www.hpe.com/info/environment
Documentation feedback
Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To help us
improve the documentation, send any errors, suggestions, or comments to Documentation Feedback
(docsfeedback@hpe.com). When submitting your feedback, include the document title, part number,
edition, and publication date located on the front cover of the document. For online help content, include
the product name, product version, help edition, and publication date located on the legal notices page.
Acronyms and abbreviations
AHCI
Advanced Host Controller Interface
CSR
Customer Self Repair
DDR
double data rate
GPU
graphics processing unit
HP SUM
HP Smart Update Manager
HPE APM
HPE Advanced Power Manager
HPE SSA
HPE Smart Storage Administrator
IEC
International Electrotechnical Commission
iLO
Integrated Lights-Out
IML
Integrated Management Log
ISO
International Organization for Standardization
LFF
large form factor
LOM
LAN on Motherboard
LRDIMM
load reduced dual in-line memory module
NIC
network interface controller
NMI
nonmaskable interrupt
NVRAM
nonvolatile random-access memory
PCIe
Peripheral Component Interconnect Express
PDU
power distribution unit
POST
Power-On Self-Test
RBSU
ROM-Based Setup Utility
RCM
Rack Consolidation Management
RDIMM
registered dual in-line memory module
RDP
Remote Desktop Protocol
RoHS
Restriction of Hazardous Substances
SAS
serial attached SCSI
SATA
serial ATA
SFF
small form factor
SPP
Service Pack for ProLiant
SUV
serial, USB, video
TMRA
recommended ambient operating temperature
TPM
Trusted Platform Module
UEFI
Unified Extensible Firmware Interface
UID
unit identification
USB
universal serial bus