This document contains setup, installation, and configuration information for the HPE Apollo
2000 Chassis. This document is for the person who installs, administers, and troubleshoots
the system. Hewlett Packard Enterprise assumes that you are qualified in the servicing of
computer equipment and trained in using safe practices when dealing with hazardous energy
levels.
Part Number: 879112-003
Published: June 2018
Edition: 3
Copyright 2017, 2018 Hewlett Packard Enterprise Development LP
Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett
Packard Enterprise products and services are set forth in the express warranty statements accompanying
such products and services. Nothing herein should be construed as constituting an additional warranty.
Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained
herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession,
use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer
Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government
under vendor's standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard
Enterprise has no control over and is not responsible for information outside the Hewlett Packard
Enterprise website.
Acknowledgments
Intel®, Itanium®, Pentium®, Intel Inside®, and the Intel Inside logo are trademarks of Intel Corporation in
the United States and other countries.
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the
United States and/or other countries.
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
Java® and Oracle® are registered trademarks of Oracle and/or its affiliates.
UNIX® is a registered trademark of The Open Group.
Contents
HPE Apollo 2000 Gen10 System............................................................6
Planning the installation.........................................................................7
Acronyms and abbreviations...............................................................85
HPE Apollo 2000 Gen10 System
Introduction
The HPE Apollo 2000 Gen10 System consists of a chassis and servers. There are four chassis options
with different storage configurations. To ensure proper thermal cooling, all server tray slots on the chassis
must be populated with servers or server tray blanks.
Chassis
•HPE Apollo r2200 Gen10 Chassis (12 low-profile LFF model)
•HPE Apollo r2600 Gen10 Chassis (24 SFF model, supports a maximum of 24 SFF SmartDrives or a
mix of 16 SFF SmartDrives and 8 NVMe drives)
•HPE Apollo r2800 Gen10 Chassis with 16 NVMe
•HPE Apollo r2800 Gen10 Chassis (24 SFF model with storage expander backplane)
Servers
•HPE ProLiant XL170r Gen10 Server (1U)
•HPE ProLiant XL190r Gen10 Server (2U)
The chassis supports combining 1U and 2U servers. One chassis can support a maximum of the
following:
•Four 1U servers
•Two 1U servers and one 2U server
•Two 2U servers
For more information about product features, specifications, options, configurations, and compatibility, see
the product QuickSpecs on the
Hewlett Packard Enterprise website.
Planning the installation
Safety and regulatory compliance
For important safety, environmental, and regulatory information, see Safety and Compliance Information
for Server, Storage, Power, Networking, and Rack Products, available at the Hewlett Packard Enterprise
website.
Product QuickSpecs
For more information about product features, specifications, options, configurations, and compatibility, see
the product QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).
Configuration guidelines
Operate the chassis only when a device or blank is installed in all device bays. Before powering up the
chassis, be sure to do the following:
•Install a drive or drive blank into all drive bays.
•Install a server or server tray blank into all server bays.
•Install a power supply or power supply blank into all power supply bays.
Validate power and cooling requirements based on location and installed components.
Power requirements
Installation of this equipment must comply with local and regional electrical regulations governing the
installation of IT equipment by licensed electricians. This equipment is designed to operate in installations
covered by NFPA 70, 1999 Edition (National Electrical Code) and NFPA 75, 1992 (Standard for the Protection
of Electronic Computer/Data Processing Equipment). For electrical power ratings on options, refer to the
product rating label or the user documentation supplied with that option.
WARNING: To reduce the risk of personal injury, fire, or damage to the equipment, do not overload
the AC supply branch circuit that provides power to the rack. Consult the electrical authority having
jurisdiction over wiring and installation requirements of your facility.
CAUTION: Protect the server from power fluctuations and temporary interruptions with a regulating
UPS. This device protects the hardware from damage caused by power surges and voltage spikes
and keeps the server in operation during a power failure.
HPE Apollo Platform Manager
HPE Apollo Platform Manager, formerly named HPE Advanced Power Manager, is a point of contact for
system administration.
To install, configure, and access HPE APM, see the HPE Apollo Platform Manager User Guide on the
Hewlett Packard Enterprise website (http://www.hpe.com/support/APM_UG_en).
Hot-plug power supply calculations
For more information on the hot-plug power supply and calculators to determine server power
consumption in various system configurations, see the Hewlett Packard Enterprise Power Advisor website
(http://www.hpe.com/info/poweradvisor/online).
Compiling the documentation
The documentation, while delivered individually and in various formats, works as a system. Consult these
documents before attempting installation. These documents provide the required important safety
information and decision-making steps for the configuration. To access these documents, see the Hewlett
Packard Enterprise website.
Warnings and cautions
WARNING: To reduce the risk of personal injury or damage to equipment, heed all warnings and
cautions throughout the installation instructions.
WARNING: To reduce the risk of personal injury or damage to the equipment, be sure that:
•The rack is bolted to the floor using the concrete anchor kit.
•The leveling feet extend to the floor.
•The full weight of the rack rests on the leveling feet.
•The racks are coupled together in multiple rack installations.
•Only one component is extended at a time. If more than one component is extended, a rack
might become unstable.
WARNING: The chassis is very heavy. To reduce the risk of personal injury or damage to the
equipment:
•Observe local occupational health and safety requirements and guidelines for manual material
handling.
•Remove all servers from the chassis before installing or moving the chassis.
•Use caution and get help to lift and stabilize the chassis during installation or removal, especially
when the chassis is not fastened to the rack.
WARNING: To reduce the risk of personal injury or damage to the equipment, you must adequately
support the chassis during installation and removal.
WARNING: Install the chassis starting from the bottom of the rack and work your way up the rack.
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal
system components to cool before touching them.
WARNING: To reduce the risk of electric shock or damage to the equipment:
•Never reach inside the chassis while the system is powered up.
•Perform service on system components only as instructed in the user documentation.
CAUTION: Always be sure that equipment is properly grounded and that you follow proper
grounding procedures before beginning any installation procedure. Improper grounding can result in
ESD damage to electronic components. For more information, refer to "Electrostatic discharge on
page 73."
CAUTION: When performing non-hot-plug operations, you must power down the server and/or the
system. However, it may be necessary to leave the server powered up when performing other
operations, such as hot-plug installations or troubleshooting.
CAUTION: Do not operate the server for long periods with the access panel open or removed.
Operating the server in this manner results in improper airflow and improper cooling that can lead to
thermal damage.
Space and airflow requirements
Installation of the chassis is supported in 1200 mm Gen10 racks.
To allow for servicing and adequate airflow, observe the following space and airflow requirements when
deciding where to install a rack:
•Leave a minimum clearance of 63.5 cm (25 in) in front of the rack.
•Leave a minimum clearance of 76.2 cm (30 in) behind the rack.
•Leave a minimum clearance of 121.9 cm (48 in) from the back of the rack to the back of another rack
or row of racks.
Front and rear rack doors must be adequately ventilated to allow ambient room air to enter the cabinet,
and the rear door must be adequately ventilated to allow the warm air to escape from the cabinet.
CAUTION: To prevent improper cooling and damage to the equipment, do not block the ventilation
openings.
When vertical space in the rack is not filled by a chassis or rack component, the gaps between the
components cause changes in airflow through the rack and across the components. Cover all gaps with
blanking panels to maintain proper airflow.
CAUTION: Always use blanking panels to fill empty vertical spaces in the rack. This arrangement
ensures proper airflow. Using a rack without blanking panels results in improper cooling that can
lead to thermal damage.
CAUTION: If a third-party rack is used, observe the following additional requirements to ensure
adequate airflow and to prevent damage to the equipment:
•Front and rear doors—If the 42U rack includes closing front and rear doors, you must allow
5,350 sq cm (830 sq in) of holes evenly distributed from top to bottom to permit adequate airflow
(equivalent to the required 64 percent open area for ventilation).
•Side—The clearance between the installed rack component and the side panels of the rack must
be a minimum of 7 cm (2.75 in).
Temperature requirements
To ensure continued safe and reliable equipment operation, install or position the rack in a well-ventilated,
climate-controlled environment.
The operating temperature inside the rack is always higher than the room temperature and is dependent
on the configuration of equipment in the rack. Check the TMRA for each piece of equipment before
installation.
CAUTION: To reduce the risk of damage to the equipment when installing third-party options:
•Do not permit optional equipment to impede airflow around the chassis or to increase the internal
rack temperature beyond the maximum allowable limits.
•Do not exceed the manufacturer’s TMRA.
Grounding requirements
•The building installation must provide a means of connection to protective earth.
•The equipment must be connected to that means of connection.
•A service person must check whether the socket-outlet from which the equipment is to be powered
provides a connection to the building protective earth. If the outlet does not provide a connection, the
service person must arrange for the installation of a protective earthing conductor from the separate
protective earthing terminal to the protective earth wire in the building.
Connecting a DC power cable to a DC power source
WARNING: To reduce the risk of electric shock or energy hazards:
•This equipment must be installed by trained service personnel, as defined by the NEC and IEC
60950-1, Second Edition, the standard for Safety of Information Technology Equipment.
•Connect the equipment to a reliably grounded Secondary circuit source. A Secondary circuit has
no direct connection to a Primary circuit and derives its power from a transformer, converter, or
equivalent isolation device.
•The branch circuit overcurrent protection must be rated 20 A.
WARNING: When installing a DC power supply, the ground wire must be connected before the
positive or negative leads.
WARNING: Remove power from the power supply before performing any installation steps or
maintenance on the power supply.
CAUTION: The server equipment connects the earthed conductor of the DC supply circuit to the
earthing conductor at the equipment. For more information, see the documentation that ships with
the power supply.
CAUTION: If the DC connection exists between the earthed conductor of the DC supply circuit and
the earthing conductor at the server equipment, the following conditions must be met:
•This equipment must be connected directly to the DC supply system earthing electrode
conductor or to a bonding jumper from an earthing terminal bar or bus to which the DC supply
system earthing electrode conductor is connected.
•This equipment should be located in the same immediate area (such as adjacent cabinets) as
any other equipment that has a connection between the earthed conductor of the same DC
supply circuit and the earthing conductor, and also the point of earthing of the DC system. The
DC system should not be earthed elsewhere.
•The DC supply source is to be located within the same premises as the equipment.
•Switching or disconnecting devices should not be in the earthed circuit conductor between the
DC source and the point of connection of the earthing electrode conductor.
To connect a DC power cable to a DC power source:
1. Cut the DC power cord ends no shorter than 150 cm (59.06 in).
2. If the power source requires ring tongues, use a crimping tool to install the ring tongues on the power
cord wires.
IMPORTANT: The ring terminals must be UL approved and accommodate 12 gauge wires.
IMPORTANT: The minimum nominal thread diameter of a pillar or stud type terminal must be 3.5
mm (0.138 in); the diameter of a screw type terminal must be 4.0 mm (0.157 in).
3. Stack each same-colored pair of wires and then attach them to the same power source. The power
cord consists of three wires (black, red, and green).
For more information, see the documentation that ships with the power supply.
Identifying components and LEDs
System components
Item  Description
1     RCM module (optional)
2     Power supply
3     HPE Apollo 2000 Gen10 Chassis
4     Fan
5     HPE ProLiant XL190r Gen10 Server
6     HPE ProLiant XL170r Gen10 Server
7     Server tray blank
Front panel components
HPE Apollo r2200 Gen10 Chassis
Item  Description
1     Left bezel ear
2     Low-profile LFF hot-plug drives
3     Right bezel ear
4     Chassis serial label pull tab
HPE Apollo r2600 Gen10 Chassis
Item  Description
1     Left bezel ear
2     SFF hot-plug drives
3     Right bezel ear
4     Chassis serial label pull tab
5     Non-removable bezel blank
HPE Apollo r2800 Gen10 Chassis with 16 NVMe
Item  Description
1     Left bezel ear
2     NVMe drives
3     Right bezel ear
4     Chassis serial label pull tab
5     Non-removable bezel blanks
HPE Apollo r2800 Gen10 Chassis (24 SFF model with storage expander backplane)
Item  Description
1     Left bezel ear
2     SFF hot-plug drives
3     Right bezel ear
4     Chassis serial label pull tab
5     Expander daughter module with power LED (1)

(1) When the LEDs described in this table flash simultaneously, a power fault has occurred. For more
information, see Front panel LEDs.
Front panel LEDs
1. Power On/Standby button and system power LED (Server 1) (1)
   • Solid green = System on
   • Flashing green = Performing power on sequence
   • Solid amber = System in standby
   • Off = No power present (2)
2. Power On/Standby button and system power LED (Server 2) (1)
   • Solid green = System on
   • Flashing green = Performing power on sequence
   • Solid amber = System in standby
   • Off = No power present (2)
3. Health LED (Server 2) (1)
   • Solid green = Normal
   • Flashing amber = System degraded
   • Flashing red = System critical (3)
4. Health LED (Server 1) (1)
   • Solid green = Normal
   • Flashing amber = System degraded
   • Flashing red = System critical (3)
5. Health LED (Server 3) (1)
   • Solid green = Normal
   • Flashing amber = System degraded
   • Flashing red = System critical (3)
6. Health LED (Server 4) (1)
   • Solid green = Normal
   • Flashing amber = System degraded
   • Flashing red = System critical (3)
7. Power On/Standby button and system power LED (Server 4) (1)
   • Solid green = System on
   • Flashing green = Performing power on sequence
   • Solid amber = System in standby
   • Off = No power present (2)
8. UID button/LED (1)
   • Solid blue = Activated
   • Flashing blue:
      • 1 flash per second = Remote management or firmware upgrade in progress
      • 4 flashes per second = iLO manual soft reboot sequence initiated
      • 8 flashes per second = iLO manual hard reboot sequence in progress
   • Off = Deactivated
9. Power On/Standby button and system power LED (Server 3) (1)
   • Solid green = System on
   • Flashing green = Performing power on sequence
   • Solid amber = System in standby
   • Off = No power present (2)

(1) When the LEDs described in this table flash simultaneously, a power fault has occurred.
(2) Facility power is not present, power cord is not attached, no power supplies are installed, power supply
failure has occurred, or the front I/O cable is disconnected.
(3) If the health LED indicates a degraded or critical state, review the system IML or use iLO to review the
system health status.
24 SFF with expander daughter board LEDs
1. Expander daughter board power good LED
   • Solid green = Expander daughter board power is good
   • Off = Expander daughter board power fault LED will be on
2. Expander daughter board power fault LED
   • Solid yellow = Expander daughter board power fault has occurred
   • Off = Expander daughter board power good LED will be on

Power fault LEDs

The following table provides a list of power fault LEDs, and the subsystems that are affected. Not all
power faults are used by all servers.
Subsystem                                       LED behavior
System board                                    1 flash
Processor                                       2 flashes
Memory                                          3 flashes
Riser board PCIe slots                          4 flashes
FlexibleLOM                                     5 flashes
Removable HPE Flexible Smart Array controller   6 flashes
System board PCIe slots                         7 flashes
Power backplane or storage backplane            8 flashes
Power supply                                    9 flashes
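For scripted diagnostics or note-taking, the flash codes above can be captured in a simple lookup. The
following Python snippet merely restates the table; it is not an HPE utility, and the function name is an
arbitrary choice for this sketch.

# Lookup of health LED flash counts to affected subsystems, restating the
# power fault LED table above. Not an HPE tool; for quick reference only.
POWER_FAULT_SUBSYSTEMS = {
    1: "System board",
    2: "Processor",
    3: "Memory",
    4: "Riser board PCIe slots",
    5: "FlexibleLOM",
    6: "Removable HPE Flexible Smart Array controller",
    7: "System board PCIe slots",
    8: "Power backplane or storage backplane",
    9: "Power supply",
}

def identify_power_fault(flash_count):
    """Return the subsystem associated with an observed flash count."""
    return POWER_FAULT_SUBSYSTEMS.get(flash_count, "Unknown flash count")

# Example: a health LED flashing 6 times points to the removable
# HPE Flexible Smart Array controller.
print(identify_power_fault(6))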
Rear panel components
Four 1U servers
Item  Description
1     Server 4
2     Server 3
3     Power supply 2
4     RCM module (optional)
5     Power supply 1
6     Server 2
7     Server 1
Two 2U servers
Item  Description
1     Server 3
2     Power supply 2
3     RCM module (optional)
4     Power supply 1
5     Server 1
Power supply LEDs
1. Power supply 1 LED
   • Solid green = Normal
   • Off = One or more of the following conditions exists:
      • Power is unavailable
      • Power supply failed
      • Power supply is in standby mode
      • Power supply error
2. Power supply 2 LED
   • Solid green = Normal
   • Off = One or more of the following conditions exists:
      • Power is unavailable
      • Power supply failed
      • Power supply is in standby mode
      • Power supply error

Fan locations
Drive bay numbering
IMPORTANT: Depending on the chassis configuration and the components installed in the servers,
it might be necessary to limit the number of drives installed in the chassis. For more information, see
"Temperature requirements" in the server user guide.
Apollo r2200 Gen10 Chassis (1U servers in AHCI mode)
Item  Description
1     Server 1 drive bays
2     Server 2 drive bays
3     Server 3 drive bays
4     Server 4 drive bays
Apollo r2200 Gen10 Chassis (2U servers in AHCI mode)
Item  Description
1     Server 1 drive bays
3     Server 3 drive bays
Apollo r2200 Gen10 Chassis (1U and 2U servers using embedded SATA S100i or a Smart Array
controller)
One 1U server corresponds to a maximum of three low-profile LFF hot-plug drives.
• Server 1 corresponds to drive bays 1-1 through 1-3.
• Server 2 corresponds to drive bays 2-1 through 2-3.
• Server 3 corresponds to drive bays 3-1 through 3-3.
• Server 4 corresponds to drive bays 4-1 through 4-3.
One 2U server corresponds to a maximum of six low-profile LFF hot-plug drives.
• Server 1 corresponds to drive bays 1-1 through 2-3.
• Server 3 corresponds to drive bays 3-1 through 4-3.
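The bay IDs above follow a consistent pattern: the first number is the 1U tray slot and the second is the
bay within that slot, and a 2U server spans two adjacent tray slots. As an illustration only, the Python
sketch below restates that pattern; the function name and bays-per-slot value are assumptions for this
sketch, not part of any HPE tool. The same pattern applies to the r2600 (six bays per slot) and the r2800
16 NVMe chassis (four bays per slot), while the r2800 24 SFF expander chassis numbers its bays 1
through 24 instead.

# Illustrative restatement of the drive bay numbering used above for the
# Apollo r2200 Gen10 Chassis (three low-profile LFF bays per 1U tray slot).
# Bay IDs are written "<tray slot>-<bay>"; a 2U server occupies two adjacent
# tray slots and therefore two bay prefixes. Not an HPE utility.
BAYS_PER_TRAY_SLOT = 3  # r2200: 3 LFF; r2600: 6 SFF; r2800 16 NVMe: 4

def bays_for_server(server_number, height_u):
    """Return the drive bay IDs mapped to a server installed in the given slot."""
    tray_slots = range(server_number, server_number + height_u)  # 1U -> one slot, 2U -> two
    return [f"{slot}-{bay}"
            for slot in tray_slots
            for bay in range(1, BAYS_PER_TRAY_SLOT + 1)]

# Matches the lists above:
# bays_for_server(2, 1) -> ['2-1', '2-2', '2-3']                      (1U Server 2)
# bays_for_server(3, 2) -> ['3-1', '3-2', '3-3', '4-1', '4-2', '4-3'] (2U Server 3)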
Apollo r2600 Gen10 Chassis (1U servers in AHCI mode)
1. Server 1 drive bays
   Supported drives: Drive bays 1, 2, 3, and 4 support SFF SmartDrives only. Drive bays 5 and 6 support
   both SFF SmartDrives and NVMe drives.
2. Server 2 drive bays
   Supported drives: Drive bays 1 and 2 support both SFF SmartDrives and NVMe drives. Drive bays 3, 4,
   5, and 6 support SFF SmartDrives only.
3. Server 3 drive bays
   Supported drives: Drive bays 1, 2, 3, and 4 support SFF SmartDrives only. Drive bays 5 and 6 support
   both SFF SmartDrives and NVMe drives.
4. Server 4 drive bays
   Supported drives: Drive bays 1 and 2 support both SFF SmartDrives and NVMe drives. Drive bays 3, 4,
   5, and 6 support SFF SmartDrives only.
Apollo r2600 Gen10 Chassis (2U servers in AHCI mode)
1. Server 1 drive bays
   Supported drives: Drive bays 5, 6, 9, and 10 support both SFF SmartDrives and NVMe drives. All other
   drive bays support SFF SmartDrives only.
3. Server 3 drive bays
   Supported drives: Drive bays 5, 6, 9, and 10 support both SFF SmartDrives and NVMe drives. All other
   drive bays support SFF SmartDrives only.
Apollo r2600 Gen10 Chassis (1U and 2U servers using the embedded SATA S100i or a Smart Array
controller)
Drive bays 1-5, 1-6, 2-1, 2-2, 3-5, 3-6, 4-1, and 4-2 support both SFF SmartDrives and NVMe drives.
All other drive bays support SFF SmartDrives only.
One 1U server corresponds to a maximum of six drives.
• Server 1 corresponds to drive bays 1-1 through 1-6.
• Server 2 corresponds to drive bays 2-1 through 2-6.
• Server 3 corresponds to drive bays 3-1 through 3-6.
• Server 4 corresponds to drive bays 4-1 through 4-6.
One 2U server corresponds to a maximum of twelve drives.
• Server 1 corresponds to drive bays 1-1 through 2-6.
• Server 3 corresponds to drive bays 3-1 through 4-6.
Apollo r2800 Gen10 Chassis with 16 NVMe
IMPORTANT: The HPE Apollo r2800 Gen10 Chassis with 16 NVMe does not support servers using
the embedded SATA HPE Dynamic Smart Array S100i Controller or any type-p plug-in Smart Array
Controller with internal ports and cables.
One 1U server corresponds to a maximum of four NVMe drives.
• Server 1 corresponds to drive bays 1-1 through 1-4.
• Server 2 corresponds to drive bays 2-1 through 2-4.
• Server 3 corresponds to drive bays 3-1 through 3-4.
• Server 4 corresponds to drive bays 4-1 through 4-4.
One 2U server corresponds to a maximum of eight NVMe drives.
• Server 1 corresponds to drive bays 1-1 through 2-4.
• Server 3 corresponds to drive bays 3-1 through 4-4.
Apollo r2800 Gen10 Chassis (24 SFF with storage expander backplane)
The factory default configuration evenly distributes the 24 SFF drive bays among the server nodes in the
HPE Apollo r2800 Gen10 Chassis.
For detailed information and examples of drive bay mapping configuration changes in the HPE Apollo
r2800 Gen10 Chassis, see the iLO REST API documentation on GitHub, available from the Hewlett
Packard Enterprise website.
NOTE: Although the drive bays are internally mapped as described below, the Redfish response or the
RESTful Interface Tool continues to report a fixed six drives per node, regardless of whether a 1U or 2U
node is installed.
The HPE Apollo r2800 Gen10 Chassis, with its storage expander backplane, supports assigning drive bays
to specific server nodes. This feature provides secure, remote configuration flexibility through the iLO
Redfish interface. Deploying a drive bay mapping configuration requires an iLO user account with the
"Configure iLO Settings" privilege.
Drive bay mapping configuration changes can be made from any server node and take effect after all
server nodes in the chassis are powered off and the chassis firmware resets the storage expander
backplane. All nodes must remain powered off for at least 5 seconds after the configuration changes are
made. The server nodes can be restarted remotely through the iLO remote interface or locally by pressing
the power button on each node.
This feature requires the following minimum firmware versions:
•Apollo 2000 System Chassis firmware version 1.2.10 or later
•Storage Expander firmware version 1.0 or later
•iLO firmware version 1.20 or later
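As an illustration only, the general shape of such a Redfish interaction is sketched below in Python using
the requests library. The iLO address, credentials, resource URI, and payload property names are
placeholders (assumptions for this sketch); take the actual URIs and payload format from the drive bay
mapping examples in the iLO REST API documentation referenced above.

# Sketch of reading and updating a drive bay mapping through the iLO Redfish
# interface. The resource URI and payload property names are PLACEHOLDERS;
# use the exact values from the HPE iLO REST API drive bay mapping examples.
# Requires an iLO account with the "Configure iLO Settings" privilege.
import requests

ILO_HOST = "https://ilo-hostname"        # placeholder iLO address
AUTH = ("username", "password")          # account with Configure iLO Settings privilege
CHASSIS_URI = "/redfish/v1/Chassis/1"    # placeholder resource URI

session = requests.Session()
session.auth = AUTH
session.verify = False                   # only if the iLO uses a self-signed certificate

# Read the current chassis resource; the bay mapping appears under an HPE OEM section.
current = session.get(ILO_HOST + CHASSIS_URI).json()
print(current.get("Oem", {}).get("Hpe", {}))

# Apply a new mapping (property name and structure are placeholders).
payload = {"Oem": {"Hpe": {"BayMapping": "<see the HPE drive bay mapping examples>"}}}
response = session.patch(ILO_HOST + CHASSIS_URI, json=payload)
response.raise_for_status()

# The change takes effect only after every server node in the chassis has been
# powered off for at least 5 seconds, as described above.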
Six drive bays are allocated to each 1U node.
• Server 1 corresponds to drive bays 1 through 6
• Server 2 corresponds to drive bays 7 through 12
• Server 3 corresponds to drive bays 13 through 18
• Server 4 corresponds to drive bays 19 through 24
Twelve drive bays are allocated to each 2U node.
• Server 1 corresponds to drive bays 1 through 12
• Server 3 corresponds to drive bays 13 through 24
Hot-plug drive LED definitions
SmartDrive hot-plug drive LED definitions
1. Locate
   • Solid blue = The drive is being identified by a host application.
   • Flashing blue = The drive carrier firmware is being updated or requires an update.
2. Activity ring LED
   • Rotating green = Drive activity.
   • Off = No drive activity.
3. Do not remove LED
   • Solid white = Do not remove the drive. Removing the drive causes one or more of the logical drives
     to fail.
   • Off = Removing the drive does not cause a logical drive to fail.
4. Drive status LED
   • Solid green = The drive is a member of one or more logical drives.
   • Flashing green = The drive is rebuilding or performing a RAID migration, strip size migration,
     capacity expansion, or logical drive extension, or is erasing.
   • Flashing amber/green = The drive is a member of one or more logical drives and predicts the drive
     will fail.
   • Flashing amber = The drive is not configured and predicts the drive will fail.
   • Solid amber = The drive has failed.
   • Off = The drive is not configured by a RAID controller.
Low-profile LFF hot-plug drive LED definitions
Item  Definition
1     Fault/UID (amber/blue)
2     Online/Activity (green)
LED Activity
Online/Activity LED (green): On, off, or flashing
Fault/UID LED (amber/blue): Alternating amber and blue
Definition: One or more of the following conditions exist:
• The drive has failed.
• A predictive failure alert has been received for this drive.
• The drive has been selected by a management application.

Online/Activity LED (green): On, off, or flashing
Fault/UID LED (amber/blue): Solid blue
Definition: One or both of the following conditions exist:
• The drive is operating normally.
• The drive has been selected by a management application.

Online/Activity LED (green): On
Fault/UID LED (amber/blue): Flashing amber
Definition: A predictive failure alert has been received for this drive. Replace the drive as soon as
possible.

Online/Activity LED (green): On
Fault/UID LED (amber/blue): Off
Definition: The drive is online but is not currently active.

Online/Activity LED (green): 1 flash per second
Fault/UID LED (amber/blue): Flashing amber
Definition: Do not remove the drive. Removing the drive might terminate the current operation and cause
data loss. The drive is part of an array that is undergoing capacity expansion or stripe migration, but a
predictive failure alert has been received for this drive. To minimize the risk of data loss, do not remove
the drive until the expansion or migration is complete.

Online/Activity LED (green): 1 flash per second
Fault/UID LED (amber/blue): Off
Definition: Do not remove the drive. Removing the drive might terminate the current operation and cause
data loss. The drive is rebuilding, erasing, or is part of an array that is undergoing capacity expansion or
stripe migration.