This document is for the person who installs, administers, and troubleshoots compute modules
and storage systems. Hewlett Packard Enterprise assumes you are qualified in the servicing of
computer equipment and trained in recognizing hazards in products with hazardous energy levels.
Part Number: 813253-003
Published: July 2018
Edition: 3
Copyright 2016, 2018 Hewlett Packard Enterprise Development LP
Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett Packard
Enterprise products and services are set forth in the express warranty statements accompanying such
products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett
Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use,
or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software
Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under
vendor's standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard
Enterprise has no control over and is not responsible for information outside the Hewlett Packard Enterprise
website.
Acknowledgments
Intel®, Itanium®, Pentium®, Intel Inside®, and the Intel Inside logo are trademarks of Intel Corporation in the
United States and other countries.
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the
United States and/or other countries.
Adobe® and Acrobat® are trademarks of Adobe Systems Incorporated.
Java® and Oracle® are registered trademarks of Oracle and/or its affiliates.
UNIX® is a registered trademark of The Open Group.
5 External USB 3.0 connector (behind the serial label pull tab)
6 Serial label pull tab
7 Compute module handle release latch
8 Compute module handle
Serial label pull tab information
The serial label pull tab is on the front panel of the compute module. To locate the serial label pull tab, see
Front panel components. The serial label pull tab provides the following information:
•Product serial number
•HPE iLO information
•QR code to mobile documentation
Front panel LEDs and buttons
Item  Description  Status
1  UID LED
   Solid blue = Activated
   Flashing blue (1 Hz, or one cycle per second) = Remote management or firmware upgrade in progress
   Off = Deactivated
2  Health status LED
   Solid green = Normal
   Flashing green (1 Hz) = iLO is rebooting
   Flashing amber = System degraded
   Flashing red (1 Hz) = System critical
3  Mezzanine NIC status LED
   Solid green = Link on any Mezzanine NIC
   Flashing green = Activity on any Mezzanine NIC
   Off = No link or activity on any Mezzanine NIC
4  Power On/Standby button and system power LED
   Solid green = System on
   Flashing green (1 Hz) = Performing power on sequence
   Solid amber = System in standby
   Off = No power present (1)
(1) If all other LEDs are off, then no power is present to the compute module. (For example, facility power is not present, the power cord is not attached, no power supplies are installed, a power supply failure has occurred, or the compute module is not properly seated.) If the health LED is flashing green while the system power LED is off, either the Power On/Standby button service is initializing or an iLO reboot is in progress.
Drive numbering
Depending on the configuration, the drive bay numbering on this compute module will vary. Supported
configurations on this compute module are shown in the following table.
Compute module model  Configuration  Drive support
SAS model  Standard backplane with embedded controller or supported controller option  Four SFF hot-plug SAS drive bays with support for up to 4 SFF HDDs or SSDs
SATA model*  Chipset SATA with embedded controller or supported controller option  Four SFF hot-plug SATA drive bays with support for the following:
   •Up to 4 SFF SATA HDDs or SSDs
   •Up to 8 uFF SATA HDDs (with the SFF flash adapter for each 2 uFF drives)
The driveless model is not shown, as it does not have drive bays and does not support any drives.
*SATA drives are supported only with the HPE Synergy 480 Gen9 Compute Module backplane.
Hot-plug drive LED definitions
[Figure callouts: Drive box 2, drive bay 1; Drive box 2, drive bay 2]
Item  LED  Status  Definition
1  Locate
   Solid blue = The drive is being identified by a host application.
   Flashing blue = The drive carrier firmware is being updated or requires an update.
2  Activity ring
   Rotating green = Drive activity
   Off = No drive activity
3  Do not remove
   Solid white = Do not remove the drive. Removing the drive causes one or more of the logical drives to fail.
   Off = Removing the drive does not cause a logical drive to fail.
4  Drive status
   Solid green = The drive is a member of one or more logical drives.
   Flashing green = The drive is doing one of the following:
      •Rebuilding
      •Performing a RAID migration
      •Performing a stripe size migration
      •Performing a capacity expansion
      •Performing a logical drive extension
      •Erasing
      •Spare part activation
   Flashing amber/green = The drive is a member of one or more logical drives and predicts the drive will fail.
   Flashing amber = The drive is not configured and predicts the drive will fail.
   Solid amber = The drive has failed.
   Off = The drive is not configured by a RAID controller or a spare drive.
NVMe SSD LED definitions
The NVMe SSD is a PCIe bus device. A device attached to a PCIe bus cannot be removed until the device and bus have completed and ceased signal/traffic flow.
CAUTION: Do not remove an NVMe SSD from the drive bay while the Do not remove LED is flashing.
The Do not remove LED flashes to indicate that the device is still in use. Removing the NVMe SSD
before the device has completed and ceased signal/traffic flow can cause loss of data.
Item  LED  Status  Definition
1  Locate
   Solid blue = The drive is being identified by a host application.
   Flashing blue = The drive carrier firmware is being updated or requires an update.
2  Activity ring
   Rotating green = Drive activity
   Off = No drive activity
3  Drive status
   Solid green = The drive is a member of one or more logical drives.
   Flashing green = The drive is doing one of the following:
      •Rebuilding
      •Performing a RAID migration
      •Performing a stripe size migration
      •Performing a capacity expansion
      •Performing a logical drive extension
      •Erasing
   Flashing amber/green = The drive is a member of one or more logical drives and predicts the drive will fail.
   Flashing amber = The drive is not configured and predicts the drive will fail.
   Solid amber = The drive has failed.
   Off = The drive is not configured by a RAID controller.
4  Do not remove
   Solid white = Do not remove the drive. The drive must be ejected from the PCIe bus prior to removal.
   Flashing white = The drive ejection request is pending.
   Off = The drive has been ejected.
5  Power
   Solid green = Do not remove the drive. The drive must be ejected from the PCIe bus prior to removal.
   Flashing green = The drive ejection request is pending.
   Off = The drive has been ejected.
SFF flash adapter components and LED definitions
Item  Component  Description
1  Locate
   •Off—Normal
   •Solid blue—The drive is being identified by a host application.
   •Flashing blue—The drive firmware is being updated or requires an update.
2  uFF drive ejection latch  Removes the uFF drive when released.
3  Do not remove LED
   •Off—OK to remove the drive. Removing the drive does not cause a logical drive to fail.
   •Solid white—Do not remove the drive. Removing the drive causes one or more of the logical drives to fail.
4  Drive status LED
   •Off—The drive is not configured by a RAID controller or a spare drive.
   •Solid green—The drive is a member of one or more logical drives.
   •Flashing green (4 Hz)—The drive is operating normally and has activity.
   •Flashing green (1 Hz)—The drive is rebuilding, erasing, or performing a RAID migration, stripe size migration, capacity expansion, logical drive extension, or spare activation.
   •Flashing amber/green (1 Hz)—The drive is a member of one or more logical drives that predicts the drive will fail.
   •Solid amber—The drive has failed.
   •Flashing amber (1 Hz)—The drive is not configured and predicts the drive will fail.
5  Adapter ejection release latch and handle  Removes the SFF flash adapter when released.
Page 13
Front panel/drive cage numbering
Item  Drive cage number
1  1, 2
2  3, 4
System board components
Item  Description
1  Internal USB 3.0 connector
2  System board handle
3  System battery
4  Drive backplane connector 1
5  Processor 1 DIMM slots (12)
6  Processor 3 DIMM slots (12)
7  Mezzanine connectors (M1–M6)
8  Management/power connector 1
9  Management/power connector 2
10  HPE Smart Storage Battery connector
11  Processor 4 DIMM slots (12)
12  Processor 2 DIMM slots (12)
13  System maintenance switch
14  Drive backplane connector 2
15  TPM connector
16  microSD card slot
17  External USB connector
System maintenance switch
Position  Default  Function
S1 (1)  Off  Off = HPE iLO security is enabled. On = HPE iLO security is disabled.
S2  Off  Off = System configuration can be changed. On = System configuration is locked.
S3  Off  Reserved
S4  Off  Reserved
S5 (1)  Off  Off = Power-on password is enabled. On = Power-on password is disabled.
S6 (1, 2)  Off  Off = No function. On = Restore default manufacturing settings.
S7  Off  Off = Set default boot mode to UEFI. On = Set default boot mode to legacy.
S8  —  Reserved
S9  —  Reserved
S10  —  Reserved
S11  —  Reserved
S12  —  Reserved
(1) You can access the redundant ROM by setting S1, S5, and S6 to On.
(2) When the system maintenance switch position 6 is set to the On position, the system is prepared to restore all configuration settings to their manufacturing defaults.
When the system maintenance switch position 6 is set to the On position and Secure Boot is enabled, some configurations cannot be restored. For more information, see Secure Boot configuration.
IMPORTANT: Before using the S7 switch to change to Legacy BIOS Boot Mode, be sure the HPE
Dynamic Smart Array B140i Controller is disabled. Do not use the B140i controller when the compute
module is in Legacy BIOS Boot Mode.
Mezzanine connector definitions
Item  Connector identification  Supported fabric card types  Supported ICM bays
1  Mezzanine connector 1 (M1)  Type C and Type D  ICM 1 and 4
2  Mezzanine connector 2 (M2) (1)  Type C and Type D  ICM 2 and 5
3  Mezzanine connector 3 (M3)  Type C only  ICM 3 and 6
4  Mezzanine connector 4 (M4)  Type C and Type D  ICM 1 and 4
5  Mezzanine connector 5 (M5) (2)  Type C and Type D  ICM 2 and 5
6  Mezzanine connector 6 (M6)  Type C  ICM 3 and 6
(1) When installing a mezzanine option on mezzanine connector 2, processor 3 must be installed.
(2) When installing a mezzanine option on mezzanine connector 5, processor 4 must be installed.
DIMM slot locations
DIMM slots are numbered sequentially (1 through 12) for each processor. The supported AMP modes use the
alpha assignments for population order, and the slot numbers designate the DIMM slot ID for spare
replacement.
The colored slots indicate the slot order within each channel:
•White — First slot of a channel
•Black — Second slot of a channel
•Blue — Third slot of a channel
The arrow points to the front of the compute module.
Component and LED identification for HPE Synergy hardware
For more information about component and LED identification for HPE Synergy components, see the product-specific maintenance and service guide or the HPE Synergy 12000 Frame Setup and Installation Guide in the Hewlett Packard Enterprise Information Library.
Operations
Powering up the compute module
To power up the compute module, press the Power On/Standby button after the power button LED has turned
amber.
Powering down the compute module
Before powering down the compute module for any upgrade or maintenance procedures, perform a backup of
the system and all data. Then, shut down, as appropriate, applications and operating systems. A successful
shutdown is indicated by the system power LED displaying amber.
IMPORTANT: Always attempt a graceful shutdown before forcing a nongraceful shutdown. Application
data can be lost when performing a nongraceful shutdown of applications and the OS.
Before proceeding, verify the following:
•The compute module is in standby mode by observing that the system power LED is amber.
•The UID LED is not flashing blue.
NOTE:
◦When the compute module is in standby mode, auxiliary power is still being provided to the system.
◦If the UID LED is flashing blue, a remote session is in progress.
To power down the compute module, use one of the following methods:
•To perform a graceful shutdown of applications and the OS when powering down the compute module to
standby mode, do one of the following:
◦Press and release the Power On/Standby button.
◦Select the Momentary press power off selection in HPE OneView.
◦Select the Momentary press virtual power button selection in HPE iLO.
•If a graceful shutdown fails to power down the compute module to standby mode when an application or
OS stops responding, force a nongraceful shutdown of applications and the OS. Do one of the following:
◦Press and hold the Power On/Standby button for more than four seconds.
◦Select the Press and hold power off selection in HPE OneView.
◦Select the Press and hold virtual power button selection in HPE iLO.
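These power actions can also be scripted instead of selected in the UI. The sketch below (not taken from this guide) maps the two button behaviors onto standard DMTF Redfish reset types exposed by iLO; the host name and the `Systems/1` resource path are placeholder assumptions for a typical single-system iLO.

```python
import json

# Assumed mapping of the front panel actions above onto standard Redfish
# ResetType values: "GracefulShutdown" corresponds to a momentary press,
# "ForceOff" to pressing and holding for more than four seconds.
RESET_TYPES = {
    "momentary_press": "GracefulShutdown",
    "press_and_hold": "ForceOff",
}

def reset_request(ilo_host, action):
    """Build the URL and JSON body for a Redfish ComputerSystem.Reset action."""
    url = f"https://{ilo_host}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset"
    body = json.dumps({"ResetType": RESET_TYPES[action]})
    return url, body

if __name__ == "__main__":
    # Sending the request requires an authenticated HTTPS session, for example:
    #   requests.post(url, data=body, auth=(user, password), verify=False)
    url, body = reset_request("ilo.example.net", "momentary_press")
    print(url, body)
```

As with the front panel button, prefer the graceful variant first and fall back to the forced variant only when the OS stops responding.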
Removing the compute module
Procedure
1. Identify the proper compute module.
2. Power down the compute module.
3. Remove the compute module.
4. Place the compute module on a flat, level work surface.
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal
system components to cool before touching them.
CAUTION: To prevent damage to electrical components, properly ground the compute module
before beginning any installation procedure. Improper grounding can cause ESD.
Removing and replacing an access panel
Procedure
1. Power down the compute module.
2. Remove the compute module.
3. Press the access panel release button.
4. Slide the access panel towards the rear of the compute module, and then lift to remove the panel.
To replace the component, reverse the removal procedure.
Installing the access panel
Procedure
1. Place the access panel on top of the compute module.
2. Slide the access panel forward until it clicks into place.
Removing the DIMM baffles
Procedure
1. Power down the compute module.
2. Remove the compute module.
3. Place the compute module on a flat, level work surface.
4. Remove the access panel.
5. When removing the center baffle, disconnect the HPE Smart Storage Battery cable if installed. For more information, see "Smart Storage Battery cabling."
6. Remove one or more DIMM baffles.
Installing the DIMM baffles
Procedure
1. Install the DIMM baffles.
2. Connect the HPE Smart Storage Battery cable, if one is installed. For more information, see "Smart Storage Battery cabling."
3. Install the access panel.
4. Install the compute module.
Removing the drive blank
Remove the component as indicated.
CAUTION: To prevent improper cooling and thermal damage, do not operate the compute module
unless all bays are populated with either a component or a blank.
Removing the front panel/drive cage assembly
Procedure
1. Power down the compute module.
2. Remove the compute module.
3. Place the compute module on a flat, level work surface.
4. Remove the access panel.
5. Remove all drives.
6. Remove all drive blanks.
7. Remove the front panel/drive cage assembly.
Installing the front panel/drive cage assembly
Procedure
1. Install the front panel/drive cage assembly.
2. Install all drives.
3. Install drive blanks in unpopulated drive bays.
4. Install the access panel.
5. Install the compute module.
Setup
Installation overview
Use this section and the following procedure to install the HPE Synergy 660 Gen9 Compute Module in a frame for the first time.
Procedure
1. Install supported options for the compute module.
2. Install the compute module.
3. Complete the compute module configuration.
Installing the compute module options
Before installing and initializing the compute module, install any compute module options, such as an
additional processor, drive, or a mezzanine card.
Mezzanine options are classified as Type C and Type D mezzanine cards. The type of mezzanine card
determines where the card can be installed in the compute module. Some mezzanine cards require that an
interconnect module is installed in the associated ICM bay in the rear of the frame. Be sure to review the
mezzanine card mapping information in the HPE Synergy Configuration and Compatibility Guide on the
Hewlett Packard Enterprise website (http://www.hpe.com/info/synergy-docs).
For a list of supported options, see the product QuickSpecs on the Hewlett Packard Enterprise website
(http://www.hpe.com/info/qs).
For compute module options installation information, see Installing hardware options.
Installing the compute module
Procedure
1. Verify that the device bay is configured for a full-height compute module. For more information, see the setup and installation guide for the compute module on the Hewlett Packard Enterprise website.
2. Remove the compute module end cap.
3. Prepare the compute module for installation by opening the compute module handle.
4. Install the compute module. Press the compute module handle near each release button to completely
close the handle.
5. Review the compute module front panel LEDs to determine the compute module status. For more information on the compute module LEDs, see "Component identification."
Completing the configuration
When a compute module is added to an existing configuration, HPE OneView automatically detects the new
hardware. HPE OneView is hosted on the HPE Synergy Composer appliance installed in the HPE Synergy
12000 Frame. You can use HPE OneView to comprehensively manage an HPE Synergy system throughout
the hardware life cycle.
To configure the compute module for the first time, log in to HPE OneView from the frame using your
assigned user name and password. For more information, see the HPE Synergy 12000 Frame Setup and Installation Guide on the Hewlett Packard Enterprise website (http://www.hpe.com/info/synergy-docs). For
more information about HPE OneView, see the HPE OneView User Guide on the Hewlett Packard Enterprise
website (http://www.hpe.com/info/synergy-docs).
Installing hardware options
Introduction
If more than one option is being installed, read the installation instructions for all the hardware options and
identify similar steps to streamline the installation process.
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal
system components to cool before touching them.
CAUTION: To prevent damage to electrical components, properly ground the compute module before
beginning any installation procedure. Improper grounding can cause electrostatic discharge.
Installing SAS, SATA, or solid state drives
Procedure
1. Remove the drive blank.
2. Prepare the drive.
3. Install the drive.
4. Determine the status of the drive from the drive LED definitions.
Installing the SFF flash adapter
CAUTION: To prevent improper cooling and thermal damage, do not operate the compute module or
the enclosure unless all drive and device bays are populated with either a component or a blank.
Prerequisites
The SFF flash adapter option is supported when any of the following components are installed:
2. Install the uFF drives in the SFF flash adapter.
3. Install the SFF flash adapter by pushing firmly near the left-side adapter ejection handle until the latching
spring engages in the drive bay.
Installing the controller
To determine what controllers are supported, see the product QuickSpecs on the Hewlett Packard Enterprise
website (http://www.hpe.com/info/qs).
NOTE: Before installing or removing a controller or adapter, Hewlett Packard Enterprise recommends
performing a complete backup of all compute module data.
Additionally, in systems that use external data storage, be sure that the compute module is the first unit to be
powered down and the last to be powered back up. Taking this precaution ensures that the system does not
falsely mark the drives as failed when the compute module is powered up.
Procedure
1. Power down the compute module.
2. Remove the compute module.
3. Remove the access panel.
4. Remove the front panel/drive cage assembly.
5. Install the controller.
6. Install the front panel/drive cage assembly.
7. Install the access panel.
8. Install the compute module.
9. Power up the compute module.
Installing the HPE Smart Storage Battery option
Procedure
1. Power down the compute module.
2. Remove the compute module.
3. Place the compute module on a flat, level work surface.
4. Remove the access panel.
5. Remove the center DIMM baffle.
6. Install the Smart Storage Battery.
7. Install the DIMM baffle.
8. Connect the Smart Storage Battery to the system board. To locate the Smart Storage Battery connector, see "System board components."
9.Install the access panel.
10. Install the compute module.
11. Power up the compute module.
Installing the mezzanine card options
IMPORTANT: For more information about the association between the mezzanine bay and the interconnect bays, see the HPE Synergy 12000 Frame Setup and Installation Guide in the Hewlett
Packard Enterprise Information Library. Where you install the mezzanine card determines where you
need to install the interconnect modules.
Procedure
1. Power down the compute module.
2. Remove the compute module.
3. Place the compute module on a flat, level work surface.
4. Remove the access panel.
5. Locate the appropriate mezzanine connector. To locate the connector, see "System board components."
6. Remove the mezzanine connector cover, if installed.
7. Install the mezzanine card. Press firmly on the PRESS HERE label above the mezzanine connector to seat the card.
When installing an NVIDIA Tesla M6 GPU FIO Adapter for HPE Synergy 660 Gen9 Compute Module, an additional mounting bracket is required.
IMPORTANT: When an NVIDIA Tesla M6 GPU FIO Adapter for HPE Synergy 660 Gen9 Compute
Module is installed in mezzanine connector 1, mezzanine connector 2 is not available for additional
mezzanine cards.
8. If you are installing an HPE Smart Array P542D Controller with an NVMe-enabled backplane and SAS hard drives, you must connect a cable between the mezzanine card and the NVMe-enabled backplane.
Installing hardware options31
Page 32
9. Install the access panel.
10. Install the compute module.
11. Power up the compute module.
Memory options
IMPORTANT: This compute module does not support mixing LRDIMMs and RDIMMs. Attempting to mix any combination of these DIMMs can cause the compute module to halt during BIOS initialization.
Depending on the processor model, the number of DIMMs installed, and whether LRDIMMs or RDIMMs are
installed, the memory clock speed can be reduced to 1600 MT/s.
DIMM type  DIMM rank  1 DIMM per channel (MT/s)  2 DIMMs per channel (MT/s)  3 DIMMs per channel (MT/s)
RDIMM  Single-rank (8 GB)  2400  2133  1866
RDIMM  Dual-rank (16 GB)  2400  2133  1866
LRDIMM  Quad-rank (32 GB)  2400  2400  2133
SmartMemory
SmartMemory authenticates and unlocks certain features available only on Qualified memory and verifies
whether installed memory has passed Hewlett Packard Enterprise qualification and test processes. Qualified
memory is performance-tuned for ProLiant and BladeSystem servers and provides future enhanced support
through Active Health and manageability software.
Memory subsystem architecture
The memory subsystem in this compute module is divided into channels. Each processor supports four
channels, and each channel supports three DIMM slots.
Memory subsystem channel  Population order  Slot number (Processor 2)  Slot number (Processor 1)
1  A, E, I  12, 11, 10  1, 2, 3
2  B, F, J  9, 8, 7  4, 5, 6
3  C, G, K  1, 2, 3  12, 11, 10
4  D, H, L  4, 5, 6  9, 8, 7
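As a quick cross-check of the channel mapping, the table can be expressed as a small lookup. This is an illustrative sketch (not part of this guide); slot numbers are taken from the processor 1 column.

```python
# Population letters grouped by memory subsystem channel, and the matching
# processor 1 slot numbers, transcribed from the channel table above.
LETTERS_BY_CHANNEL = {1: "AEI", 2: "BFJ", 3: "CGK", 4: "DHL"}
P1_SLOTS_BY_CHANNEL = {1: (1, 2, 3), 2: (4, 5, 6), 3: (12, 11, 10), 4: (9, 8, 7)}

def locate(letter):
    """Return (channel, processor 1 slot number) for a population letter."""
    for channel, letters in LETTERS_BY_CHANNEL.items():
        if letter in letters:
            return channel, P1_SLOTS_BY_CHANNEL[channel][letters.index(letter)]
    raise ValueError(f"unknown population letter: {letter}")

print(locate("F"))  # letter F is the second slot of channel 2 -> (2, 5)
```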
Single-, dual-, and quad-rank DIMMs
To understand and configure memory protection modes properly, an understanding of single-, dual-, and
quad-rank DIMMs is helpful. Some DIMM configuration requirements are based on these classifications.
A single-rank DIMM has one set of memory chips that is accessed while writing to or reading from the
memory. A dual-rank DIMM is similar to having two single-rank DIMMs on the same module, with only one
rank accessible at a time. A quad-rank DIMM is, effectively, two dual-rank DIMMs on the same module. Only
one rank is accessible at a time. The server memory control subsystem selects the proper rank within the
DIMM when writing to or reading from the DIMM.
Dual- and quad-rank DIMMs provide the greatest capacity with the existing memory technology. For example,
if current DRAM technology supports 8-GB single-rank DIMMs, a dual-rank DIMM would be 16 GB, and a
quad-rank DIMM would be 32 GB.
LRDIMMs are labeled as quad-rank DIMMs. There are four ranks of DRAM on the DIMM, but the LRDIMM
buffer creates an abstraction that allows the DIMM to appear as a dual-rank DIMM to the system. The
LRDIMM buffer isolates the electrical loading of the DRAM from the system to allow for faster operation. This
allows higher memory operating speed compared to quad-rank RDIMMs.
DIMM identification
To determine DIMM characteristics, see the label attached to the DIMM and refer to the following illustration
and table.
Item  Description  Definition
1  Capacity  8 GB, 16 GB, 32 GB, 64 GB, or 128 GB
2  Rank  1R = Single-rank, 2R = Dual-rank, 4R = Quad-rank, 8R = Octal-rank
3  Data width on DRAM  x4 = 4-bit, x8 = 8-bit
4  Memory generation  DDR4
5  Maximum memory speed  2133 MT/s or 2400 MT/s
6  CAS latency  P = 15, T = 17
7  DIMM type  R = RDIMM (registered), L = LRDIMM (load reduced)
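The fields above can be combined into a small label decoder. The label layout used here ("16GB 2Rx4 PC4-2400T-R") is an assumed example for illustration; only the field meanings come from the table.

```python
import re

# Assumed DDR4 DIMM label layout: "<capacity>GB <ranks>Rx<width> PC4-<speed><CAS>-<type>"
LABEL_RE = re.compile(
    r"(?P<capacity>\d+)GB (?P<ranks>\d)Rx(?P<width>\d) "
    r"PC4-(?P<speed>\d+)(?P<cas>[PT])-(?P<type>[RL])"
)

# Field meanings transcribed from the DIMM identification table above.
RANKS = {"1": "Single-rank", "2": "Dual-rank", "4": "Quad-rank", "8": "Octal-rank"}
CAS_LATENCY = {"P": 15, "T": 17}
DIMM_TYPES = {"R": "RDIMM", "L": "LRDIMM"}

def decode(label):
    """Decode a DIMM label string into its table fields."""
    m = LABEL_RE.fullmatch(label)
    if m is None:
        raise ValueError(f"unrecognized label: {label}")
    return {
        "capacity_gb": int(m["capacity"]),
        "rank": RANKS[m["ranks"]],
        "dram_width_bits": int(m["width"]),
        "speed_mt_s": int(m["speed"]),
        "cas_latency": CAS_LATENCY[m["cas"]],
        "dimm_type": DIMM_TYPES[m["type"]],
    }

print(decode("16GB 2Rx4 PC4-2400T-R"))
```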
Memory configurations
To optimize compute module availability, the compute module supports the following AMP modes:
•Advanced ECC—Provides up to 4-bit error correction and enhanced performance over Lockstep mode.
This mode is the default option for this compute module.
•Online spare memory—Provides protection against failing or degraded DIMMs. Certain memory is
reserved as spare, and automatic failover to spare memory occurs when the system detects a DIMM that
is degrading. This allows DIMMs that have a higher probability of receiving an uncorrectable memory error
(which would result in system downtime) to be removed from operation.
•Mirrored memory—Provides protection against uncorrected memory errors by writing data to both memory channels. For more information, see "Mirrored memory configuration."
Advanced Memory Protection options are configured in the BIOS/Platform Configuration (RBSU). If the
requested AMP mode is not supported by the installed DIMM configuration, the compute module boots in
Advanced ECC mode. For more information, see the HPE UEFI System Utilities User Guide for ProLiant
Gen9 Servers on the Hewlett Packard Enterprise website.
Maximum capacity
DIMM type  DIMM rank  One processor (GB)  Two processors (GB)  Three processors (GB)  Four processors (GB)
RDIMM  Single (8 GB)  96  192  288  384
RDIMM  Single (16 GB)  192  384  576  768
RDIMM  Dual (16 GB)  192  384  576  768
RDIMM  Dual (32 GB)  384  768  1152  1536
LRDIMM  Dual (32 GB)  384  768  1152  1536
LRDIMM  Quad (64 GB)  768  1536  2304  3072
LRDIMM  Octal (128 GB)  1536  3072  4608  6144
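Each capacity value follows from the 12 DIMM slots per processor (see "System board components") multiplied by the DIMM size and the processor count. A quick arithmetic sketch (not part of this guide):

```python
SLOTS_PER_PROCESSOR = 12  # each processor has 12 DIMM slots

def max_capacity_gb(dimm_size_gb, processors):
    """Maximum memory with every slot of every installed processor populated."""
    return SLOTS_PER_PROCESSOR * dimm_size_gb * processors

# Dual-rank 32 GB DIMMs across one to four processors:
print([max_capacity_gb(32, n) for n in (1, 2, 3, 4)])  # [384, 768, 1152, 1536]
```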
Advanced ECC memory configuration
Advanced ECC memory is the default memory protection mode for this compute module. Standard ECC can
correct single-bit memory errors and detect multi-bit memory errors. When multi-bit errors are detected using
Standard ECC, the error is signaled to the compute module and causes the compute module to halt.
Advanced ECC protects the compute module against some multi-bit memory errors. Advanced ECC can
correct both single-bit memory errors and 4-bit memory errors if all failed bits are on the same DRAM device
on the DIMM.
Advanced ECC provides additional protection over Standard ECC because it is possible to correct certain
memory errors that would otherwise be uncorrected and result in a compute module failure. Using HPE
Advanced Memory Error Detection technology, the compute module provides notification when a DIMM is
degrading and has a higher probability of an uncorrectable memory error.
Online Spare memory configuration
Online spare memory provides protection against degraded DIMMs by reducing the likelihood of uncorrected
memory errors. This protection is available without any operating system support.
Online spare memory protection dedicates one rank of each memory channel for use as spare memory. The
remaining ranks are available for OS and application use. If correctable memory errors occur at a rate higher
than a specific threshold on any of the non-spare ranks, the compute module automatically copies the
memory contents of the degraded rank to the online spare rank. The compute module then deactivates the
failing rank and automatically switches over to the online spare rank.
Mirrored memory configuration
Mirroring provides protection against uncorrected memory errors that would otherwise result in compute
module downtime. Mirroring is performed at the channel level.
Data is written to both memory channels. Data is read from one of the two memory channels. If an
uncorrectable error is detected in the active memory channel, data is retrieved from the mirror channel. This
channel becomes the new active channel, and the system disables the channel with the failed DIMM.
General DIMM slot population guidelines
Observe the following guidelines for all AMP modes:
•Install DIMMs only if the corresponding processor is installed.
•When two processors are installed, balance the DIMMs across the two processors.
•White DIMM slots denote the first slot of a channel (Ch 1-A, Ch 2-B, Ch 3-C, Ch 4-D).
•Do not mix RDIMMs and LRDIMMs.
•When one processor is installed, install DIMMs in sequential alphabetic order: A, B, C, D, E, F, and so
forth.
•When two processors are installed, install the DIMMs in sequential alphabetic order balanced between the
two processors: P1-A, P2-A, P1-B, P2-B, P1-C, P2-C, and so forth.
•When single-rank, dual-rank, and quad-rank DIMMs are populated for two DIMMs per channel or three
DIMMs per channel, always populate the higher number rank DIMM first (starting from the farthest slot).
For example, first quad-rank DIMM, then dual-rank DIMM, and then lastly single-rank DIMM.
•DIMMs should be populated starting farthest from the processor on each channel.
•Populate the DIMM slots in each channel in this order: white (first), black (second), and then blue (third).
•For DIMM spare replacement, install the DIMMs per slot number as instructed by the system software.
For more information about compute module memory, see the Hewlett Packard Enterprise website.
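The alphabetical, processor-balanced ordering described above can be sketched in code. This is an illustrative sketch, not part of the guide; it interleaves the population letters across installed processors (P1-A, P2-A, P1-B, ...).

```python
from itertools import product
import string

def population_order(num_processors, num_dimms):
    """List DIMM positions in install order, balanced across processors."""
    letters = string.ascii_uppercase[:12]  # population letters A through L
    order = [f"P{p}-{letter}"
             for letter, p in product(letters, range(1, num_processors + 1))]
    return order[:num_dimms]

# Six DIMMs on a two-processor configuration:
print(population_order(2, 6))  # ['P1-A', 'P2-A', 'P1-B', 'P2-B', 'P1-C', 'P2-C']
```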
DIMM speeds are supported as indicated in the following tables.
Intel Xeon E5-4600 v4 processor installed
Populated slots (per channel)  DIMM rank  DIMM type  Speeds supported (MT/s)
1  Single, dual, or quad  RDIMM or LRDIMM  2400
2  Single, dual  RDIMM  2133
2  Dual, quad, or octal  LRDIMM  2400
3  Single or dual  RDIMM  1866
3  Dual  LRDIMM  2133
Depending on the processor model, the number of DIMMs installed, and whether LRDIMMs or RDIMMs are
installed, the memory clock speed can be reduced to 1866 MT/s.
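The speed rules above reduce to a lookup keyed by DIMM type and populated slots per channel. A sketch transcribing the Intel Xeon E5-4600 v4 table (not part of this guide):

```python
# Supported memory clock (MT/s) keyed by (DIMM type, populated slots per channel),
# transcribed from the E5-4600 v4 speed table above.
SPEED_MT_S = {
    ("RDIMM", 1): 2400,
    ("RDIMM", 2): 2133,
    ("RDIMM", 3): 1866,
    ("LRDIMM", 1): 2400,
    ("LRDIMM", 2): 2400,
    ("LRDIMM", 3): 2133,
}

# Three RDIMMs per channel run at the lowest supported clock:
print(SPEED_MT_S[("RDIMM", 3)])  # 1866
```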
Advanced ECC population guidelines
For Advanced ECC mode configurations:
•Observe the general DIMM slot population guidelines.
•DIMMs may be installed individually.
Online spare population guidelines
For Online Spare memory mode configurations, observe the following guidelines:
•Observe the general DIMM slot population guidelines.
•Each channel must have a valid online spare configuration.
•Each channel can have a different valid online spare configuration.
•Each populated channel must have a spare rank. A single dual-rank DIMM is not a valid configuration.
Mirrored Memory population guidelines
For Mirrored Memory mode configurations, observe the following guidelines:
•Observe the general DIMM slot population guidelines.
•Always install DIMMs in all channels for each installed processor.
•DIMMs installed on all channels of an installed processor must be identical.
•In multi-processor configurations, each processor must have a valid Mirrored Memory configuration.
•In multi-processor configurations, each processor may have a different valid Mirrored Memory
configuration.
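The Mirrored Memory rules above (all channels populated, identical DIMMs on every channel of a processor) can be checked programmatically. The following is a hypothetical validator for a single processor's channels, not an HPE tool:

```python
def valid_mirrored_config(channels):
    """Check one processor's channels against the Mirrored Memory guidelines.

    `channels` maps a channel name to the list of DIMM part numbers
    installed on that channel.
    """
    # Every channel of the installed processor must be populated.
    if not channels or any(not dimms for dimms in channels.values()):
        return False
    # DIMMs installed on all channels must be identical.
    populations = {tuple(sorted(dimms)) for dimms in channels.values()}
    return len(populations) == 1
```

In a multi-processor configuration, run the check once per processor; per the guidelines, each processor may pass with a different (but individually valid) configuration.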
Population order
For memory configurations with a single processor or multiple processors, DIMMs must be populated
sequentially in alphabetical order (A through L).
After installing the DIMMs, use the BIOS/Platform Configuration (RBSU) menu in the UEFI System Utilities to
configure supported AMP modes.
Preparing to install a DIMM
CAUTION: To avoid damage to the hard drives, memory, and other system components, the air baffle,
drive blanks, and access panel must be installed when the compute module is powered up.
CAUTION: To avoid damage to the hard drives, memory, and other system components, be sure to
install the correct DIMM baffles for your compute module model.
Procedure
1. Review the following population guidelines:
•Install DIMMs only if the corresponding processor is installed.
•When two processors are installed, balance the total capacity of the DIMMs across the two processors.
•Do not mix RDIMMs and LRDIMMs in the same system.
•Populate DIMMs in the following order:
2. When one processor is installed, install DIMMs in sequential alphabetic order: A, B, C, D, E, F, and so
forth. When two processors are installed, install DIMMs in sequential alphabetic order: P1-A, P2-A, P1-B,
P2-B, P1-C, P2-C, and so forth.
3. For DIMM spare replacement, install the DIMMs per slot number as instructed by the system software.
Installing a DIMM
CAUTION: To avoid damage to the hard drives, memory, and other system components, the air baffle,
drive blanks, and access panel must be installed when the compute module is powered up.
CAUTION: To avoid damage to the hard drives, memory, and other system components, be sure to
install the correct DIMM baffles for your compute module model.
Procedure
1. Power down the compute module.
2. Remove the compute module.
3. Place the compute module on a flat, level work surface.
4. Remove the access panel.
5. Remove the DIMM baffle.
6. Install the DIMM.
7. Install all DIMM baffles.
8. Install the access panel.
9. Install the compute module.
10. Power up the compute module.
To configure the memory mode, use UEFI System Utilities.
Installing the processor
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal
system components to cool before touching them.
CAUTION: To prevent possible compute module malfunction and damage to the equipment,
multiprocessor configurations must contain processors with the same part number.
CAUTION: The heatsink thermal interface media is not reusable and must be replaced if the heatsink is
removed from the processor after it has been installed.
CAUTION: To prevent possible compute module overheating, always populate processor socket 2 with
a processor and a heatsink or a processor socket cover and a heatsink blank.
CAUTION: To prevent damage to electrical components, properly ground the compute module before
beginning any installation procedure. Improper grounding can cause ESD.
IMPORTANT: Processor socket 1 must be populated at all times or the compute module does not
function.
Procedure
1. Update the system ROM.
Locate and download the latest ROM version from the Hewlett Packard Enterprise website (http://
www.hpe.com/support). Follow the instructions on the website to update the system ROM.
2. Power down the compute module.
3. Remove the compute module.
4. Remove the access panel.
5. Remove the DIMM baffle.
6. Remove the heatsink blank. Retain the heatsink blank for future use.
7. Open the processor locking levers in the order indicated, and then open the processor retaining bracket.
8. Remove the clear processor socket cover. Retain the processor socket cover for future use.
CAUTION: THE PINS ON THE SYSTEM BOARD ARE VERY FRAGILE AND EASILY
DAMAGED. To avoid damage to the system board, do not touch the processor or the processor
socket contacts.
9. Install the processor. Verify that the processor is fully seated in the processor retaining bracket by
visually inspecting the processor installation guides on either side of the processor.
10. Close the processor retaining bracket. When the processor is installed properly inside the processor
retaining bracket, the processor retaining bracket clears the flange on the front of the socket.
CAUTION: Do not press down on the processor. Pressing down on the processor might damage
the processor socket and the system board. Press only in the area indicated on the processor
retaining bracket.
CAUTION: Close and hold down the processor retaining bracket while closing the processor locking
levers. The levers should close without resistance. Forcing the levers closed can damage the
processor and socket, requiring system board replacement.
11. Press and hold the processor retaining bracket in place, and then close each processor locking lever.
Press only in the area indicated on the processor retaining bracket.
12. Remove the thermal interface protective cover from the heatsink.
CAUTION: To avoid damage to the system board, processor socket, and screws, do not
overtighten the heatsink screws.
13. Install the heatsink. A T-15 screwdriver is required to install the heatsink.
14. Install the DIMM baffles.
15. Install the access panel.
16. Install the compute module.
HPE Trusted Platform Module option
When installing or replacing TPM, observe the following guidelines:
•Do not remove an installed TPM. Once installed, the TPM becomes a permanent part of the system board.
•When installing or replacing hardware, Hewlett Packard Enterprise service providers cannot enable the
TPM or the encryption technology. For security reasons, only the customer can enable these features.
•When returning a system board for service replacement, do not remove the TPM from the system board.
When requested, Hewlett Packard Enterprise Service provides a TPM with the spare system board.
•Any attempt to remove an installed TPM from the system board breaks or disfigures the TPM security
rivet. Upon locating a broken or disfigured rivet on an installed TPM, administrators should consider the
system compromised and take appropriate measures to ensure the integrity of the system data.
•When using BitLocker, always retain the recovery key/password. The recovery key/password is required to
enter Recovery Mode after BitLocker detects a possible compromise of system integrity.
•Hewlett Packard Enterprise is not liable for blocked data access caused by improper TPM use. For
operating instructions, see the encryption technology feature documentation provided by the operating
system.
For more information about product features, specifications, options, configurations, and compatibility, see the
product QuickSpecs on the Hewlett Packard Enterprise website.
Use these instructions to install and enable a TPM on a supported compute module. This procedure includes
three sections:
1. Installing the Trusted Platform Module board.
2. Retaining the recovery key/password.
3. Enabling the Trusted Platform Module.
Enabling the TPM requires accessing BIOS/Platform Configuration (RBSU) in the UEFI System Utilities.
TPM installation requires the use of drive encryption technology, such as the Microsoft Windows BitLocker
Drive Encryption feature. For more information on BitLocker, see the Microsoft website.
CAUTION: Always observe the guidelines in this document. Failure to follow these guidelines can cause
hardware damage or halt data access.
Installing the Trusted Platform Module board
WARNING: To reduce the risk of personal injury from hot surfaces, allow the drives and the internal
system components to cool before touching them.
Procedure
1. Power down the compute module.
2. Remove the compute module.
3. Place the compute module on a flat, level work surface.
4. Remove the access panel.
5. Remove all drives.
6. Remove all drive blanks.
7. Remove all DIMM baffles.
8. Remove the front panel/drive cage assembly.
9. Locate the TPM connector (System board components).
CAUTION: Any attempt to remove an installed TPM from the system board breaks or disfigures the
TPM security rivet. Upon locating a broken or disfigured rivet on an installed TPM, administrators
should consider the system compromised and take appropriate measures to ensure the integrity of
the system data.
10. Install the TPM board. Press down on the connector to seat the board.
11. Install the TPM security rivet by pressing the rivet firmly into the system board.
12. Install the front panel/drive cage assembly.
13. Install all drives.
14. Install the DIMM baffles.
15. Install the access panel.
16. Install the compute module.
17. Power up the compute module.
Retaining the recovery key/password
The recovery key/password is generated during BitLocker setup, and can be saved and printed after
BitLocker is enabled. When using BitLocker, always retain the recovery key/password. The recovery key/
password is required to enter Recovery Mode after BitLocker detects a possible compromise of system
integrity.
To help ensure maximum security, observe the following guidelines when retaining the recovery
key/password.
Procedure
•Always store the recovery key/password in multiple locations.
•Always store copies of the recovery key/password away from the compute module.
•Do not save the recovery key/password on the encrypted hard drive.
Enabling the Trusted Platform Module
CAUTION: When a TPM is installed and enabled on the compute module, data access is locked if you
fail to follow the proper procedures for updating the system or option firmware, replacing the system
board, replacing a hard drive, or modifying OS application TPM settings.
Procedure
1. During the server startup sequence, press the F9 key to access System Utilities.
2. From the System Utilities screen, select System Configuration > BIOS/Platform Configuration (RBSU)
> Server Security.
3. Select Trusted Platform Module Options, and press the Enter key.
4. To set the TPM operational state:
•If TPM 1.2 is installed, then select No Action, Enable, Disable, or Clear.
•If TPM 2.0 is installed, then select No Action or Clear.
5. Select Visible to set the TPM Visibility, if necessary.
6. Press the F10 key to save your selection.
7. When prompted to save the change in System Utilities, press the Y key.
8. Press the ESC key to exit System Utilities. Then, press the Enter key when prompted to reboot the
compute module.
The compute module then reboots a second time without user input. During this reboot, the TPM setting
becomes effective.
9. Enable TPM functionality in the OS, such as Microsoft Windows BitLocker or measured boot.
For more information on adjusting TPM usage in BitLocker, see the Microsoft website (http://support.microsoft.com).
For more information on the UEFI System Utilities, see the UEFI System Utilities User Guide for HPE
ProLiant Gen9 and Synergy Servers in the UEFI Information Library (http://www.hpe.com/info/
ProLiantUEFI/docs).
Cabling
Cabling resources
Cabling configurations and requirements vary depending on the product and installed options. For more
information about product features, specifications, options, configurations, and compatibility, see the product
QuickSpecs on the Hewlett Packard Enterprise website (http://www.hpe.com/info/qs).
HPE Smart Storage Battery cabling
Dual NVMe-enabled backplane cabling
HPE Smart Array P542D Controller cabling
Removing and replacing the system battery
If the compute module no longer automatically displays the correct date and time, then replace the battery
that provides power to the real-time clock. Under normal use, battery life is 5 to 10 years.
WARNING: The computer contains an internal lithium manganese dioxide, a vanadium pentoxide, or an
alkaline battery pack. A risk of fire and burns exists if the battery pack is not properly handled. To reduce
the risk of personal injury:
•Do not attempt to recharge the battery.
•Do not expose the battery to temperatures higher than 60°C (140°F).
•Do not disassemble, crush, puncture, short external contacts, or dispose of in fire or water.
•Replace only with the spare designated for this product.
Procedure
1. Power down the compute module.
2. Remove the compute module.
3. Place the compute module on a flat, level work surface.
4. Remove the access panel.
5. Remove the front panel/drive cage assembly.
6. Locate the battery on the system board.
7. Remove the battery.
IMPORTANT: Replacing the system board battery resets the system ROM to its default
configuration. After replacing the battery, use BIOS/Platform Configuration (RBSU) in the UEFI
System Utilities to reconfigure the system.
To replace the component, reverse the removal procedure.
For more information about battery replacement or proper disposal, contact an authorized reseller or an
authorized service provider.
Electrostatic discharge
Preventing electrostatic discharge
To prevent damaging the system, be aware of the precautions you must follow when setting up the system or
handling parts. A discharge of static electricity from a finger or other conductor may damage system boards or
other static-sensitive devices. This type of damage may reduce the life expectancy of the device.
Procedure
•Avoid hand contact by transporting and storing products in static-safe containers.
•Keep electrostatic-sensitive parts in their containers until they arrive at static-free workstations.
•Place parts on a grounded surface before removing them from their containers.
•Avoid touching pins, leads, or circuitry.
•Always be properly grounded when touching a static-sensitive component or assembly.
Grounding methods to prevent electrostatic discharge
Several methods are used for grounding. Use one or more of the following methods when handling or
installing electrostatic-sensitive parts:
•Use a wrist strap connected by a ground cord to a grounded workstation or computer chassis. Wrist straps
are flexible straps with a minimum of 1 megohm ±10 percent resistance in the ground cords. To provide
proper ground, wear the strap snug against the skin.
•Use heel straps, toe straps, or boot straps at standing workstations. Wear the straps on both feet when
standing on conductive floors or dissipating floor mats.
•Use conductive field service tools.
•Use a portable field service kit with a folding static-dissipating work mat.
If you do not have any of the suggested equipment for proper grounding, have an authorized reseller install
the part.
For more information on static electricity or assistance with product installation, contact the Hewlett Packard
Enterprise Support Center.
Specifications
Environmental specifications
Specification                        Value
Temperature range [1]
  Operating                          10°C to 35°C (50°F to 95°F)
  Nonoperating                       -30°C to 60°C (-22°F to 140°F)
Relative humidity (noncondensing)
  Operating                          10% to 90% @ 28°C (82.4°F)
  Nonoperating [2]                   5% to 95% @ 38.7°C (101.7°F)
Altitude
  Operating                          3,050 m (10,000 ft)
  Nonoperating [3]                   9,144 m (30,000 ft)

[1] The following temperature conditions and limitations apply:
•All temperature ratings shown are for sea level.
•An altitude derating of 1°C per 304.8 m (1.8°F per 1,000 ft) up to 3,048 m (10,000 ft) applies.
•No direct sunlight is allowed.
•The maximum permissible rate of change is 10°C/hr (18°F/hr).
•The type and number of options installed might reduce the upper temperature and humidity limits.
•Operating with a fan fault or above 30°C (86°F) might reduce system performance.
[2] Storage maximum humidity of 95% is based on a maximum temperature of 45°C (113°F).
[3] Maximum storage altitude corresponds to a minimum pressure of 70 kPa (10.1 psia).
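The altitude derating in the temperature notes can be applied directly: subtract 1°C per 304.8 m of altitude, capped at 3,048 m, from the 35°C sea-level limit. A small sketch of that arithmetic; the function name is an assumption:

```python
def max_operating_temp_c(altitude_m, sea_level_limit_c=35.0):
    """Upper operating temperature limit after altitude derating.

    Derates 1 degree C per 304.8 m (1.8 degrees F per 1,000 ft) of
    altitude, capped at 3,048 m, per the environmental specifications.
    """
    capped_altitude = min(max(altitude_m, 0.0), 3048.0)
    return sea_level_limit_c - capped_altitude / 304.8
```

At 3,048 m the derating reaches its maximum of 10°C, giving a 25°C upper operating limit.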
Physical specifications
Specification        Value
Height               430.30 mm (16.94 in)
Depth                606.6 mm (23.88 in)
Width                63.50 mm (2.50 in)
Weight (maximum)     17.41 kg (38.38 lb)
Weight (minimum)     13.75 kg (30.31 lb)
Documentation and troubleshooting resources for HPE Synergy
HPE Synergy documentation
The Hewlett Packard Enterprise Information Library (www.hpe.com/info/synergy-docs) is a task-based
repository. It includes installation instructions, user guides, maintenance and service guides, best practices,
and links to additional resources. Use this website to obtain the latest documentation, including:
•Learning about HPE Synergy technology
•Installing and cabling HPE Synergy
•Updating the HPE Synergy components
•Using and managing HPE Synergy
•Troubleshooting HPE Synergy
HPE Synergy Configuration and Compatibility Guide
The HPE Synergy Configuration and Compatibility Guide is in the Hewlett Packard Enterprise Information
Library (www.hpe.com/info/synergy-docs). It provides an overview of HPE Synergy management and fabric
architecture, detailed hardware component identification and configuration, and cabling examples.
HPE Synergy Frame Link Module User Guide
The HPE Synergy Frame Link Module User Guide is in the Hewlett Packard Enterprise Information Library
(www.hpe.com/info/synergy-docs). It outlines frame link module management, configuration, and security.
HPE OneView User Guide for HPE Synergy
The HPE OneView User Guide for HPE Synergy is in the Hewlett Packard Enterprise Information Library
(www.hpe.com/info/synergy-docs). It describes resource features, planning tasks, configuration quick start
tasks, navigational tools for the graphical user interface, and more support and reference information for HPE
OneView.
HPE OneView Global Dashboard
The HPE OneView Global Dashboard provides a unified view of health, alerting, and key resources managed
by HPE OneView across multiple platforms and data center sites. The HPE OneView Global Dashboard User
Guide is in the Hewlett Packard Enterprise Information Library (www.hpe.com/info/synergy-docs). It
provides instructions for installing, configuring, navigating, and troubleshooting the HPE OneView Global
Dashboard.
HPE Synergy Image Streamer User Guide
The HPE Synergy Image Streamer User Guide is in the Hewlett Packard Enterprise Information Library
(www.hpe.com/info/synergy-docs). It describes the OS deployment process using Image Streamer,
features of Image Streamer, and purpose and life cycle of Image Streamer artifacts. It also includes
authentication, authorization, and troubleshooting information for Image Streamer.
HPE Synergy Image Streamer GitHub
The HPE Synergy Image Streamer GitHub repository (github.com/HewlettPackard) contains sample
artifacts and documentation on how to use the sample artifacts. It also contains technical white papers
explaining deployment steps that can be performed using Image Streamer.
HPE Synergy Software Overview Guide
The HPE Synergy Software Overview Guide is in the Hewlett Packard Enterprise Information Library
(www.hpe.com/info/synergy-docs). It provides detailed references and overviews of the various software
and configuration utilities to support HPE Synergy. The guide is task-based and covers the documentation
and resources for all supported software and configuration utilities available for:
•HPE Synergy setup and configuration
•OS deployment
•Firmware updates
•Troubleshooting
•Remote support
HPE Synergy Firmware Update Overview
The HPE Synergy Firmware Update Overview is in the Hewlett Packard Enterprise Information Library
(www.hpe.com/info/synergy-docs). It provides information on how to update the firmware for HPE Synergy.
Best Practices for HPE Synergy Firmware and Driver Updates
The Best Practices for HPE Synergy Firmware and Driver Updates is in the Hewlett Packard Enterprise
Information Library (www.hpe.com/info/synergy-docs). It provides information on recommended best
practices to update firmware and drivers through HPE Synergy Composer, which is powered by HPE
OneView.
HPE OneView Support Matrix for HPE Synergy
The HPE OneView Support Matrix for HPE Synergy is in the Hewlett Packard Enterprise Information Library
(www.hpe.com/info/synergy-docs). It maintains the latest software and firmware requirements, supported
hardware, and configuration maximums for HPE OneView.
HPE Synergy Image Streamer Support Matrix
The HPE Synergy Image Streamer Support Matrix is in the Hewlett Packard Enterprise Information Library
(www.hpe.com/info/synergy-docs). It maintains the latest software and firmware requirements, supported
hardware, and configuration maximums for HPE Synergy Image Streamer.
HPE Synergy Glossary
The HPE Synergy Glossary, in the Hewlett Packard Enterprise Information Library (www.hpe.com/info/
synergy-docs), defines common terminology associated with HPE Synergy.
HPE Synergy troubleshooting resources
HPE Synergy troubleshooting resources are available within HPE OneView and in the Hewlett Packard
Enterprise Information Library (www.hpe.com/info/synergy-docs).
Troubleshooting within HPE OneView
The HPE OneView graphical user interface includes alert notifications and options for troubleshooting within HPE
OneView. The UI provides multiple views of HPE Synergy components, including colored icons to indicate
resource status and potential problem resolution in messages.
You can also use the Enclosure view and Map view to quickly see the status of all discovered HPE Synergy
hardware.
HPE Synergy Troubleshooting Guide
The HPE Synergy Troubleshooting Guide is in the Hewlett Packard Enterprise Information Library
(www.hpe.com/info/synergy-docs). It provides information for resolving common problems and courses of
action for fault isolation and identification, issue resolution, and maintenance for both HPE Synergy hardware
and software components.
Error Message Guide for HPE ProLiant Gen10 servers and HPE Synergy
The Error Message Guide for HPE ProLiant Gen10 servers and HPE Synergy is in the Hewlett Packard
Enterprise Information Library (www.hpe.com/info/synergy-docs). It provides information for resolving
common problems associated with specific error messages received for both HPE Synergy hardware and
software components.
HPE OneView Help and HPE OneView API Reference
The HPE OneView Help and the HPE OneView API Reference are readily accessible, embedded online help
available within the HPE OneView user interface. These help files include “Learn more” links to common
issues, as well as procedures and examples to troubleshoot issues within HPE Synergy.
The help files are also available in the Hewlett Packard Enterprise Information Library (www.hpe.com/info/
synergy-docs).
HPE Synergy QuickSpecs
HPE Synergy has system specifications as well as individual product and component specifications. For
complete specification information, see the HPE Synergy and individual HPE Synergy product QuickSpecs on
the Hewlett Packard Enterprise website (www.hpe.com/info/qs).
Support and other resources
•For live assistance, go to the Contact Hewlett Packard Enterprise Worldwide website:
http://www.hpe.com/assistance
•To access documentation and support services, go to the Hewlett Packard Enterprise Support Center
website:
http://www.hpe.com/support/hpesc
Information to collect
•Technical support registration number (if applicable)
•Product name, model or version, and serial number
•Operating system name and version
•Firmware version
•Error messages
•Product-specific reports and logs
•Add-on products or components
•Third-party products or components
Accessing updates
•Some software products provide a mechanism for accessing software updates through the product
interface. Review your product documentation to identify the recommended software update method.
•To download product updates:
Hewlett Packard Enterprise Support Center
www.hpe.com/support/hpesc
Hewlett Packard Enterprise Support Center: Software downloads
www.hpe.com/support/downloads
Software Depot
www.hpe.com/support/softwaredepot
•To subscribe to eNewsletters and alerts:
www.hpe.com/support/e-updates
•To view and update your entitlements, and to link your contracts and warranties with your profile, go to the
Hewlett Packard Enterprise Support Center More Information on Access to Support Materials page:
www.hpe.com/support/AccessToSupportMaterials
IMPORTANT: Access to some updates might require product entitlement when accessed through the
Hewlett Packard Enterprise Support Center. You must have an HPE Passport set up with relevant
entitlements.
Customer self repair
Hewlett Packard Enterprise customer self repair (CSR) programs allow you to repair your product. If a CSR
part needs to be replaced, it will be shipped directly to you so that you can install it at your convenience.
Some parts do not qualify for CSR. Your Hewlett Packard Enterprise authorized service provider will
determine whether a repair can be accomplished by CSR.
For more information about CSR, contact your local service provider or go to the CSR website:
http://www.hpe.com/support/selfrepair
Remote support
Remote support is available with supported devices as part of your warranty or contractual support
agreement. It provides intelligent event diagnosis, and automatic, secure submission of hardware event
notifications to Hewlett Packard Enterprise, which will initiate a fast and accurate resolution based on your
product's service level. Hewlett Packard Enterprise strongly recommends that you register your device for
remote support.
If your product includes additional remote support details, use search to locate that information.
Remote support and Proactive Care information
HPE Get Connected
www.hpe.com/services/getconnected
HPE Proactive Care services
www.hpe.com/services/proactivecare
HPE Proactive Care service: Supported products list
Warranty information
To view the warranty for your product or to view the Safety and Compliance Information for Server, Storage,
Power, Networking, and Rack Products reference document, go to the Enterprise Safety and Compliance
website.
Regulatory information
To view the regulatory information for your product, see the Safety and Compliance Information for Server,
Storage, Power, Networking, and Rack Products, available at the Hewlett Packard Enterprise Support Center.
Hewlett Packard Enterprise is committed to providing our customers with information about the chemical
substances in our products as needed to comply with legal requirements such as REACH (Regulation EC No
1907/2006 of the European Parliament and the Council). A chemical information report for this product can be
found at:
www.hpe.com/info/reach
For Hewlett Packard Enterprise product environmental and safety information and compliance data, including
RoHS and REACH, see:
www.hpe.com/info/ecodata
For Hewlett Packard Enterprise environmental information, including company programs, product recycling,
and energy efficiency, see:
www.hpe.com/info/environment
Documentation feedback
Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To help us
improve the documentation, send any errors, suggestions, or comments to Documentation Feedback
(docsfeedback@hpe.com). When submitting your feedback, include the document title, part number, edition,
and publication date located on the front cover of the document. For online help content, include the product
name, product version, help edition, and publication date located on the legal notices page.