Your right to copy this manual is limited by copyright law. Making copies or adaptations without prior written
authorization of Quantum Corporation is prohibited by law and constitutes a punishable violation of the law.
Trademark Statement:
Artico, Be Certain (and the Q brackets design), DLT, DXi, DXi Accent, DXi V1000, DXi V2000, DXi V4000,
DXiV-Series, FlexTier, Lattus, the Q logo, the Q Quantum logo, Q-Cloud, Quantum (and the Q brackets design),
the Quantum logo, Quantum Be Certain (and the Q brackets design), Quantum Vision, Scalar, StorageCare,
StorNext, SuperLoader, Symform, the Symform logo (and design), vmPRO, and Xcellis are either registered
trademarks or trademarks of Quantum Corporation and its affiliates in the United States and/or other countries.
All other trademarks are the property of their respective owners.
Products mentioned herein are for identification purposes only and may be registered trademarks or trademarks
of their respective companies. All other brand names or trademarks are the property of their respective owners.
Quantum specifications are subject to change.
QXS G2 Hardware Installation and Maintenance Guide
This guide provides information for the following 12G QXS systems:
• QXS-G2-312: 12-Drive (2-Port: FC or iSCSI)
• QXS-G2-324: 24-Drive (2-Port: FC or iSCSI)
• QXS-G2-412: 12-Drive (4-Port: FC or iSCSI)
• QXS-G2-424: 24-Drive (4-Port: FC or iSCSI)
• QXS-G2-484: 84-Drive (4-Port: FC or iSCSI)
Introduction
This guide provides information about initial hardware setup, and removal and installation of
customer-replaceable units (CRUs) for the QXS G2 systems (RAID chassis and expansion chassis).
The QXS G2 converged network controller (CNC) I/O modules support Fibre Channel (FC) and
Internet SCSI (iSCSI) host interfaces.
The QXS G2 systems are SBB-compliant (Storage Bridge Bay) chassis. These chassis support large form
factor (LFF) drives or small form factor (SFF) drives in 2U12, 2U24 and 5U84 chassis. These chassis
form factors support RAID chassis and expansion chassis.
Chassis User Interfaces
The QXS G2 systems support applications for configuring, monitoring, and managing the storage
system. The web-based application GUI and the command-line interface are briefly described:
• The disk management utility is the web interface for the chassis, providing access to all common
management functions for virtual storage.
Refer to the QXS G2 Disk Management Utility User Guide for additional information.
• The command-line interface (CLI) enables you to interact with the storage system using command
syntax entered via the keyboard or scripting.
Refer to the QXS G2 CLI Reference Guide for additional information.
CNC Ports Used for Host Connection
QXS G2 systems that use CNC technology allow you to select the desired host interface protocol from
the available Fibre Channel (FC) or Internet SCSI (iSCSI) host interface protocols supported by the
system.
NOTE: Refer to SFP Option for CNC Ports for additional information.
You can use the CLI to set all controller host ports to use either FC or iSCSI protocol using the set
host-port-mode CLI command. The QXS G2 systems support the following link speeds:
• 16Gb FC
• 8Gb FC
• 10GbE iSCSI
• 1GbE iSCSI
Alternatively, for QXS G2 systems you can use the CLI to set CNC ports to support a combination of
host interface protocols. When configuring a combination of host interface protocols, host ports 0
and 1 are set to FC (either both 16Gb/s or both 8Gb/s), and host ports 2 and 3 must be set to iSCSI
(either both 10GbE or both 1Gb/s), provided the CNC ports use qualified SFP connectors and cables
required for supporting the selected host interface protocol.
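The combined port-mode rules above can be sketched as a small validation helper. This is a hypothetical illustration, not part of the product CLI; the protocol and speed strings are assumptions chosen to mirror the link speeds listed in this guide.

```python
# Hypothetical sketch of the QXS G2 combined port-mode rules:
# host ports 0 and 1 must both be FC at the same speed, and
# host ports 2 and 3 must both be iSCSI at the same speed.
FC_SPEEDS = {"16Gb", "8Gb"}
ISCSI_SPEEDS = {"10GbE", "1GbE"}

def valid_combined_mode(ports: list[tuple[str, str]]) -> bool:
    """ports is [(protocol, speed), ...] for host ports 0-3."""
    if len(ports) != 4:
        return False
    (p0, s0), (p1, s1), (p2, s2), (p3, s3) = ports
    fc_ok = p0 == p1 == "FC" and s0 == s1 and s0 in FC_SPEEDS
    iscsi_ok = p2 == p3 == "iSCSI" and s2 == s3 and s2 in ISCSI_SPEEDS
    return fc_ok and iscsi_ok

# Ports 0/1 both 16Gb FC, ports 2/3 both 10GbE iSCSI: allowed.
print(valid_combined_mode(
    [("FC", "16Gb"), ("FC", "16Gb"), ("iSCSI", "10GbE"), ("iSCSI", "10GbE")]))  # True
# Mixed FC speeds on ports 0/1: not allowed.
print(valid_combined_mode(
    [("FC", "16Gb"), ("FC", "8Gb"), ("iSCSI", "10GbE"), ("iSCSI", "10GbE")]))   # False
```

For the actual command syntax, consult the QXS G2 CLI Reference Guide entry for set host-port-mode.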
CNC controller modules ship with CNC ports initially configured for FC. When connecting CNC ports
to iSCSI hosts, you must use the CLI (not the disk management utility/GUI) to specify which ports will
use iSCSI. It is best to configure the ports before inserting the iSCSI SFPs into the CNC ports.
NOTE: Refer to the QXS G2 CLI Reference Guide for additional information.
Intended audience
This guide is intended for storage customers and technicians.
NOTE: This guide provides information for initial hardware setup, and removal and installation of
CRUs for the QXS G2 systems (RAID chassis and expansion chassis).
Prerequisites
Prerequisites for planning, installing, and using this product include knowledge of:
• Servers and computer networks
• Network administration
• Storage system installation and configuration
• Storage area network (SAN) management and direct attach storage (DAS)
• Converged Network Controllers (CNCs)
• Fibre Channel (FC) protocols
• Serial Attached SCSI (SAS) protocol
• Internet SCSI (iSCSI) protocol
• Ethernet protocol
Related Documentation
Refer to the following table for related 12G QXS documentation.
Table 1: Related Documentation

For Information About | See
Enhancements, known issues, and late-breaking information not included in product documentation | QXS G2 Release Notes
Overview of hardware installation | QXS G2 Quick Start Guide
Product hardware installation and maintenance | QXS G2 Hardware Installation and Maintenance Guide
Obtaining and installing a license to use licensed features | QXS G2 Licensing Guide
Using the web interface to configure and manage the product | QXS G2 Disk Management Utility User Guide
Event codes and recommended actions | QXS G2 Event Descriptions Reference Guide
Using the command-line interface (CLI) to configure and manage the product | QXS G2 CLI Reference Guide
Cabinet information, QXS G2 specifications, and environmental requirements | QXS G2 Site Planning Guide
Regulatory compliance and safety and disposal information* | Product Regulatory Compliance and Safety

* Printed document included with product.
Document conventions and symbols
Table 2: Document Conventions

Convention | Element
Blue text | Cross-reference links and e-mail addresses
Blue, underlined text | Web site addresses
Bold text | Key names; text typed into a GUI element, such as into a box; GUI elements that are clicked or selected, such as menu and list items, buttons, and check boxes
Italic text | Text emphasis
Monospace text | File and directory names; system output; code; text typed at the command-line
Monospace, italic text | Code variables; command-line variables
Monospace, bold text | Emphasis of file and directory names, system output, code, and text typed at the command-line

WARNING! Indicates that failure to follow directions could result in bodily injury.
CAUTION: Indicates that failure to follow directions could result in damage to equipment or data.
IMPORTANT: Provides clarifying information or specific instructions.
NOTE: Provides additional information.
TIP: Provides helpful hints and shortcuts.
Chapter 1
Safety Guidelines
This chapter provides information for the following 12G QXS systems:
• QXS-G2-312: 12-Drive (2-Port: FC or iSCSI)
• QXS-G2-324: 24-Drive (2-Port: FC or iSCSI)
• QXS-G2-412: 12-Drive (4-Port: FC or iSCSI)
• QXS-G2-424: 24-Drive (4-Port: FC or iSCSI)
• QXS-G2-484: 84-Drive (4-Port: FC or iSCSI)
This chapter provides the following sections:
• Safe Handling of Equipment
• Operation of the QXS G2 Systems
• Electrical Safety
• Rack System Safety Precautions
Safe Handling of Equipment
Always follow these safety cautions when handling the QXS G2 equipment.
CAUTION: Use this equipment in the manner specified by the manufacturer: failure to do so may
impair the protection provided by the equipment.
• Permanently unplug the chassis before you move it or if you think that it has become damaged in
any way.
• A safe lifting height is 20U.
• Always remove the power supply units (PSUs) to minimize weight before you move the chassis.
• Do not lift the chassis by the handles on the PSUs—they are not designed to take the weight.
CAUTION: Do not try to lift the chassis by yourself:
• Fully configured 2U12 chassis can weigh up to 32 kg (71 lb).
• Fully configured 2U24 chassis can weigh up to 30 kg (66 lb).
• Fully configured 5U84 chassis can weigh up to 135 kg (298 lb).
• An unpopulated chassis weighs 46 kg (101 lb).
• Use appropriate lifting methods.
• Before lifting the chassis:
• Unplug all cables and label them for reconnection.
• Remove the drives from drawers and verify the drawers are closed and firmly locked.
• Use a minimum of three people to lift the chassis using the lifting straps provided.
• Avoid lifting the chassis using the handles on any of the CRUs because they are not
designed to take the weight.
• Do not lift the chassis higher than 20U. Use mechanical assistance to lift above this height.
• Observe the lifting hazard label affixed to the storage chassis.
Operation of the QXS G2 Systems
IMPORTANT:
• All systems must have the following installed:
• Two controllers in the RAID chassis or two IOMs in the expansion chassis.
• Two PSUs in all chassis.
• Five fans in the 5U84 chassis.
• All drive slots in the 2U12, 2U24, and 5U84 chassis must be filled with drives or drive blanks.
• Operation of the chassis with any CRU modules missing will disrupt the airflow, and the chassis
will not receive sufficient cooling.
• It is essential that all slots hold modules before the QXS G2 system is used.
Follow these operating precautions:
• Observe the module bay caution label affixed to the module being replaced.
• Replace a defective power supply unit (PSU) with a fully operational PSU within 24 hours.
Do not remove a defective PSU unless you have a replacement model of the correct type ready for
insertion.
• Before removal/replacement of a PSU, disconnect supply power from the PSU to be replaced.
• Observe the hazardous voltage warning label affixed to PSUs.
Electrical Safety
Follow these electrical precautions:
• The 2U chassis must only be operated from a power supply input voltage range of 100–240 VAC,
50–60 Hz.
• The 5U chassis must only be operated from a power supply input voltage range of 200–240 VAC,
50–60 Hz.
• Provide a suitable power source with electrical overload protection to meet the requirements in
the technical specification.
• The power cord must have a safe electrical earth connection. Check the connection to earth of the
chassis before you switch on the power supply.
IMPORTANT: The chassis must be grounded before applying power.
• The plug on the power supply cord is used as the main disconnect device.
• The 2U and 5U chassis are intended to operate with two PSUs.
• Observe the power-supply disconnection caution label affixed to PSUs.
CAUTION: Do not remove covers from the PSU – there is a danger of electric shock inside.
• Return the PSU to your supplier for repair.
• When bifurcated power cords (Y-leads) are used, these cords must only be connected to a supply
range of 200–240 VAC.
IMPORTANT: The RJ-45 socket on the expansion input/output modules (IOMs) is for the Ethernet
connection only and must not be connected to a telecommunications network.
Rack System Safety Precautions
The following safety requirements must be considered when the chassis is mounted in a rack.
• The rack construction must be capable of supporting the total weight of the installed chassis.
The design should incorporate stabilizing features suitable to prevent the rack from tipping or
being pushed over during installation or in normal use.
• When loading a rack with chassis, fill the rack from the bottom up; and empty the rack from the
top down.
• Always remove all CRUs to minimize weight, before loading the chassis into the rack.
• Do not try to lift the chassis by yourself.
CAUTION: To avoid danger of the rack falling over, under no circumstances should more than one
chassis be moved out of the cabinet at any one time.
• The system must be operated with a low-pressure rear-exhaust installation.
The back pressure created by rack doors and obstacles is not to exceed 5 pascals (0.5 mm water
gauge).
• The rack design should take into consideration the maximum operating ambient temperature for
the chassis, which is 35°C (95°F) for RAID chassis and 40°C (104°F) for expansion chassis.
• The rack should have a safe electrical distribution system.
• It must provide over-current protection for the chassis and must not be overloaded by the total
number of chassis installed in the rack.
• When addressing these concerns, consideration should be given to the electrical power
consumption rating shown on the nameplate.
• The electrical distribution system must provide a reliable earth connection for each chassis in the
rack.
• Each PSU in each chassis has an earth leakage current of 1.0 mA. The design of the electrical
distribution system must take into consideration the total earth leakage current from all the PSUs
in all the chassis.
• The rack will require labeling with “High Leakage Current. Earth connection essential before
connecting supply.”
• The rack—when configured with the chassis—must meet the safety requirements of UL 60950-1
and IEC 60950-1.
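The earth-leakage sizing described above is a simple sum over every PSU in the rack. A worked example, using an assumed rack loadout (the 10-chassis count is hypothetical; the 1.0 mA per-PSU figure and two PSUs per chassis come from this guide):

```python
# Total earth leakage current = per-PSU leakage summed over all PSUs in the rack.
PSU_LEAKAGE_MA = 1.0      # mA per PSU (stated in this guide)
PSUS_PER_CHASSIS = 2      # each 2U/5U chassis operates with two PSUs
chassis_in_rack = 10      # hypothetical rack configuration

total_leakage_ma = PSU_LEAKAGE_MA * PSUS_PER_CHASSIS * chassis_in_rack
print(total_leakage_ma)   # 20.0 mA for this example rack
```

Even a modest rack therefore exceeds typical touch-current labeling thresholds, which is why the "High Leakage Current" label and a reliable earth connection are required.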
Chapter 2
System Overview
This chapter provides information for the following 12G QXS systems:
• QXS-G2-312: 12-Drive (2-Port: FC or iSCSI)
• QXS-G2-324: 24-Drive (2-Port: FC or iSCSI)
• QXS-G2-412: 12-Drive (4-Port: FC or iSCSI)
• QXS-G2-424: 24-Drive (4-Port: FC or iSCSI)
• QXS-G2-484: 84-Drive (4-Port: FC or iSCSI)
Chassis Configurations
The storage system supports three RAID chassis configurations.
• 2U12 (rack space) RAID chassis – see Figure 1:
• Holds up to 12 low profile (1-inch high) 3.5” form factor drives in a horizontal orientation.
• The 2U12 RAID chassis ships with 12 drives installed.
Figure 1 2U12-Drive System (Front)
• 2U24 (rack space) RAID chassis – see Figure 2:
• Holds up to 24 low profile (5/8 inch high) 2.5” form factor drives in a vertical orientation.
• The 2U24 RAID chassis ships with 24 drives installed.
Figure 2 2U24-Drive System (Front)
• 5U (rack space) RAID chassis – see Figure 3:
• Holds up to 84 low profile (1-inch high) 3.5" form factor drives in a vertical orientation within
the drive drawer.
• Two vertically-stacked drawers each hold 42 drives.
• If used, 2.5" drives require 3.5" adapters.
• The 5U84 RAID chassis ships with no drives installed.
• Drives ship in a 42-drive pack (two each 42-drive packs for a total of 84 drives).
Figure 3 5U84-Drive System (Front)
IMPORTANT: These same chassis form factors are used for supported expansion chassis, albeit with
different expansion I/O modules (IOMs). Each individual drive is hot-pluggable and replaceable on
site.
NOTE: Throughout this guide—and the management interfaces documents used with this
guide—the RAID chassis has two RAID controllers installed and the expansion chassis has two
expansion I/O modules (IOMs) installed.
CompactFlash
During a power loss or controller failure, data stored in cache is saved off to non-volatile memory
(CompactFlash). The data is restored to cache, and then written to disk after the issue is corrected. To
protect against writing incomplete data to disk, the image stored on the CompactFlash is verified
before committing to disk. The CompactFlash memory card is located at the midplane-facing end of
the controller module. Do not remove the card; it is used for cache recovery only.
NOTE: In dual-controller configurations featuring one healthy partner controller, there is no need to
transport failed controller cache to a replacement controller because the cache is duplicated between
the controllers, provided that volume cache is set to standard on all volumes in the pool owned by
the failed controller.
Supercapacitor Pack
To protect controller module cache in case of power failure, each controller chassis model is equipped
with supercapacitor technology, in conjunction with CompactFlash memory, built into each controller
module to provide extended cache memory backup time.
The supercapacitor pack provides energy for backing up unwritten data in the write cache to the
CompactFlash, in the event of a power failure. Unwritten data in CompactFlash memory is
automatically committed to disk media when power is restored. In the event of power failure, while
cache is maintained by the supercapacitor pack, the Cache Status LED blinks at a rate of 1/10 second
on and 9/10 second off. See also Cache Overview and Status LED Details on page 45.
Front and Rear Views of QXS G2 Systems
This section provides the front and rear views of the QXS G2 Systems.
2U12-Drive System
Figure 4 provides a front view of the 2U12-drive system.
Figure 4 2U12-Drive System (Front)
Figure 5 provides a rear view of the 2U12-drive system. The 2U12 RAID chassis has dual controllers
(4-port FC/iSCSI model shown) installed.
Figure 5 2U12-Drive System (Rear)
2U12-Drive System Serial Number Label Location
This section provides the system serial number label location for the QXS G2 2U12-drive systems.
NOTE: If you need Quantum support, you will need your system serial number. The 2U12 and the
2U24-drive system serial number label locations are the same.
The following illustrations provide a representative example of a system serial number label that is
placed on a chassis.
NOTE: Refer to Figure 6 and Figure 7 for the exact location of the system serial number label.
Do not confuse other “QTM” serial numbers on the chassis with the system serial number.
The 2U12 system serial number is located on a label attached to the rear of the chassis. The serial
number label can be located on the left-rear ear of the chassis or on the right-rear ear of the chassis.
• The primary location of the serial number label is on the left-rear ear of the chassis.
• However, some chassis might have a factory label on the left-rear ear of the chassis.
• When the chassis has a factory label on the left-rear ear, the system serial number label will be
located on the right-rear ear of the chassis.
Figure 6 provides the serial number label location on the left-rear ear of the chassis.
Figure 6 Serial Number Label on Left-rear Ear of Chassis
Figure 7 provides the serial number label location on the right-rear ear of the chassis.
Figure 7 Serial Number Label on Right-rear Ear of Chassis
2U24-Drive System
Figure 8 provides a front view of the 2U24-drive system.
Figure 8 2U24-Drive System (Front)
Figure 9 provides a rear view of the 2U24-drive system. The 2U24 RAID chassis has dual controllers
(4-port FC/iSCSI model shown) installed.
Figure 9 2U24-Drive System (Rear)
2U24-Drive System Serial Number Label Location
This section provides the system serial number label location for the QXS G2 2U24-drive systems.
NOTE: If you need Quantum support, you will need your system serial number. The 2U12 and the
2U24-drive system serial number label locations are the same.
The following illustrations provide a representative example of a system serial number label that is
placed on a chassis.
NOTE: Refer to Figure 10 and Figure 11 for the exact location of the system serial number label.
Do not confuse other “QTM” serial numbers on the chassis with the system serial number.
The 2U24 system serial number is located on a label attached to the rear of the chassis. The serial
number label can be located on the left-rear ear of the chassis or on the right-rear ear of the chassis.
• The primary location of the serial number label is on the left-rear ear of the chassis.
• However, some chassis might have a factory label on the left-rear ear of the chassis.
• When the chassis has a factory label on the left-rear ear, the system serial number label will be
located on the right-rear ear of the chassis.
Figure 10 provides the serial number label location on the left-rear ear of the chassis.
Figure 10 Serial Number Label on Left-rear Ear of Chassis
Figure 11 provides the serial number label location on the right-rear ear of the chassis.
Figure 11 Serial Number Label on Right-rear Ear of Chassis
5U84-Drive System
Figure 12 provides a front view of the 5U84-drive system.
Figure 13 provides a rear view of the 5U84-drive system. The 5U84 RAID chassis has dual controllers
(4-port FC/iSCSI model shown) installed.
Figure 12 5U84-Drive System (Front)
Figure 13 5U84-Drive System (Rear)
5U84-Drive System Serial Number Label Location
This section provides the system serial number label location for the QXS G2 5U84-drive systems.
NOTE: If you need Quantum support, you will need your system serial number.
The following illustrations provide a representative example of a system serial number label that is
placed on a chassis.
The 5U84 system serial number is located on a label attached to the rear of the chassis:
• Top-left of the chassis (right of the factory label)
• Above Controller A or Expansion IOM A
Figure 14 5U84 System Serial Number Label Location
Chassis Variants
Both the 2U and 5U chassis can be configured as a RAID chassis or an expansion chassis.
Table 3 provides the QXS G2 chassis variants.

Table 3: Chassis Variants

Product | Configuration | Drives | PSUs (1) | Controllers / IOMs (2) | Fan Modules (3) | Bezels (4)
2U12 RAID Chassis | 12Gb/s direct dock drives | 12 LFF (3.5” SAS, 3.5” SATA) | 2 | 2 controllers | N/A | 1
2U12 Expansion Chassis | 12Gb/s direct dock drives | 12 LFF (3.5” SAS, 3.5” SATA) | 2 | 2 expansion IOMs | N/A | 1
2U24 RAID Chassis | 12Gb/s direct dock drives | 24 SFF (2.5” SAS, 2.5” SATA) | 2 | 2 controllers | N/A | 1
2U24 Expansion Chassis | 12Gb/s direct dock drives | 24 SFF (2.5” SAS, 2.5” SATA) | 2 | 2 expansion IOMs | N/A | 1
5U84 RAID Chassis | 12Gb/s direct dock drives | 84 LFF or SFF (2.5” and 3.5” SAS, 3.5” SATA) | 2 | 2 controllers | 5 | 2
5U84 Expansion Chassis | 12Gb/s direct dock drives | 84 LFF or SFF (2.5” and 3.5” SAS, 3.5” SATA) | 2 | 2 expansion IOMs | 5 | 2

1. PSUs: Redundant PSUs must be compatible modules of the same type (both AC). Power cords are shipped with all chassis.
2. IOMs: Supported expansion IOMs are used in expansion chassis for adding storage.
3. Fan Modules: The fan modules are separate CRUs and are not integrated within the PSUs. Fan modules are used only within the 5U84 drive chassis.
4. Bezels: Bezels ship as follows:
• The 2U12 chassis has a bezel shipped in a separate box; it must be installed on site.
• The 2U24 chassis has a bezel shipped in a separate box; it must be installed on site.
• The 5U84 chassis ships with the bezels (2 each) installed on the upper drawer (Drawer 0) and the lower drawer (Drawer 1).

Note: Ethernet cables are shipped with all RAID chassis.
IMPORTANT: The RAID chassis support a dual-controller configuration only. If a partner controller
fails, the storage system will fail over and run on a single controller module until the redundancy is
restored. A RAID controller or expansion IOM must be installed in each IOM slot to ensure sufficient
airflow through the chassis during operation.
2U Chassis Core Product
The design concept is based on a chassis subsystem together with a set of plug-in modules. A typical
chassis—as supplied—includes the following:
• A chassis which includes the midplane PCB and an integral operator’s (Ops) panel that is mounted
on the left ear flange at the front of the chassis.
• A bezel shipped in a separate box and must be installed on site.
• Two 580W, 100–240V AC PSUs (with power cords). See also Figure 41 on page 36.
• Two RAID controllers (with Ethernet cable) or two expansion IOMs: 2 x SBB-compliant interface
slots.
• Up to 12 or 24 drive modules in the 2U chassis.
• Where appropriate the drive carriers will include an Interposer card.
• See also Chassis Variants on page 14. Drive blanks must be installed in all empty drive slots.
• A rail kit for rack mounting.
NOTE: The following figures show component locations–together with CRU slot indexing–relative to
2U chassis front and rear panels.
2U12-Drive Chassis Front View
Integers on the drives indicate drive slot numbering sequence (0-11). Figure 15 provides a front view
of the 2U12-drive chassis.
Figure 15 2U12-Drive Chassis Front View
2U24-Drive Chassis Front View
Integers on the drives indicate drive slot numbering sequence (0-23). Figure 16 provides a front view
of the 2U24-drive chassis.
Figure 16 2U24-Drive Chassis Front View
QXS-G2-312 and QXS-G2-324 RAID Chassis Rear View (2-Host Port Controllers)
NOTE: The 2U12-drive and 2U24-drive RAID chassis (2-host port controllers) rear views look
identical.
Figure 17 provides an illustration of the chassis rear view with two CNC controllers (2 FC/iSCSI ports).
Refer to the numbers on the CRUs, Figure 17, and the table to identify components within the 2U
chassis. PSU and controller modules are available as CRUs. The RAID chassis use 2-port controller
modules. Use expansion chassis for optionally added storage.
QXS-G2-312 and QXS-G2-324 RAID Chassis (CNC Controllers and 2 FC/iSCSI Ports)
Figure 17 provides a rear view of the 2U12-drive or 2U24-drive RAID chassis with two CNC controllers
installed. Callout 1, Figure 17, is the Controller A location. Callout 2, Figure 17, is the Controller B
location. Both controllers are shown in the closed/locked position.
Figure 18 provides a rear view of the CNC FC/iSCSI controller used in the 2U12-drive or 2U24-drive
systems. You can configure the CNC host interface ports (Ports 0 and 1) with 8/16Gb/s FC SFPs; 10GbE
iSCSI SFPs; or 1Gb/s RJ-45 SFPs.
Figure 18 CNC FC/iSCSI Controller

Figure 18 callouts: 1 = SAS Port; 2 = Ethernet Port; 3 = USB Port; 4 = Serial Ports (service only);
5 = Reset; 6 = CNC Port 0; 7 = CNC Port 1; 8 = Lock/Release Handle.
QXS-G2-412, QXS-G2-424, and QXS-G2-484 RAID Chassis Rear View (4-Host Port Controllers)
NOTE: The QXS-G2-412, QXS-G2-424, and QXS-G2-484 RAID chassis (4-host port controllers) rear
views look identical.
Figure 19 provides an illustration of the chassis rear view with two CNC controllers (4 FC/iSCSI ports).
Refer to the numbers on the CRUs, Figure 19, and the table to identify components within the 2U
chassis. PSU and controller modules are available as CRUs. The RAID chassis use 4-port controller
modules. Use expansion chassis for optionally added storage.
RAID Chassis (CNC Controllers and 4 FC/iSCSI ports)
Figure 19 provides a rear view of the 2U12-drive or 2U24-drive RAID chassis with two CNC controllers
installed.
Callout 1, Figure 19, is the Controller A location. Callout 2, Figure 19, is the Controller B location.
Both controllers are shown in the closed/locked position.
Figure 20 provides a rear view of the CNC FC/iSCSI controller used in the 2U12-drive or 2U24-drive
systems. You can configure the CNC host interface ports (Ports 0, 1, 2 and 3) with 8/16Gb/s FC SFPs;
10GbE iSCSI SFPs; or 1Gb/s RJ-45 SFPs.
Figure 20 CNC FC/iSCSI Controller

Figure 20 callouts: 1 = SAS Port; 2 = Ethernet Port; 3 = USB Port; 4 = Serial Ports (service only);
5 = Reset; 6 = CNC Port 0; 7 = CNC Port 1; 8 = CNC Port 2; 9 = CNC Port 3;
10 = Lock/Release Handle.
2U12-Drive/2U24-Drive Expansion Chassis Rear View
NOTE: The 2U12-drive and 2U24-drive expansion chassis rear views look identical. Figure 21
provides an illustration of the chassis rear view.
The top middle slot, Figure 21, is the Expansion IOM A location. The lower middle slot is the
Expansion IOM B location. Both IOMs are shown in the closed/locked position.
Expansion Chassis
Figure 21 provides a rear view of the 2U12-drive or 2U24-drive expansion chassis rear panel. The
expansion chassis attaches to the RAID chassis for additional storage.
Figure 22 provides a rear view of the expansion chassis IOM used in the 2U12-drive or 2U24-drive
systems. Ports A/B/C ship configured with 12Gb/s HD mini-SAS (SFF-8644) external connectors.
Figure 22 Expansion Chassis IOM

Figure 22 callouts: 1 = Serial Port; 2 = SAS Port A; 3 = SAS Port B; 4 = SAS Port C;
5 = Ethernet Port; 6 = Lock/Release Handle.

IMPORTANT: RAID chassis and expansion configurations:
• When the expansion chassis shown above (Figure 21) is used with RAID chassis for adding
storage, its middle HD mini-SAS expansion port (“B”) is disabled by the firmware.
• The Ethernet port on the expansion module is not used in RAID chassis and expansion
configurations, and is disabled.
Power Supply Unit (PSU)
Figure 23 provides an illustration of the PSU used in the RAID chassis and the expansion chassis.
The PSUs are identical in the 12-drive and 24-drive chassis. Figure 23 shows an example of a PSU
oriented for use in the left PSU Slot 0 of the RAID or expansion chassis.
Figure 23 PSU

Figure 23 callouts: 1 = Fan Fail LED; 2 = AC Fail LED; 3 = PSU OK LED; 4 = DC Fail LED;
5 = Release/Lock Handle; 6 = Power Connect; 7 = Power Switch.
2U RAID/Expansion Chassis
The 2U chassis consists of a sheet metal chassis with an integrated midplane PCB and module runner
system.
NOTE: Customers select a chassis type and drives separately. All empty drive slots in the 2U chassis
must have drive blanks installed.
The 2U chassis supports the following form factors:
• 2U12 chassis configured with 12 LFF disks (see Figure 15 on page 16) as follows:
• Twelve 3.5” SAS LFF disk drives, held horizontally
• Twelve 3.5” SATA LFF disk drives, held horizontally
• 2U24 chassis configured with 24 SFF disks (see Figure 16 on page 16) as follows:
• Twenty-four 2.5” SAS SFF disk drives, held vertically
• Twenty-four 2.5” SATA SFF disk drives, held vertically
• 2U12 empty chassis with midplane and module runner system: see Figure 24 on page 21
• 2U24 empty chassis with midplane: see Figure 25 on page 22
The chassis has a 19-inch rack mounting that enables it to be installed onto standard 19-inch racks
and uses two EIA units of rack space (3.5”) for a 2U chassis.
• The midplane PCB can support either 12 or 24 drive connections.
• There are either 12 or 24 drive slots at the front of the chassis, in horizontal (12) or vertical (24)
orientation, as defined by the chassis variant.
• See also Figure 15 and Figure 16 on page 16.
• Each drive slot holds a plug-in drive carrier module that can hold these drive types, dependent
upon the chassis type:
• At the rear, the chassis assembly can hold a maximum of two PSUs and two RAID controllers or
SBB-compliant expansion IOMs.
Front of 2U12 RAID or Expansion Chassis
Figure 24 provides an illustration of the front of the 2U12 RAID or expansion chassis (without drives
installed).
Figure 24 Front of 2U12 RAID or Expansion Chassis
Front of 2U24 RAID or Expansion Chassis
Figure 25 provides an illustration of the front of the 2U24 RAID or expansion chassis (without drives
installed).
Figure 25 Front of 2U24 RAID or Expansion Chassis
NOTE:
• Either 2U chassis can be configured as a RAID chassis (two controllers) or as an optional expansion
chassis for adding storage.
• The 2U12, 2U24, and 5U84 expansion chassis all use the same expansion IOMs.
5U Chassis Core Product
The 5U chassis consists of a sheet metal chassis, integrated PCBs, and a module runner system. The
design concept is based on a chassis subsystem together with a set of plug-in modules. A typical
chassis system—as supplied—includes the following:
• A chassis consisting of:
• Two sliding drawers containing Disk Drive in Carrier (DDIC) modules (42 drive slots per drawer).
• An Operator’s (Ops) panel.
• A chassis drawer bezel.
• A midplane PCB into which other components and CRUs connect.
• Two power supply units (PSUs) with power cords.
• Five cooling fans.
• Two RAID controllers (with Ethernet cable) or two expansion IOMs: 2 x SBB-compliant interface
slots.
• Up to 84 DDIC modules populated within two drawers (42 DDICs per drawer; 14 DDICs per row)
as follows:
22QXS G2 Hardware Installation and Maintenance Guide
• 2.5” SFF or 3.5” LFF SAS drives
• 3.5” SATA LFF drives
• A rail kit for rack mounting.
IMPORTANT: To ensure sufficient circulation and cooling throughout the chassis, all PSU slots,
cooling fan slots, and RAID controller/expansion IOM slots must contain a functioning CRU. Do not
remove a faulty CRU until its replacement is available in hand.
5U Chassis Front View
Figure 26 provides an illustration of the front of the 5U84 RAID or expansion chassis (with bezel
installed). Drawer 0 is at the top of the chassis. Drawer 1 is at the bottom of the chassis.
Figure 26 Front of 5U84 RAID or Expansion Chassis
1 Drawer 0
2 Drawer 1
5U Chassis Drive Slots View
Figure 27 provides an illustration of the 5U84 RAID or expansion chassis (drive slots view) with the
drawer open. There are a total of 84 drive slots in the two drawers.
• Drawer 0 has drive slots 0-41 (42 drives can be installed)
• Drawer 1 has drive slots 42-83 (42 drives can be installed)
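When scripting against a drive inventory, the slot ranges above reduce to simple arithmetic. The helper below is an illustrative sketch, not part of the product software; the 42-slots-per-drawer and 14-DDICs-per-row figures come from the drawer description in this chapter, and the row-major fill order is an assumption:

```python
def locate_slot(slot: int) -> dict:
    """Map a 5U84 global drive slot (0-83) to its drawer, row, and position.

    Drawer 0 holds slots 0-41 and Drawer 1 holds slots 42-83;
    each drawer is populated 14 DDICs per row (3 rows per drawer).
    """
    if not 0 <= slot <= 83:
        raise ValueError("5U84 slots are numbered 0-83")
    drawer, index = divmod(slot, 42)   # 42 slots per drawer
    row, position = divmod(index, 14)  # 14 DDICs per row (assumed row-major)
    return {"drawer": drawer, "row": row, "position": position}

# Example: slot 42 is the first slot of Drawer 1.
print(locate_slot(42))  # {'drawer': 1, 'row': 0, 'position': 0}
```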
IMPORTANT: Drawer sideplanes—also known as side cards—can be hot-swapped as
field-replaceable units (FRUs).
However, these FRUs require a special tool, and replacement should be performed by qualified service
personnel only.
Contact your service provider for more information.
NOTE: Figure 27 displays the front of the drawer on the left side of the illustration. The back of the drawer is on the right side of the illustration.
Figure 27 5U84 RAID or Expansion Chassis Drive Slots
Callouts identify the Drawer 0/1 front, the Drawer 0/1 rear (which slides into the chassis), and the Drawer 0/1 left and right sides. Each slot in the figure is labeled with its Drawer 0/Drawer 1 slot number pair, from 0/42 through 41/83.
5U84 RAID Chassis (Rear View/Two CNC Controllers)
Figure 28 on page 25 provides an illustration of the 5U84 RAID chassis rear view with two CNC controllers (FC/iSCSI, 4-ports) installed.
NOTE: The 5U84 RAID chassis supports attachment of expansion chassis for adding storage.
Refer to Figure 28 for all the 5U84 RAID chassis CRUs with CNC controllers. Callout 1 in Figure 28 is the Controller A location, and callout 2 is the Controller B location. Both controllers are shown in the closed/locked position.
1 Controller A
2 Controller B
3 Fan 0
4 Fan 1
5 Fan 2
6 Fan 3
7 Fan 4
8 PSU 0
9 PSU 1
Figure 29 provides a rear view of the CNC FC/iSCSI controller used in the 5U84-drive systems. You can
configure the CNC host interface ports (Ports 0, 1, 2, and 3) with 8/16Gb/s FC SFPs, 10GbE iSCSI SFPs,
or 1Gb/s RJ-45 SFPs.
Figure 29 CNC FC/iSCSI Controller
1 SAS Port
2 Ethernet Port
3 USB Port
4 Serial Ports (service only)
5 Reset
6 CNC Port 0
7 CNC Port 1
8 CNC Port 2
9 CNC Port 3
10 Lock/Release Handle
5U84 Expansion Chassis (Rear View)
Figure 30 provides an illustration of the 5U84 expansion chassis rear view with two expansion IOMs (2 SAS ports used) installed. Refer to Figure 30 for all the 5U84 expansion chassis CRUs with expansion IOMs.
NOTE: The 5U84 expansion chassis uses the same expansion IOMs as the 2U12 and 2U24 expansion chassis.
Figure 30 5U84 Expansion Chassis Rear View (SAS)
1 IOM A
2 IOM B
3 Fan 0
4 Fan 1
5 Fan 2
6 Fan 3
7 Fan 4
8 PSU 0
9 PSU 1
Expansion Chassis IOM
Figure 31 provides a rear view of the expansion chassis IOM used in the 5U84-drive systems. Ports
A/B/C ship configured with 12Gb/s HD mini-SAS (SFF-8644) external connectors.
Figure 31 Expansion Chassis IOM
1 Serial Port
2 SAS Port A
3 SAS Port B
4 SAS Port C
5 Ethernet Port
6 Lock/Release Handle
IMPORTANT: RAID chassis and expansion configurations:
• When the expansion chassis shown above (Figure 30) is used with a RAID chassis for adding storage, its middle HD mini-SAS expansion port (“B”) is disabled by the firmware.
• The Ethernet port on the expansion module is not used in RAID chassis and expansion configurations, and is disabled.
Power Supply Unit (PSU)
Figure 32 provides an illustration of the PSU used in the 5U84 RAID chassis and the expansion chassis (oriented for use in the left PSU Slot 0 and right PSU Slot 1).
Figure 32 5U84 Chassis PSU
1 Release Latch
2 Handle
3 PSU Fault
4 AC Fail
5 Power OK
6 Power Connect
7 Power Switch
Fans
Figure 33 provides an illustration of the fan used in the 5U84 RAID chassis and the expansion chassis. The 5U84 RAID or expansion chassis uses five fans to provide sufficient cooling and air flow for the chassis. Figure 33 shows an example of a fan oriented for use in the Fan 0, Fan 1, Fan 2, Fan 3, and Fan 4 slots of the RAID or expansion chassis.
Figure 33 5U84 Chassis Fan
1 Release Latch
2 Handle
3 Fan OK
4 Fan Fault
5U Chassis
The 5U chassis consists of a sheet metal chassis with an integrated midplane PCB, module runner system, and two drawers for drive modules.
NOTE: Customers select a chassis type and drives separately. Empty drive slots in the 5U chassis require drive blanks. The 5U84 chassis (Figure 27 on page 24) can be configured as follows:
• 2.5” SFF or 3.5” LFF SAS drives
• 3.5” LFF SATA drives
Additional characteristics include:
• The chassis has a 19-inch rack mounting that enables it to be installed onto standard 19-inch racks
and uses five EIA units of rack space (8.75") for a 5U84 chassis.
• At the front of the chassis two drawers can be opened and closed.
• Each drawer provides access to 42 slots for Disk Drive in Carrier (DDIC) modules.
• DDICs are top mounted into the drawers.
• The front of the chassis also provides chassis status LEDs and drawer status/activity LEDs.
• At the rear of the chassis, access is provided to rear panel CRUs:
• Two SBB-compliant RAID controllers or expansion IOMs
• Two PSUs
• Five fan cooling modules
5U Chassis Drawers
Each chassis drawer contains 42 slots, each of which can accept a single DDIC containing a 3.5" LFF
drive or a 2.5" SFF drive with an adapter. See Figure 26 on page 23 and Figure 27 on page 24.
Opening a drawer does not interrupt the functioning of the storage system, and DDICs can be
hot-swapped while the chassis is in operation. However, drawers must not be left open for longer
than 2 minutes, or airflow and cooling will be compromised.
IMPORTANT: During normal operation, drawers should be closed to ensure proper airflow and
cooling within the chassis.
A drawer is designed to support its own weight, plus the weight of installed DDICs, when fully
opened.
Safety features
The chassis safety features include:
• To reduce the possibility of toppling, only one drawer can be open at a time.
• The drawer locks into place when fully opened and extended.
• To reduce pinching hazards, two latches must be released before the drawer can be pushed back
into the drawer slot within the chassis.
Power and data are sent via three baseplanes and two sideplanes. The direct-dock SATA version of the
5U chassis has one active and one passive sideplane, and has dual power paths, but only a single data
path.
Locking Drawer
Each drawer can be locked shut by turning both anti-tamper locks clockwise using a screwdriver with
a Torx T20 bit. The anti-tamper locks are symmetrically placed on the left-hand and right-hand sides
of the drawer bezel.
Drawer status and activity LEDs can be monitored via two drawer LED panels located next to the two
drawer-pull pockets on the left-hand and right-hand sides of each drawer. Figure 34 on page 30
provides the location of the drawer LED panels (left and right sides).
Figure 34 Drawer/Bezel LED Panel
1 Left Side of Chassis
2 Right Side of Chassis
3 Sideplane OK/Power Good
4 Drawer Activity
5 Logical Fault
6 Cable Fault
7 Drawer Activity Indicators (Bar Graph)
8 Anti-tamper Lock
2U Operator’s (Ops) Panel
Each chassis supported by the QXS-G2-312, QXS-G2-324, QXS-G2-412, QXS-G2-424, and
QXS-G2-484 systems features an Ops panel located on the chassis left ear flange. The Ops panels for
the 2U12 and 2U24 chassis are identical.
A flexible cable connects the Ops panel to the midplane. The Ops panel is a passive component: the
midplane controls the panel, and the RAID controllers or expansion IOMs control all the panel’s
functions.
2U12 Chassis Ops Panel
Figure 35 provides a front view of the 2U12 chassis with the Ops panel.
Figure 35 2U12 Chassis Ops Panel
1 Ops Panel
2 2U12 Chassis
Note: Drives are numbered from 0-11 (12 drives).
2U24 Chassis Ops Panel
Figure 36 provides a front view of the 2U24 chassis with the Ops panel.
Figure 36 2U24 Chassis Ops Panel
1 Ops Panel
2 2U24 Chassis
Note: Drives are numbered from 0-23 (24 drives).
2U12 and 2U24 Ops Panel Functions
An integral part of the chassis, the Ops panel is not replaceable on site. The Ops panel provides the functions shown in the illustration below and listed in the table. See also 2U Chassis Ops Panel LEDs on page 118.
Figure 37 2U Chassis Ops Panel Functions
LED / Status
1 Unit Identification Display: green seven-segment display showing the chassis sequence
2 Identity: blue, Power On (5s) test state
3 Module Fault: constant or blinking amber indicates a fault present
4 System Power On/Standby: constant green indicates positive status; constant amber indicates a fault present
5 Mute Button: not used
Note: The chassis has a thermal sensor behind the Ops panel.
System Power On/Standby LED (green/amber)
LED displays amber when only standby power is available. LED displays green when system power is
available.
Module Fault LED (amber)
The LED illuminates when the system experiences a hardware fault. It may be associated with a Fault
LED on a PSU, RAID controller, or expansion IOM that helps the user identify which component is
causing the fault.
Location LED (blue)
When activated, the Identity LED blinks at a rate of 1s on, 1s off to easily locate the chassis within a
data center. The locate function may be enabled/disabled through SES (SCSI Enclosure Services).
NOTE: Activate the Location LED by using the disk management utility (GUI) or CLI.
Unit Identification Display
The UID is a dual seven-segment display that can be used to provide feedback to the user. Its primary
function is to display a chassis unit identification number to assist users in setting and maintaining
multiple chassis systems.
Thermal sensor
The thermal sensor is located on the outside of the chassis, and it sends input to the chassis (RAID
controller/expansion IOM) about its external operating ambient temperature.
5U Chassis Ops Panel
The 5U84 chassis front panel has an Ops panel mounted on the left ear flange. A flexible cable
connects the Ops panel to the midplane. The Ops panel is a passive component: the midplane
controls the panel, and the RAID controllers or expansion IOMs control all the panel’s functions.
Chassis Ops Panel Location
Figure 38 provides a front view of the 5U84 chassis with the Ops panel.
Figure 38 5U84 Chassis Ops Panel
1 Ops Panel
2 5U84 Chassis
3 Drawer 0
4 Drawer 1
Note: The 5U84 Ops panel is different from the 2U12 and 2U24 units.
Ops Panel Functions
An integral part of the chassis, the Ops panel is not replaceable on site. The Ops panel provides the functions shown in the illustration below and listed in the table. The 5U Ops panel provides the following:
• Unit Identification Display (UID)
• Mute/Input button
• Power On/Standby LED (green/amber)
• Module Fault LED (amber)
• Logical Status LED (amber)
• Top Drawer Fault LED (amber)
• Bottom Drawer Fault LED (amber)
Figure 39 5U Chassis Ops Panel Functions
LED / Status
1 Unit Identification Display: green seven-segment display showing the chassis sequence
2 Input Switch: not used
3 System Power On/Standby: constant green indicates positive status; constant amber indicates the system is in standby (not operational)
4 Module Fault: constant or blinking amber indicates a fault present
5 Logical Status: constant or blinking amber indicates a fault present
6 Top Drawer Fault: constant or blinking amber indicates a fault present in a drive, cable, or sideplane
7 Bottom Drawer Fault: constant or blinking amber indicates a fault present in a drive, cable, or sideplane
Unit Identification Display
The UID is a dual seven-segment display that can be used to provide feedback to the user. Its primary
function is to display a chassis unit identification number to assist users in setting and maintaining
multiple chassis systems.
System Power On/Standby LED (green/amber)
LED displays amber when only standby power is available (non-operational). LED displays green when
system power is available (operational).
Module Fault LED (amber)
The LED illuminates when the system experiences a hardware fault. It may be associated with a Fault
LED on a PSU, fan, RAID controller or expansion IOM, DDIC, or drawer that helps the user identify
which component is causing the fault.
Logical Status LED (amber)
This LED indicates a change of status or fault from something other than the chassis management
system. This may be initiated from the controller module or an external HBA. The indication is
typically associated with a DDIC and LEDs at each disk position within the drawer, which help to
identify the DDIC affected.
Drawer Fault LEDs (amber)
This LED indicates a disk, cable, or sideplane fault in the drawer indicated: Top (Drawer 0) or Bottom
(Drawer 1).
PSUs – 2U CRU
The 2U12 and 2U24 chassis use the same type of PSUs. AC-DC power is provided by up to two
auto-ranging PSUs with integrated axial cooling fans. The RAID controllers and expansion IOMs
control fan speed.
PSU Features
The 580W PSU voltage operating range is nominally 100V–240V AC, and it operates at 50–60 Hz input frequency. Figure 41 provides a representative illustration of the PSUs in a 2U chassis. PSU 0 and PSU 1 are installed rotated 180 degrees relative to each other (callouts 3 and 4).
NOTE: Controller A and Controller B are also installed rotated 180 degrees relative to each other (callouts 1 and 2).
Figure 41 provides an illustration of the PSU (with LEDs) for the 2U12 and 2U24 chassis. The dimetric rear orientation in Figure 41 shows the PSU aligned for insertion into the right-hand PSU slot (PSU 1) on the 2U12 or 2U24 chassis rear panel.
Figure 41 PSU 2U12/2U24 Chassis
PSU OK LED: Green
Fan Fail LED: Amber/blinking amber
DC Fail LED: Amber/blinking amber
AC Fail LED: Amber/blinking amber
Multiple PSUs
The 2U12 and 2U24 storage systems include two PSUs that provide redundant power for the system: if one PSU fails, the other maintains the power supply, and chassis operation is not affected while you replace the faulty PSU.
PSUs are hot-pluggable, and replacement should take only a few seconds. Replacement must be completed as soon as possible after the removal of the defective PSU to avoid a thermal exception. The replacement procedure should be completed within an absolute maximum of 2 minutes.
Operation of the chassis with any modules missing will disrupt the airflow, and the drives will not receive sufficient cooling. It is essential that all slots are fitted with PSUs prior to powering on the chassis.
System Airflow
The system must be operated with a low-pressure rear exhaust installation. Back pressure created by
rack doors and obstacles must not exceed 5 pascals (0.5 mm water gauge). The cooling system
provides sufficient capacity to ensure that maximum temperatures are not exceeded.
The environment in which the chassis operates must be dust-free to ensure adequate airflow.
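As a sanity check, the quoted limit of 0.5 mm water gauge converts to pascals through the hydrostatic relation p = ρgh (with water density ρ ≈ 1000 kg/m³):

```python
# Back-pressure limit: 0.5 mm water gauge expressed in pascals.
rho = 1000.0  # density of water, kg/m^3
g = 9.81      # gravitational acceleration, m/s^2
h = 0.5e-3    # 0.5 mm water column, in metres

pressure_pa = rho * g * h
print(round(pressure_pa, 1))  # 4.9 — consistent with the quoted 5-pascal limit
```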
PSU – 5U CRU
Power is provided by two 2,214W PSUs. The PSU voltage operating range is nominally 200V–240V
AC, and operates at 50–60 Hz input frequency. The dimetric rear orientation in Figure 42 shows the
PSU aligned for insertion into its slot located on the chassis rear panel.
Figure 42 PSU 5U84 CRU
PSU OK LED: Green
PSU Fail LED: Amber/blinking amber
AC Fail LED: Amber/blinking amber
NOTE: If any of the PSU LEDs are illuminated amber, a module fault condition or failure has occurred.
Dual PSUs provide redundant power for the 5U system: if one PSU fails, the other keeps the system running while you replace the faulty module. PSUs are hot-swappable. Replacement of a PSU can be performed while the chassis is running, but the procedure must be completed within two minutes of the removal of the defective module. Verify that you have a replacement PSU on hand before removing the defective module.
Fan – 5U CRU
The five fans at the rear of the chassis maintain all system components below their maximum
temperature, assuming the ambient temperature is below 35ºC. Fan speed is governed by the
controller modules.
Fans are hot-swappable. Replacement of a fan can be performed while the chassis is running, but the
procedure must be completed within two minutes of the removal of the defective module. Verify that
you have a replacement module on hand before removing the defective fan.
Figure 43 provides an illustration of the fan (with LEDs) for the 5U84 chassis.
Figure 43 Fan 5U84 CRU
1 Fan OK LED: Green
2 Fan Fault LED: Amber/blinking amber
Note: If any of the fan LEDs are illuminated amber, a module fault condition or failure has occurred.
RAID Controller and Expansion IOMs
This section describes the RAID controllers and expansion IOMs used in QXS-G2-312, QXS-G2-324,
QXS-G2-412, QXS-G2-424, and QXS-G2-484 systems. They are mechanically and electrically
compliant to the latest SBB v2.1 specification.
The dimetric rear orientation in Figure 44 shows a 4-port FC/iSCSI controller module aligned for use
in the top controller module slot located on the 2U chassis rear panel. The controller module is also
properly aligned for use in either controller slot located on the 5U chassis rear panel.
Figure 44 provides an illustration of the CNC controller (FC/iSCSI) for the 2U12, 2U24, and 5U84
chassis.
Figure 44 CNC Controller CRU
Each controller module maintains VPD (Vital Product Data) in EEPROM devices. Controller modules
are interconnected by SBB-defined I2C buses on the midplane, so each SBB module can discover the
type and capabilities of its partner SBB module(s), and vice versa, within the chassis. A system
alarm occurs when incompatible configurations are detected.
12Gb/s Controller LEDs
The diagrams and tables that immediately follow describe the different controller modules that can
be installed into the rear panel of a RAID chassis. Controller modules are shown separately from the
chassis to make it easier to identify the components called out in each diagram and described in its
companion table.
NOTE: Consider the following when viewing RAID controller or expansion IOM diagrams appearing
on the following pages:
• In each diagram, the canister is oriented for insertion into the top controller/IOM slot (A) of 2U
chassis.
• When oriented for use in the bottom controller/IOM slot (B) of 2U chassis, the controller/IOM
labels appear upside down.
• In each diagram, the canister is oriented for insertion into either controller/IOM slot of 5U84
chassis.
CNC Controller (2-Port FC and 10GbE SFPs) and LEDs
Figure 45 provides an illustration of the CNC controller (2-port FC and 10GbE SFPs) and LEDs for the 2U12, 2U24, and 5U84 chassis.
NOTE: This CNC controller is used in the QXS-G2-312 and QXS-G2-324.
Figure 45 CNC (FC and 10GbE SFPs) Controller LEDs
LED / Description / Definition
1 Host 4/8/16Gb FC Link Status/Link Activity (note 1)
Off — No link detected.
Green — The port is connected and the link is up.
Blinking green — The link has I/O or replication activity.
2 Host 10GbE iSCSI Link Status/Link Activity (notes 2, 3)
Off — No link detected.
Green — The port is connected and the link is up.
Blinking green — The link has I/O or replication activity.
3 OK
Green — The controller is operating normally.
Blinking green — System is booting.
Off — The controller module is not OK, or is powered off.
4 Fault
Off — The controller is operating normally.
Amber — A fault has been detected or a service action is required.
Blinking amber — Hardware-controlled power-up, or a cache flush or restore error.
5 OK to Remove
Off — The controller is not prepared for removal.
Blue — The controller module is prepared for removal.
6 Identity
White — The controller module is being identified.
7 Cache Status
Green — Cache is dirty (contains unwritten data) and operation is normal. The unwritten information can be log or debug data that remains in the cache, so a green Cache Status LED does not, by itself, indicate that any user data is at risk or that any action is necessary.
Off — In a working controller, cache is clean (contains no unwritten data). This is an occasional condition that occurs while the system is booting.
Blinking green — A CompactFlash flush or cache self-refresh is in progress, indicating cache activity.
8 Network Port Activity Status (note 4)
Off — The Ethernet link is not established, or the link is down.
Green — The Ethernet link is up (applies to all negotiated link speeds).
9 Network Port Link Speed (note 4)
Off — Link is up at 10/100Base-T negotiated speeds.
Amber — Link is up and negotiated at 1000Base-T.
10 Expansion Port Status
Off — The port is empty or the link is down.
Green — The port is connected and the link is up.
Notes:
1. When in FC mode, the SFPs must be a qualified 8Gb or 16Gb fiber-optic option. A 16Gb/s SFP can run at 16Gb/s, 8Gb/s, 4Gb/s, or auto-negotiate its link speed. An 8Gb/s SFP can run at 8Gb/s, 4Gb/s, or auto-negotiate its link speed.
2. When in 10GbE iSCSI mode, the SFPs must be a qualified 10GbE iSCSI optic option.
3. When powering up and booting, iSCSI LEDs will be on or blinking momentarily, and then they switch to the mode of operation.
4. When a port is down, both LEDs are off.
CNC iSCSI Controller (2-Port 1Gb RJ-45 SFPs) and LEDs
Figure 46 provides an illustration of the CNC iSCSI controller (2-port 1Gb RJ-45 SFPs) and LEDs for the 2U12, 2U24, and 5U84 chassis.
NOTE: This CNC controller is used in the QXS-G2-312 and QXS-G2-324.
Figure 46 CNC iSCSI Controller LEDs
LED / Description / Definition
1 FC SFP: not used in this example (note 1)
The FC SFP is not shown in this example.
2 Host 1GbE iSCSI Link Status/Link Activity (notes 2, 3)
Off — No link detected.
Green — The port is connected and the link is up.
Blinking green — The link has I/O or replication activity.
3 OK
Green — The controller is operating normally.
Blinking green — System is booting.
Off — The controller module is not OK, or is powered off.
4 Fault
Off — The controller is operating normally.
Amber — A fault has been detected or a service action is required.
Blinking amber — Hardware-controlled power-up, or a cache flush or restore error.
5 OK to Remove
Off — The controller is not prepared for removal.
Blue — The controller module is prepared for removal.
6 Identity
White — The controller module is being identified.
7 Cache Status
Green — Cache is dirty (contains unwritten data) and operation is normal. The unwritten information can be log or debug data that remains in the cache, so a green Cache Status LED does not, by itself, indicate that any user data is at risk or that any action is necessary.
Off — In a working controller, cache is clean (contains no unwritten data). This is an occasional condition that occurs while the system is booting.
Blinking green — A CompactFlash flush or cache self-refresh is in progress, indicating cache activity.
8 Network Port Activity Status (note 4)
Off — The Ethernet link is not established, or the link is down.
Green — The Ethernet link is up (applies to all negotiated link speeds).
9 Network Port Link Speed (note 4)
Off — Link is up at 10/100Base-T negotiated speeds.
Amber — Link is up and negotiated at 1000Base-T.
10 Expansion Port Status
Off — The port is empty or the link is down.
Green — The port is connected and the link is up.
Notes:
1. When in FC mode, the SFPs must be a qualified 8Gb or 16Gb fiber-optic option. A 16Gb/s SFP can run at 16Gb/s, 8Gb/s, 4Gb/s, or auto-negotiate its link speed. An 8Gb/s SFP can run at 8Gb/s, 4Gb/s, or auto-negotiate its link speed.
2. When in 10GbE iSCSI mode, the SFPs must be a qualified 10GbE iSCSI optic option.
3. When powering up and booting, iSCSI LEDs will be on or blinking momentarily, and then they switch to the mode of operation.
4. When a port is down, both LEDs are off.
CNC Controller (4-Port FC and 10GbE SFPs) and LEDs
Figure 47 provides an illustration of the CNC controller (4-port FC and 10GbE SFPs) and LEDs for the 2U12, 2U24, and 5U84 chassis.
NOTE: This CNC controller is used in the QXS-G2-412, QXS-G2-424, and QXS-G2-484.
Figure 47 CNC (FC and 10GbE SFPs) Controller LEDs
LED / Description / Definition
1 Host 4/8/16Gb FC Link Status/Link Activity (note 1)
Off — No link detected.
Green — The port is connected and the link is up.
Blinking green — The link has I/O or replication activity.
2 Host 10GbE iSCSI Link Status/Link Activity (notes 2, 3)
Off — No link detected.
Green — The port is connected and the link is up.
Blinking green — The link has I/O or replication activity.
3 OK
Green — The controller is operating normally.
Blinking green — System is booting.
Off — The controller module is not OK, or is powered off.
4 Fault
Off — The controller is operating normally.
Amber — A fault has been detected or a service action is required.
Blinking amber — Hardware-controlled power-up, or a cache flush or restore error.
5 OK to Remove
Off — The controller is not prepared for removal.
Blue — The controller module is prepared for removal.
6 Identity
White — The controller module is being identified.
7 Cache Status (note 4)
Green — Cache is dirty (contains unwritten data) and operation is normal. The unwritten information can be log or debug data that remains in the cache, so a green Cache Status LED does not, by itself, indicate that any user data is at risk or that any action is necessary.
Off — In a working controller, cache is clean (contains no unwritten data). This is an occasional condition that occurs while the system is booting.
Blinking green — A CompactFlash flush or cache self-refresh is in progress, indicating cache activity.
8 Network Port Activity Status (note 5)
Off — The Ethernet link is not established, or the link is down.
Green — The Ethernet link is up (applies to all negotiated link speeds).
9 Network Port Link Speed (note 5)
Off — Link is up at 10/100Base-T negotiated speeds.
Amber — Link is up and negotiated at 1000Base-T.
10 Expansion Port Status
Off — The port is empty or the link is down.
Green — The port is connected and the link is up.
Notes:
1. When in FC mode, the SFPs must be a qualified 8Gb or 16Gb fiber-optic option. A 16Gb/s SFP can run at 16Gb/s, 8Gb/s, 4Gb/s, or auto-negotiate its link speed. An 8Gb/s SFP can run at 8Gb/s, 4Gb/s, or auto-negotiate its link speed.
2. When in 10GbE iSCSI mode, the SFPs must be a qualified 10GbE iSCSI optic option.
3. When powering up and booting, iSCSI LEDs will be on or blinking momentarily, and then they switch to the mode of operation.
4. The Cache Status LED supports power-on behavior and operational (cache status) behavior.
5. When a port is down, both LEDs are off.
CNC iSCSI Controller (4-Port 1Gb RJ-45 SFPs) and LEDs
Figure 48 provides an illustration of the CNC iSCSI controller (4-port 1Gb RJ-45 SFPs) and LEDs for the 2U12, 2U24, and 5U84 chassis.
NOTE: This CNC controller is used in the QXS-G2-412, QXS-G2-424, and QXS-G2-484.
Figure 48 CNC iSCSI Controller LEDs
LED / Description / Definition
1 FC SFP: not used in this example (note 1)
The FC SFP is not shown in this example.
2 Host 1GbE iSCSI Link Status/Link Activity (notes 2, 3)
Off — No link detected.
Green — The port is connected and the link is up.
Blinking green — The link has I/O or replication activity.
3 OK
Green — The controller is operating normally.
Blinking green — System is booting.
Off — The controller module is not OK, or is powered off.
4 Fault
Off — The controller is operating normally.
Amber — A fault has been detected or a service action is required.
Blinking amber — Hardware-controlled power-up, or a cache flush or restore error.
5 OK to Remove
Off — The controller is not prepared for removal.
Blue — The controller module is prepared for removal.
6 Identity
White — The controller module is being identified.
7 Cache Status (note 4)
Green — Cache is dirty (contains unwritten data) and operation is normal. The unwritten information can be log or debug data that remains in the cache, so a green Cache Status LED does not, by itself, indicate that any user data is at risk or that any action is necessary.
Off — In a working controller, cache is clean (contains no unwritten data). This is an occasional condition that occurs while the system is booting.
Blinking green — A CompactFlash flush or cache self-refresh is in progress, indicating cache activity.
8 Network Port Activity Status (note 5)
Off — The Ethernet link is not established, or the link is down.
Green — The Ethernet link is up (applies to all negotiated link speeds).
9 Network Port Link Speed (note 5)
Off — Link is up at 10/100Base-T negotiated speeds.
Amber — Link is up and negotiated at 1000Base-T.
10 Expansion Port Status
Off — The port is empty or the link is down.
Green — The port is connected and the link is up.
Notes:
1. When in FC mode, the SFPs must be a qualified 8Gb or 16Gb fiber-optic option. A 16Gb/s SFP can run at 16Gb/s, 8Gb/s, 4Gb/s, or auto-negotiate its link speed. An 8Gb/s SFP can run at 8Gb/s, 4Gb/s, or auto-negotiate its link speed.
2. When in 10GbE iSCSI mode, the SFPs must be a qualified 10GbE iSCSI optic option.
3. When powering up and booting, iSCSI LEDs will be on or blinking momentarily, and then they switch to the mode of operation.
4. The Cache Status LED supports power-on behavior and operational (cache status) behavior.
5. When a port is down, both LEDs are off.
Cache Overview and Status LED Details
This section provides the following information:
• Cache Overview
• Power On/Off Behavior
• Cache Status Behavior
Cache Overview
To enable faster data access from disk storage, the following types of caching are performed:
• Write-back caching.
• The controller writes user data into the cache memory in the controller module rather than
directly to the disks.
• Later, when the storage system is either idle or aging—and continuing to receive new I/O
data—the controller writes the data to the disks.
• Read-ahead caching.
• The controller detects sequential data access, reads ahead into the next sequence of
data—based upon settings—and stores the data in the read-ahead cache.
• Then, if the next read access is for cached data, the controller immediately loads the data into
the system memory, avoiding the latency of a disk access.
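The two caching behaviors can be sketched as a toy model. This is a conceptual illustration only, not the controller firmware; the block numbers and read-ahead depth are arbitrary:

```python
class CacheModel:
    """Toy model of write-back and read-ahead caching (illustrative only)."""

    def __init__(self):
        self.write_cache = {}  # dirty data, not yet written to disk
        self.read_cache = {}
        self.disk = {}

    def write(self, block: int, data: str) -> None:
        # Write-back: cache the data and acknowledge; defer the disk write.
        self.write_cache[block] = data

    def flush(self) -> None:
        # Done later, when the system is idle or the cached data is aging.
        self.disk.update(self.write_cache)
        self.write_cache.clear()

    def read(self, block: int, read_ahead: int = 2) -> str:
        # Read-ahead: on a miss, also stage the next blocks in the sequence.
        if block not in self.read_cache:
            for b in range(block, block + read_ahead + 1):
                if b in self.disk:
                    self.read_cache[b] = self.disk[b]
        return self.read_cache[block]


cache = CacheModel()
for blk, data in enumerate(["a", "b", "c"]):
    cache.write(blk, data)
cache.flush()                    # dirty data is written to "disk"
cache.read(0)                    # miss: blocks 0-2 staged into the read cache
print(sorted(cache.read_cache))  # [0, 1, 2]
```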
TIP: See the
and settings.
QXS G2 Disk Management Utility User Guide
Power On/Off Behavior
The storage chassis unified complex programmable logic device (CPLD) provides integrated Power
Reset Management (PRM) functions. During power on, discrete sequencing for power on display
states of internal components is reflected by blinking patterns displayed by the Cache Status LED.
Table 4 provides the Cache Status LED behavior.

Table 4 Cache Status LED – Power On Behavior

Display states reported by the Cache Status LED during the power on sequence:

Display state  0           1           2           3           4           5           6         7
Component      VP          SC          SAS BE      ASIC        Host        Boot        Normal    Reset
Blink pattern  On 1/Off 7  On 2/Off 6  On 3/Off 5  On 4/Off 4  On 5/Off 3  On 6/Off 2  Solid/On  Steady
NOTE: Once the chassis has completed the power on sequence, the Cache Status LED displays
Solid/On (Normal), before assuming the operating state for cache purposes.
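The blink patterns in Table 4 can be expressed as a small lookup, for example to work out which boot stage a stuck Cache Status LED indicates. This sketch is derived solely from the table above; the function name is invented for illustration:

```python
# Power-on display states from Table 4: blink pattern -> component stage.

POWER_ON_STATES = [
    # (display state, component, blink pattern)
    (0, "VP",     "On 1/Off 7"),
    (1, "SC",     "On 2/Off 6"),
    (2, "SAS BE", "On 3/Off 5"),
    (3, "ASIC",   "On 4/Off 4"),
    (4, "Host",   "On 5/Off 3"),
    (5, "Boot",   "On 6/Off 2"),
    (6, "Normal", "Solid/On"),
    (7, "Reset",  "Steady"),
]

def component_for_blink(on_count):
    """Map an observed 'On N/Off (8-N)' blink count to the boot component."""
    for state, component, pattern in POWER_ON_STATES:
        if pattern.startswith(f"On {on_count}/"):
            return component
    return None

print(component_for_blink(3))   # SAS BE
```

For instance, an LED repeatedly blinking three times then pausing for five counts corresponds to the SAS BE stage.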
Cache Status Behavior
If the LED is blinking evenly, a cache flush is in progress. When a controller module loses power and
write cache is dirty (contains data that has not been written to drives), the supercapacitor pack
provides backup power to flush (copy) data from write cache to CompactFlash memory. When cache
flush is complete, the cache transitions into self-refresh mode.
If the LED is blinking slowly, the cache is in self-refresh mode. In self-refresh mode, if
primary power is restored before the backup power is depleted (3–30 minutes, depending on various
factors), the system boots, finds data preserved in cache, and writes it to drives. This means the
system can be operational within 30 seconds, and before the typical host I/O time-out of 60 seconds,
at which point system failure would cause host-application failure. If primary power is restored after
the backup power is depleted, the system boots and restores data to cache from CompactFlash,
which can take about 90 seconds.
The cache flush and self-refresh mechanism is an important data protection feature; essentially four
copies of user data are preserved: one in controller cache and one in CompactFlash of each controller.
The Cache Status LED illuminates solid green during the boot-up process. This behavior indicates the
cache is logging all POSTs, which will be flushed to the CompactFlash the next time the controller
shuts down.
IMPORTANT: If the Cache Status LED illuminates solid green—and you wish to shut down the
controller—do so from the user interface, so unwritten data can be flushed to CompactFlash.
Controller Failure/Single-Controller Operational
Cache memory is flushed to CompactFlash in the case of a controller failure or power loss. During the
write to CompactFlash process, only the components needed to write the cache to the CompactFlash
are powered by the supercapacitor.
This process typically takes 60 seconds per 1Gbyte of cache. After the cache is copied to
CompactFlash, the remaining power left in the supercapacitor is used to refresh the cache memory.
While the cache is being maintained by the supercapacitor, the Cache Status LED blinks at a rate of
1/10 second on and 9/10 second off.
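Using the figures quoted above (roughly 60 seconds per 1 Gbyte of cache, and a self-refresh strobe of 1/10 second on, 9/10 second off), a back-of-the-envelope flush-time estimate can be sketched. The cache size used in the example is hypothetical:

```python
# Back-of-the-envelope cache flush estimate from the figures quoted above:
# ~60 seconds per 1 Gbyte of write cache copied to CompactFlash.

FLUSH_SECONDS_PER_GBYTE = 60
SELF_REFRESH_ON_S = 0.1     # Cache Status LED: 1/10 second on...
SELF_REFRESH_OFF_S = 0.9    # ...9/10 second off while the supercap holds cache

def estimated_flush_seconds(cache_gbytes):
    """Rough time to copy dirty cache to CompactFlash after a power loss."""
    return cache_gbytes * FLUSH_SECONDS_PER_GBYTE

print(estimated_flush_seconds(4))   # 240 (a hypothetical 4 GB cache)
```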
If the controller has failed or does not start, is the Cache Status LED on/blinking?

Table 5 Controller Failure/Single-Controller Operational

Answer: No, the Cache Status LED is off, and the controller does not boot.
Action: If the problem persists, replace the controller module.

Answer: No, the Cache Status LED is off, and the controller boots.
Action: The system has flushed data to disks. If the problem persists, replace the controller module.

Answer: Yes, at a strobe 1:10 rate - 1 Hz, and the controller does not boot.
Action: You may need to replace the controller module.

Answer: Yes, at a strobe 1:10 rate - 1 Hz, and the controller boots.
Action: The system is in self-refresh mode. If the problem persists, replace the controller module.

Answer: Yes, at a blink 1:1 rate - 2 Hz, and the controller does not boot.
Action: You may need to replace the controller module.

Answer: Yes, at a blink 1:1 rate - 1 Hz, and the controller boots.
Action: The system is flushing data to CompactFlash. If the problem persists, replace the controller module.
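The diagnosis in Table 5 can be rendered as a simple lookup. This is a sketch; the pairing of LED patterns to states follows the cache-status prose above (even 1:1 blinking indicates a flush in progress, slow 1:10 strobing indicates self-refresh), and the state names are invented for illustration:

```python
# Controller failure diagnosis sketch, following the cache-status prose:
# even 1:1 blinking = flush in progress; slow 1:10 strobe = self-refresh.

def cache_led_diagnosis(led, boots):
    """led: 'off' | 'strobe_1_10' | 'blink_1_1'; boots: bool."""
    if led == "off":
        return ("The system has flushed data to disks."
                if boots else
                "If the problem persists, replace the controller module.")
    if led == "strobe_1_10":
        return ("The system is in self-refresh mode."
                if boots else
                "You may need to replace the controller module.")
    if led == "blink_1_1":
        return ("The system is flushing data to CompactFlash."
                if boots else
                "You may need to replace the controller module.")
    raise ValueError(f"unknown LED state: {led!r}")
```

In every case, if the problem persists the recommended action is to replace the controller module.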
12Gb/s Expansion IOM
If optional expansion chassis have been cabled to add storage, the supported chassis are configured
with dual expansion IOMs.
Figure 49 provides an illustration of the expansion IOM and LEDs for the 2U12, 2U24, and 5U84
chassis.
Figure 49 Expansion IOM LEDs

LED  Description                              Definition
1    Identity                                 Blue — The IOM is being identified.
2    Fault                                    Off — The IOM is operating normally.
                                              Amber — A fault has been detected or a service action is required.
3    OK                                       Green — The expansion module is operating normally.
                                              Blinking green — System is booting.
                                              Off — The expansion module is powered off.
4    HD mini-SAS connector LEDs (A/B/C)       See Table 6 for Activity (Green) and Fault (Amber) LED states.
5    Ethernet Port Link/Active Status (Left)  Not used in this configuration.
6    Ethernet Port Link Speed (Right)         Not used in this configuration.

Table 6 provides companion data for Figure 49 above relative to LED states for A/B/C SAS port expansion.

Table 6 IOM LED Activity States

Condition                                                   Activity (Green)  Fault (Amber)
No cable present                                            Off               Off
Cable present: all links up/no activity                     On                Off
Cable present: all links up/with aggregate port activity    Blinking          Off
Critical fault: any fault causing operation of the cable
to cease or fail to start (e.g., over current trip)         Off               On
Non-critical fault: any fault that does not cause the
connection to cease operation (e.g., not all links are
established; over temperature)                              Blinking          Blinking: 1s on/1s off
IMPORTANT: RAID and expansion chassis configurations:
• When the expansion IOM shown above (Figure 49) is used with the QXS-G2-412, QXS-G2-424 and
QXS-G2-484 system controller modules for adding storage, its middle HD mini-SAS expansion port
(“B”) is disabled by the firmware.
• The Ethernet port on the expansion IOM is not used in RAID and expansion chassis configurations,
and is disabled.
Drive Modules
The QXS-G2-312, QXS-G2-324, QXS-G2-412, QXS-G2-424, and QXS-G2-484 systems support
different drive modules for use in 2U and 5U chassis. The drive modules used in 2U chassis are
referred to as drive carrier modules, whereas those used in 5U chassis are referred to as a Disk Drive in
Carrier (DDIC).
Drive Carrier Module in 2U Chassis
The drive carrier module comprises a hard disk held by a carrier.
• Each 2U12 drive slot holds a single low profile 1.0-inch high, 3.5-inch form factor drive in its
carrier.
• The drives are horizontal.
• The 2U12 chassis accommodates 3.5” SAS and 3.5” SATA drives.
• A special interposer is required for SATA drives (included with the SATA drive and its carrier
when purchased).
• Each 2U24 drive slot holds a single low profile 5/8-inch high, 2.5-inch form factor disk drive in its
carrier.
• The drives are vertical.
• The 2U24 chassis accommodates 2.5” SAS or SATA drives.
• A special interposer is required for SATA drives (included with the SATA drive and its carrier
when purchased).
The carriers have mounting locations for direct dock SAS drives.

A sheet steel carrier holds each drive. The carrier provides thermal conduction and radio frequency and electro-magnetic induction protection, and physically protects the drive.

The front cap also has an ergonomic handle, which provides the following functions:
• Secure location of the carrier into and out of drive slots.
• Positive spring-loading of the drive/midplane connector.

The carrier can use the following interface:
• Dual path direct dock Serial Attached SCSI.
Populating 2U12 Chassis with Drives
The 2U12 chassis ships with drives installed. Please review these rules:
• The minimum number of drives supported by the chassis is 1.
• Hard disk drives (HDD) and solid state drives (SSD) can be mixed in the chassis.
NOTE: If the chassis has no drives installed, always install the first drive into slot 0, and then
populate slots 1-11 sequentially with any additional drives.
Integers on the drives indicate drive slot numbering sequence (0-11). Figure 50 provides a front view
of the 2U12-drive chassis fully populated with drives.
Figure 50 2U12-Drive Chassis Front View
Populating 2U24 Chassis with Drives
The 2U24 chassis ships with drives installed. Please review these rules:
• The minimum number of drives supported by the chassis is 1.
• Hard disk drives (HDD) and solid state drives (SSD) can be mixed in the chassis.
NOTE: If the chassis has no drives installed, always install the first drive into slot 0, and then
populate slots 1-23 sequentially with any additional drives.
Integers on the drives indicate drive slot numbering sequence (0-23). Figure 51 provides a front view
of the 2U24-drive chassis fully populated with drives.
Figure 51 2U24-Drive Chassis Front View
NOTE: Diametric pictorial views of supported drive carriers are provided in the following
illustrations. Modules are shown oriented for insertion into drive slots located on the chassis front
panel.
LFF 3.5” Drive Carrier Module (SAS Drive)
Figure 52 provides an illustration of the dual path LFF 3.5” drive carrier module for the 2U12 chassis.
Green and amber LEDs on the front of each drive carrier module indicate drive status. The SEP
controls these LEDs.
Drive Blanks
Drive blanks, also known as dummy drive carrier modules, are provided in 3.5" (2U12) and 2.5" (2U24)
form factors. They must be installed in empty disk drive slots to create a balanced air flow.
Figure 54 provides an illustration of the 3.5” drive blank for the 2U12 chassis and the 2.5” drive blank for the 2U24 chassis.

Figure 54 2.5” and 3.5” Drive Blank

1 3.5” Drive Blank
2 2.5” Drive Blank

DDIC in 5U chassis

Each drive is housed in a carrier that enables secure insertion of the drive into the drawer with the appropriate SAS carrier transition card.
DDIC with 3.5” Drive
Figure 55 shows a DDIC with a 3.5" drive for the 5U84 chassis.
Figure 55 DDIC with 3.5” Drive
DDIC with Adapter and 2.5” Drive
Figure 56 shows a DDIC with adapter and 2.5" drive for the 5U84 chassis.
Figure 56 DDIC with Adapter and 2.5” Drive
The DDIC features a latch button (center of top view) and a slide latch with arrow label (left of latch
button). These features allow you to install and secure the DDIC into the drive slot within the drawer.
They also allow you to disengage the DDIC from its slot, and remove it from the drawer. The DDIC
features a single Drive Fault LED (top view on right), which illuminates amber when the disk has a
fault.
Populating Drawers with DDICs
The 5U84 chassis does not ship with DDICs installed. Please review these rules:
• The minimum number of drives supported by the chassis is 14.
• DDICs must be added to drive slots in rows (14 drives at a time).
• Beginning at the front of the drawer(s), install DDICs consecutively by number, and alternately
between the top drawer and the bottom drawer.
• Namely, install first at slots 0 through 13 in the top drawer, and then 42 through 55 in the
bottom drawer.
• After that, install slots 14 through 27, and so on.
• The number of populated rows must not differ by more than one row between the top and
bottom drawers.
• Hard disk drives (HDD) and solid state drives (SSD) can be mixed in the same drawer.
• HDDs installed in the same row should have the same rotational speed.
• Although DDICs holding 3.5" disks can be intermixed with DDICs holding 2.5" disks within the
chassis, each row should be populated with drives of the same form factor (all LFF or all SFF).
NOTE: For additional information, see Populating 5U84 Drawers on page 195 within 5U84 Chassis
CRU Replacement on page 190.
Figure 57 shows a 5U84 chassis drawer fully populated with DDICs (42 drives).
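The drawer-population rules above (rows of 14 drives, alternating between the top and bottom drawers) can be sketched as an installation-order generator. The slot ranges follow the text (top drawer slots 0–41, bottom drawer slots 42–83); the function name is invented for illustration:

```python
# Sketch of the 5U84 DDIC installation order described above:
# rows of 14 drives, alternating between the top drawer (slots 0-41)
# and the bottom drawer (slots 42-83).

ROW = 14  # DDICs must be added one full row (14 drives) at a time

def ddic_install_rows():
    """Yield (drawer, slot_range) pairs in the required installation order."""
    for row in range(3):                        # three rows per drawer
        top = row * ROW                         # top drawer: 0-13, 14-27, 28-41
        yield ("top", range(top, top + ROW))
        bottom = 42 + row * ROW                 # bottom drawer: 42-55, 56-69, 70-83
        yield ("bottom", range(bottom, bottom + ROW))

for drawer, slots in ddic_install_rows():
    print(f"{drawer:6s} drawer: slots {slots.start}-{slots.stop - 1}")
```

The generated order (0–13, then 42–55, then 14–27, and so on) matches the sequence given in the rules, and the top and bottom drawers never differ by more than one populated row.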
Controller modules actively manage the chassis. Each controller has a SAS expander with its own storage enclosure processor (SEP) that provides an SES target for a host to interface with through the ANSI SES standard. If one of these controllers fails, the other module will continue to operate.
Refer to the controller’s specification or the SES Interface specification for definitions of the module’s
functions and its SES control.
Management Interfaces
Upon completing the hardware installation, you can access the controller module’s web-based
management interface—disk management utility (GUI)—to configure, monitor, and manage the
storage system. See also “Accessing the disk management utility”.
The controller module also provides a CLI in support of command entry and scripting.
Chapter 3
Installation

This chapter provides installation information for the following 12G QXS systems:
• QXS-G2-312: 12-Drive (2-Port: FC or iSCSI)
• QXS-G2-324: 24-Drive (2-Port: FC or iSCSI)
• QXS-G2-412: 12-Drive (4-Port: FC or iSCSI)
• QXS-G2-424: 24-Drive (4-Port: FC or iSCSI)
• QXS-G2-484: 84-Drive (4-Port: FC or iSCSI)

Installation Checklist

This section shows how to plan for and successfully install your system into an industry standard 19-inch rack cabinet.

CAUTION: To install the system, use only the power cords supplied, or power cables that match the specification quoted in AC Power Cords on page 223.

Table 7 outlines the steps required to install the chassis, and initially configure and provision the storage system. To ensure successful installation, perform the tasks in the order presented.
Table 7 Installation Checklist

Step 1: Unpack the chassis.
• See Unpacking Chassis on page 59.

Step 2: Install the RAID chassis and optional expansion chassis in the rack. (See note 1.)
• See Required Tools on page 61.
• See Requirements for rackmount installation on page 62.
• See Installing 2U Chassis on page 62.
• See Installing 5U Chassis on page 64.

Step 3: 2U12 and 2U24 chassis ship with drives installed.
• See Full Disk Encryption (FDE) on page 65.
• See Populating 2U12 Chassis with Drives on page 49 (if required).
• See Populating 2U24 Chassis with Drives on page 50 (if required).
5U84 chassis: populate drawers with drives (DDIC–disk in drive carrier). Drives ship in two separate boxes (42 drives in each box).
• See Populating Drawers with DDICs on page 53.
• See Installing a 5U84 DDIC on page 194.

Step 4: Connect power cords.
• See Power Cord Connection on page 71.

Step 5: Test chassis connectivity.
• See Testing Chassis Connections on page 72.

Step 6: For CNC models, verify the host interface protocol setting (not necessary for SAS models).
• See CNC Technology on page 73. The CNC controllers allow for setting the host interface protocol for qualified SFP options.
• See Change CNC Port Mode on page 98.

Step 7: Install required host software.
• See Host System Requirements on page 72.

Step 8: Connect hosts. (See note 2.)
• See Host Connection on page 76.

Step 9: Connect remote management hosts.
• See Connecting Management Host to Network on page 83.

Step 10: Obtain IP values and set network port IP properties on the RAID chassis.
• See Obtaining IP Values and System Settings on page 85.
• For the USB CLI port and cable, see USB Device Connection on page 225.

Step 11: Perform initial configuration tasks. (See note 3.)
• Sign in to the web-browser interface to access the application GUI. See the “Getting Started” chapter in the QXS G2 Disk Management Utility Guide.
• Verify firmware revisions and update if necessary. See Updating Firmware on page 84; also see the same topic in the QXS G2 Disk Management Utility Guide.
• Initially configure and provision the system using the disk management utility (GUI). See the topics about configuring the system and provisioning (One Button Configuration: OBC) the system in the QXS G2 Disk Management Utility Guide.

Step 12: 2U12 and 2U24 chassis: install the bezel.
• See Installing a 2U Bezel on page 167.

Notes:
1 The environment in which the chassis operates must be dust-free to ensure adequate airflow.
2 For more information about hosts, see the About hosts topic in the QXS G2 Disk Management Utility User Guide.
3 The QXS G2 Disk Management Utility (GUI) is introduced in Accessing Disk Management Utility (GUI) on page 112. See the QXS G2 Disk Management Utility User Guide or online help for additional information.
Planning for Installation
Before beginning the chassis installation, familiarize yourself with the system configuration
requirements. The figures listed below show the locations for each plug-in module (CRU):
• 2U12 front panel: see 2U12-Drive Chassis Front View on page 16
• 2U24 front panel: see 2U24-Drive Chassis Front View on page 16
• 2U RAID chassis rear panel:
• 4-host port controllers
• See QXS-G2-412, QXS-G2-424, and QXS-G2-484 RAID Chassis Rear View (4-Host Port
Controllers) on page 18
• 2U RAID chassis rear panel:
• 2-host port controllers
• See QXS-G2-312 and QXS-G2-324 RAID Chassis Rear View (2-Host Port Controllers) on page 16
• 2U expansion chassis rear panel: see 2U12-Drive/2U24-Drive Expansion Chassis Rear View on
page 19
• 5U front panel: see 5U Chassis Drive Slots View on page 23
• 5U RAID chassis rear panel: see 5U84 RAID Chassis (Rear View/Two CNC Controllers) on page 24
• 5U expansion chassis rear panel: see 5U84 Expansion Chassis (Rear View) on page 26
IMPORTANT: Installation work should be performed by qualified service personnel.
Table 8 provides storage system configuration information.

Table 8 Storage System Configuration

Drive carrier module (2U front panel): All drive slots must hold either a drive carrier or a dummy drive carrier module. Empty slots are not allowed. At least one drive must be installed.

DDIC (5U front panel): A maximum of 84 disks are installed (42 disks per drawer). A minimum of 14 disks is required. Follow the drawer population rules. See Populating Drawers with DDICs on page 53.

PSU (2U rear panel): Two PSUs provide full power redundancy, allowing the system to continue to operate while a faulty PSU is replaced.

PSU (5U rear panel): Two PSUs provide full power redundancy, allowing the system to continue to operate while a faulty PSU is replaced.

Fan (5U rear panel): Five fans provide airflow circulation, maintaining all system components below the maximum temperature allowed.

RAID controller (rear panel): Two RAID controllers must be installed for this configuration (RAID chassis).

Expansion IOM (rear panel): Two expansion IOMs must be installed for this configuration (expansion chassis).
NOTE: Although different drive modules and rear panel CRUs are used by the different chassis form
factors, whether used for RAID chassis or expansion chassis configuration, the RAID controllers and
expansion IOMs are common across 2U and 5U chassis.
Preparing for Installation
NOTE: Chassis configurations:
• 2U RAID chassis are delivered without drives installed (RAID controllers and PSUs are installed).
• 2U expansion chassis are delivered without drives installed (expansion IOMs and PSUs are
installed).
• 5U RAID chassis are delivered:
• Without drives installed within the drawers.
• Drives ship in 42-drive packs (two 42-drive packs for a total of 84 drives).
• RAID controllers, fans, and PSUs are installed within the chassis.
• 5U expansion chassis are delivered:
• Without drives installed in the drawers.
• Drives ship in 42-drive packs (two 42-drive packs for a total of 84 drives).
• Expansion IOMs, fans, and PSUs are installed within the chassis.
CAUTION: Lifting chassis:
• A 2U chassis—together with all its component parts—is too heavy for one person to lift and install
into the rack cabinet. Two people are required to safely move a 2U chassis.
• A 5U chassis—delivered without DDICs installed—requires four people to lift it from the box. A
suitable mechanical lift is required to hoist the chassis for positioning in the rack.
Make sure you wear an effective anti-static wrist or ankle strap and obey conventional ESD
precautions when touching modules and components. Do not touch midplane, motherboard, or
module connectors. See also ESD Precautions on page 165. This section provides important
preparation requirements and handling procedures for use during product installation.
Preparing Site and Host Server
Before beginning the chassis installation, verify that the site where you will install your storage
system has the following:
• A standard AC power supply from an independent source or a rack power distribution unit with an Uninterruptible Power Supply (UPS).
• A host computer configured with the appropriate software, BIOS, and drivers. Contact your supplier for the correct software configurations.
Before installing the chassis, verify the existence of the following:
• Qualified cable options for host connection
Depending upon the controller module: FC or iSCSI HBA and appropriate switches (if used)
• One power cord per PSU
• Rail kit (for rack installation)
Please refer to your supplier for a list of qualified accessories for use with the chassis. The accessories
box contains the power cords and other accessories.
Unpacking Chassis
NOTE: The bezel assembly and applicable cables and/or power cords are shipped in separate boxes.
Unpack the chassis as follows:
1 Examine the packaging for crushes, cuts, water damage, or any other evidence of mishandling
during transit.
• If you suspect that damage has happened, photograph the package before opening, for
possible future reference.
• Retain original packaging materials for use with returns.
2 The unpacking sequence pertaining to 2U chassis is shown in Figure 58.
Figure 58 Unpack 2U Chassis (2U24 Chassis Shown)
The 2U chassis are supplied with the midplane PCB, PSUs, drives, and RAID controllers and/or
expansion IOMs installed. For information about plug-in module replacement, see “Module removal
and replacement”. Drive blanks must be installed in unused drive slots.
3 The unpacking procedure pertaining to the 5U84 is shown in Figure 59.
CAUTION: The chassis does not include DDICs, but all rear panel modules are installed.
• This partially-populated chassis is quite heavy: 64 kg (142 lb).
• Verify that each strap is securely wrapped and buckled.
• Four people are required to lift the 5U84 from the box.
Figure 59 Unpack 5U84 Chassis
The railkit and accessories box is located immediately below the box lid (Figure 58 and Figure 59).
CAUTION: With four persons—positioned one at each corner of the chassis—grip the straps
securely by the loops, and lift the chassis out of the box, using appropriate lifting technique. Place the
chassis in a static-protected area.
NOTE: If your product model uses CNC ports for FC or iSCSI, you must locate and install the SFPs.
See also Locate SFP Transceivers on page 231.
Required Tools
Required tools include:
• Flat blade screwdriver
• Torx T10/T20 bits for locks and select CRU replacement
Requirements for rackmount installation
You can install the chassis in an industry standard 19-inch cabinet capable of holding 2U and 5U form factors.
• Minimum depth: 707 mm (27.83") from rack posts to maximum extremity of chassis (includes rear
panel cabling and cable bend radii).
• Weight:
• Up to 32 kg (71 lb), dependent upon configuration, per 2U chassis.
• Up to 128 kg (282 lb), dependent upon configuration, per 5U chassis.
• The rack should cause a maximum back pressure of 5 pascals (0.5 mm water gauge).
• Before you begin, ensure that you have adequate clearance in front of the rack for installing the
rails.
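Given the per-chassis maximums quoted above (up to 32 kg per 2U chassis and up to 128 kg per 5U chassis, configuration dependent), a quick worst-case rack-load check can be sketched. The chassis mix in the example is hypothetical:

```python
# Worst-case rack load check using the per-chassis maximums quoted above.
# Actual weights depend on configuration; these are upper bounds.

MAX_KG = {"2U": 32, "5U": 128}

def worst_case_rack_load_kg(chassis):
    """chassis: iterable of '2U' / '5U' form factors planned for the rack."""
    return sum(MAX_KG[c] for c in chassis)

# e.g. one 5U84 RAID chassis plus two 2U expansion chassis (hypothetical mix):
print(worst_case_rack_load_kg(["5U", "2U", "2U"]))  # 192
```

Compare the result against the rated load of your rack and rail kit before installation.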
Rackmount rail kit
Various sets of rack mounting rails are available for use in 19-inch rack cabinets. These rails have been
designed and tested for the maximum chassis weight, and to make sure that multiple chassis may be
installed without loss of space within the rack. Use of other mounting hardware may cause some loss
of rack space. Contact your supplier to make sure suitable mounting rails are available for the rack
you plan to use.
Installing 2U Chassis
Install the chassis as follows:
1 Remove the rack mounting rail kit from the accessories box, and examine for damage.
2 Use the following procedure to attach the rail kit brackets to the rack post as shown in Figure 60.
a Set the location pin at the rear of the rail into a rear rack post hole.
• Attach the bracket to the rear rack post: use the washers and screws supplied (callout 9, A
screw).
• Leave the screws loose.
b Extend the rail to fit between the front and rear rack posts.
c Attach the bracket to the front rack post using the washers and screws supplied (callout 3,
A screw).
Leave the screws loose.
d Tighten the locking screw (callout 7) located along the inside of the rear section of the rack
bracket.
62QXS G2 Hardware Installation and Maintenance Guide
3 Repeat the above sequence of steps for the companion rail.
Figure 60 2U Secure Brackets to Rail

1 2U Left ear (ops panel cover exploded to show left ear flange fastening screw)
2 Fastening screw: B
3 Clamping screw (front): A
4 Front rack post: square hole
5 Rail location pins
6 Left hand (LH) rail
7 Locking screw
8 Rear rack post: square hole
9 Clamping screw (rear): A
10 2U Chassis fastening screw: C
11 Screws: Rail kit fasteners
4 Install the chassis into the rack:
a Lift the chassis and align it with the installed rack rails, taking care to ensure that the chassis
remains level.
b Carefully insert the chassis into the rack rails and push fully in.
c Tighten the mounting screws (callout 9, A screw) in the rear rail kit brackets.
d Slide the chassis forward until it reaches the hard stops—approximately 400 mm (15.75”)
—and tighten the mounting screws (callout 3, A screws) in the front rail kit bracket.
e Return the chassis to the fully home position.
5 Install the 2U chassis fastening screw (callout 10, C screw) to secure the chassis to the back of the rack.
Installing 5U Chassis
The 5U84 chassis is delivered without the disks installed. Due to the weight of the chassis, install the
chassis into the rack without DDICs installed, and remove the rear panel CRUs to lighten the chassis
weight.
The adjustment range of the rail kit from the inside of the front post to the inside of the rear post is 660 mm – 840 mm (26.0”–33.1”). This range suits a one-meter-deep rack within Rack Specification IEC 60297.
Install the chassis as follows:
1 To facilitate access, remove the doors from the cabinet.
2 Ensure that the pre-assembled rails are at their shortest length (refer to the reference label on the
inside of the rail).
3 Locate/seat the rail location pins (callout 5) inside the front of the rack, and extend the length of
the rail assembly to enable the rear location pins to locate/insert into the back of the rack (callout
7).
• Ensure the pins are fully located/seated in the square or round holes in the rack posts.
• See also Figure 61.
Figure 61 5U Secure Brackets to Rail

1 Front rack post: square hole
2 Front left portion 5U chassis (for reference)
3 Fastening screw: A
4 Clamping screw: B
5 Rail location pins
6 Left hand (LH) rail
7 Rear rack post: square hole
8 Clamping screw: B
9 Fastening screw: C
10 Rear left portion 5U chassis (for reference)
11 Middle slide locking screws
12 Screws: Rail kit fasteners
4 Fully tighten all clamping screws.
• Front: Callout 4 (screw B)
• Rear: Callout 8 (screw B)
5 Tighten the middle slide locking screws (callout 11).
6 Ensure the rear spacer clips (x4) are fitted to the edge of the rack post.
7 Slide the chassis fully home on its rails.
8 Fasten the front of the chassis using the chassis fastening screws (callout 3, screw A, x4) as shown
in Figure 61.
9 Fix the rear of the chassis to the sliding bracket with the rear chassis fastening screws (callout 9,
screw C).
CAUTION: Use only power cords supplied, or power cords that comply with the specifications in AC
Power Cords on page 223.
CAUTION: Once the chassis is installed in the rack, dispose of the lifting straps.
Due to the difficulty in attaching the straps once the chassis is installed into the rack, the straps are
not suitable for removing the chassis from the rack.
10 Reinsert all of the rear panel CRUs, and install all of the DDICs into the drawers accessed from the
front panel per the following instructions:
• Install five fans: see Installing a 5U84 System Fan on page 206.
• Install two PSUs: see Installing a 5U84 System PSU on page 204.
• Install two RAID controllers or expansion IOMs: see Installing a 5U84 RAID Controller or Expansion IOM on page 210.
• Install all DDICs: see Installing a 5U84 DDIC on page 194.
• See also:
• Accessing 5U84 Drawers on page 191
• Populating 5U84 Drawers on page 195
11 If you removed the cabinet doors, install them now.
Full Disk Encryption (FDE)
The Full Disk Encryption (FDE) feature available via the management interfaces requires use of
self-encrypting drives (SED) which are also referred to as FDE-capable drive modules. When installing
FDE-capable drive modules, follow the same procedures for installing drives that do not support FDE.
The procedures for using the FDE feature, such as securing the system, viewing drive FDE status, and
clearing and importing keys are performed using the disk management utility (GUI) or CLI commands.
NOTE: When moving FDE-capable drive modules for a disk group, stop I/O to any volumes in the
disk group before removing the drive modules.
Follow the procedures in “Module removal and replacement” for replacing drive modules relative to
the chassis type (2U or 5U). Import the keys for the drives so that the drive content becomes
available.
Federal Information Processing Standards (FIPS)
FIPS are U.S. government computer security standards that specify requirements for cryptography modules. FIPS certification is widely recognized and demanded by government agencies and industries subject to strict security compliance. FIPS certification is available on self-encrypting drives (SEDs).
Connecting RAID Chassis and Expansion Chassis
Use this section to connect the RAID chassis to the expansion chassis. Connecting/cabling suggestions
are provided. The customer can select the applicable connecting/cabling best suited for their
installation.
NOTE: Both reverse cabling and straight-through cabling are supported for the RAID chassis and expansion chassis. Reverse cabling is suggested because it provides redundancy (an alternate path to access chassis).
Table 9 provides the RAID chassis system, the drive form factor per chassis, and the number of expansion chassis supported in a RAID chassis and expansion chassis combination.

Table 9 RAID Chassis with Supported Expansion Chassis

QXS-G2-312 RAID chassis attach, 12 drives (3.5” LFF): up to 3 × 2U12, up to 3 × 2U24, N/A for 5U84, up to 3 intermixed 2U expansion chassis.

QXS-G2-324 RAID chassis attach, 24 drives (2.5” SFF): up to 3 × 2U12, up to 3 × 2U24, N/A for 5U84, up to 3 intermixed 2U expansion chassis.

QXS-G2-412 RAID chassis attach, 12 drives (3.5” LFF): up to 9 × 2U12, up to 9 × 2U24, up to 3 × 5U84, up to 9 intermixed 2U expansion chassis.

QXS-G2-424 RAID chassis attach, 24 drives (2.5” SFF): up to 9 × 2U12, up to 9 × 2U24, up to 3 × 5U84, up to 9 intermixed 2U expansion chassis.

QXS-G2-484 RAID chassis attach, 84 drives (2.5” SFF or 3.5” LFF): N/A for 2U12, N/A for 2U24, up to 3 × 5U84, N/A for intermixed 2U expansion chassis.
The chassis support both straight-through and reverse SAS cabling. Reverse cabling allows any drive
chassis to fail—or be removed—while maintaining access to other chassis. Fault tolerance and
performance requirements determine whether to optimize the configuration for high availability or
high performance when cabling.
Cabling diagrams in this section show fault-tolerant cabling patterns. RAID controller and expansion
IOMs are identified by chassis ID and IOM ID, such as 0A and 0B for controllers (RAID chassis), 1A and
1B for the first expansion chassis IOMs in a cascade, and so forth. When connecting multiple
expansion chassis, use reverse cabling to ensure the highest level of fault tolerance, enabling
controllers to access remaining expansion chassis if an expansion chassis fails.
Cable Requirements for Expansion Chassis
When adding storage, use only Quantum or OEM-qualified cables, and observe the following
guidelines:
• When installing SAS cables to expansion IOMs, use only supported HD mini-SAS x4 cables.
• Qualified HD mini-SAS to HD mini-SAS 0.5 m (1.64') cables are used to connect cascaded chassis in
the rack.
• The maximum expansion cable length allowed in any configuration is 2 m (6.56').
• When adding more than two expansion chassis, you may need to purchase additional cables,
depending upon number of RAID chassis, expansion chassis, and cabling method used.
• You may need to order additional or longer cables when reverse-cabling a fault-tolerant
configuration.
The rear panel views of the 2U12 and 2U24 RAID chassis are identical to one another. The rear panel
views of the expansion chassis are also identical to one another.
Whether configured as a RAID chassis or an expansion chassis, the rear panel view of the 5U84 is very
different from that of the 2U12 or 2U24. Although the 5U84 uses the same controllers and expansion
IOMs used in the 2U chassis, the remaining CRUs accessible from the chassis rear panel differ from
those used in the 2U chassis.
NOTE: For clarity, the schematic diagrams show only relevant details such as the controller and
expansion IOM face plate outlines and expansion ports.
For RAID chassis and expansion chassis rear view detailed illustrations refer to the following sections:
• QXS-G2-412, QXS-G2-424, and QXS-G2-484 RAID Chassis Rear View (4-Host Port Controllers) on
page 18
• QXS-G2-312 and QXS-G2-324 RAID Chassis Rear View (2-Host Port Controllers) on page 16
• 2U12-Drive/2U24-Drive Expansion Chassis Rear View on page 19
Figure 62 (left) shows reverse cabling of a RAID chassis dual-controller 2U chassis and supported 2U
drive chassis configured with dual expansion IOMs. Controller module 0A is connected to expansion
module 1A, with a chain of connections cascading down (blue). Controller module 0B is connected to
the lower expansion module (9B), of the last expansion chassis, with connections moving in the
opposite direction (green). Reverse cabling allows any expansion chassis to fail—or be
removed—while maintaining access to other chassis.
The diagram on the right shows the same storage components connected using straight-through
cabling. With this method, if an expansion chassis fails, the chassis that follow the failed chassis in
the chain are no longer accessible until the failed chassis is repaired or replaced.
The 2U expansion chassis shown in Figure 62 can either be of the same type or they can be a mixture
of the 2U12 and 2U24 models. Given that supported expansion chassis models use 12Gb/s SAS
link-rate and SAS 3.0 expanders, they can be ordered in desired sequence within the system,
following the RAID chassis. The middle SAS ports (Port B) on expansion modules are not used. Refer
to Expansion Chassis IOM on page 19 for additional information.
Figure 62 2U Reverse Cabling (left) and Straight-Through Cabling (right)
Mixing 2U and 5U84 chassis when configuring a storage system is not supported. Representative
examples showing supported configurations and configuration limits are provided above for 2U12
and 2U24, and on the following pages for 5U84.
Reverse Cabling for 5U84 Systems
Figure 63 provides an illustration of connecting RAID chassis and expansion chassis in a reverse
cabling (fault tolerant) configuration for 5U84 systems.
Figure 63 5U84 Reverse Cabling
Straight-Through Cabling for 5U84 Systems
Figure 64 provides an illustration of connecting RAID chassis and expansion chassis in a
straight-through cabling configuration for 5U84 systems.
Figure 64 5U84 Straight-Through Cabling
NOTE: Figure 63 and Figure 64 show maximum configuration cabling for the 5U84 RAID chassis
with the 5U84 expansion chassis (total of 4 chassis cabled together).
CAUTION: Refer to Connecting RAID Chassis and Expansion Chassis on page 66 for comparative
characteristics for using reverse cabling (fault tolerant) or straight-through cabling configurations.
Reverse Cabling for 2U to 5U84 Systems
Figure 65 provides an illustration of connecting a 2U RAID chassis and three 5U84 expansion chassis
in a reverse cabling (fault tolerant) configuration.
Figure 65 2U to 5U Reverse Cabling
Straight-Through Cabling for 2U to 5U84 Systems
Figure 66 provides an illustration of connecting a 2U RAID chassis and three 5U84 expansion chassis
in a straight-through cabling configuration.
Figure 66 2U to 5U Straight-Through Cabling
NOTE: Figure 65 and Figure 66 show maximum configuration cabling for a 2U RAID chassis with the
5U84 expansion chassis (total of 4 chassis cabled together).
Power Cord Connection
Connect a power cord from each PSU on the chassis rear panel to the PDU (power distribution unit)
as shown in the illustrations below.
Figure 67 provides an illustration of connecting 2U RAID chassis PSUs to an AC PDU.
Figure 67 Connecting 2U RAID chassis PSUs to AC PDU
Figure 68 provides an illustration of connecting 5U RAID chassis PSUs to an AC PDU.
Figure 68 Connecting 5U RAID chassis PSUs to AC PDU
IMPORTANT: When more than one PSU is fitted, the power cords must be connected to at least two
separate and independent power sources to ensure redundancy. When the storage system is ready
for operation, ensure that each PSU power switch is set to the On position.
CAUTION: Power connection concerns:
• Always remove the power connections before you remove the PSU from the chassis.
• When bifurcated power cords (Y leads) are used, these cords must only be connected to a supply
range of 200–240V AC.
Testing Chassis Connections
See Powering On/Powering Off on page 105. Once the power-on sequence succeeds, the storage
system is ready to be connected as described in Host Connection on page 76.
Grounding Checks
The product must only be connected to a power source that has a safety electrical earth connection.
CAUTION: If more than one chassis goes into a rack, the importance of the earth connection to the
rack increases because the rack will have a larger Earth Leakage Current (Touch Current).
Examine the earth connection to the rack before power on. An electrical engineer who is qualified to
the appropriate local and national standards must do the examination.
Host System Requirements
NOTE: Refer to the QXS-G2 (12G) Quantum Interoperability and Certification Matrix for Quantum
model numbers, firmware release, firmware required for host systems, RAID hardware certifications,
and HBA attach.
Hosts connected to QXS-G2-312, QXS-G2-324, QXS-G2-412, QXS-G2-424, and QXS-G2-484 systems
must meet the following requirements:
• Depending on your system configuration, host operating systems may require that multipathing is
supported.
• If fault tolerance is required, then multipathing software may be required.
• Host-based multipath software should be used in any configuration where two logical paths
between the host and any storage volume may exist at the same time.
• This would include most configurations where there are multiple connections to the host or
multiple connections between a switch and the storage.
• Use native Microsoft MPIO DSM support with Windows Server 2008, Windows Server 2012, and
Windows Server 2016. Use either the Server Manager or the mpclaim CLI tool to perform the
installation.
See the following web sites for information about using native Microsoft MPIO DSM:
• https://support.microsoft.com
• https://technet.microsoft.com (search the site for “multipath I/O overview”)
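As a sketch of the mpclaim option mentioned above—the flags below are from Microsoft's mpclaim tool, but confirm them against your Windows Server release before use—native MPIO can be enabled from an elevated command prompt:

```shell
# Hedged sketch: enable native Microsoft MPIO with the mpclaim CLI.
# "-r" reboots the server to complete the claim; use "-n" to defer the reboot.
mpclaim -r -i -a ""

# After the reboot, list MPIO-managed disks and their path states.
mpclaim -s -d
```

Alternatively, the Multipath I/O feature can be added through Server Manager and configured from its MPIO control panel.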
Cabling Considerations
Common cabling configurations address hosts, RAID chassis, expansion chassis, and switches. Host
interface ports on the QXS-G2-312, QXS-G2-324, QXS-G2-412, QXS-G2-424, and QXS-G2-484 RAID
chassis can connect to respective hosts via direct-attach or switch-attach. Cabling systems to enable
use of the optional replication feature—to replicate volumes—is yet another important cabling
consideration. See Connecting Management Host to Network on page 83. The FC and iSCSI product
models can be licensed to support replication.
Use only QXS-G2-312, QXS-G2-324, QXS-G2-412, QXS-G2-424, and QXS-G2-484 system or
OEM-qualified cables to connect host to RAID chassis:
• Qualified Fibre Channel SFP and cable options
• Qualified 10GbE iSCSI SFP and cable options
• Qualified 1Gb RJ-45 SFP and cable options
A host identifies an external port to which the storage system is attached. The external port may be a
port in an I/O adapter (such as an FC HBA) in a server. Cable connections vary depending on
configuration. This section describes host interface protocols supported by QXS-G2-312,
QXS-G2-324, QXS-G2-412, QXS-G2-424, and QXS-G2-484 RAID chassis, while showing a few
common cabling configurations.
NOTE: QXS-G2-312, QXS-G2-324, QXS-G2-412, QXS-G2-424, and QXS-G2-484 controllers use
Unified LUN Presentation (ULP), which enables a host to access mapped volumes through any
controller host port.
ULP can show all LUNs through all host ports on both controllers, and the interconnect information is
managed by the controller firmware. ULP appears to the host as an active-active storage system,
allowing the host to select any available path to access the LUN, regardless of disk group ownership.
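Because ULP presents the array as active-active, host-side multipath software should see multiple equivalent paths to each mapped volume. As an illustrative sketch only (assuming a Linux host with device-mapper-multipath installed; your host OS and tooling may differ), the paths can be inspected with:

```shell
# Hedged sketch (Linux host, device-mapper-multipath assumed installed):
# list each multipath device and the per-controller paths beneath it.
multipath -ll

# Reload the multipath maps after cabling or zoning changes.
multipath -r
```

With reverse cabling and dual controllers, each volume should show paths through both controller modules.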
TIP: See the topic about configuring system settings in the QXS G2 Disk Management Utility User
Guide to initially configure the system, or change system configuration settings (such as configuring
host ports).
CNC Technology
The QXS-G2-312, QXS-G2-324, QXS-G2-412, QXS-G2-424, and QXS-G2-484 FC/iSCSI models use
Converged Network Controller (CNC) technology, allowing you to select the desired host interface
protocol(s) from the available FC or iSCSI host interface protocols supported by the system. The small
form-factor pluggable (SFP transceiver or SFP) connectors used in CNC ports are further described in
the subsections below.
Refer to the following for additional CNC information:
• QXS-G2-412, QXS-G2-424, and QXS-G2-484 RAID Chassis Rear View (4-Host Port Controllers) on
page 18
• QXS-G2-312 and QXS-G2-324 RAID Chassis Rear View (2-Host Port Controllers) on page 16
NOTE: Controller modules are shipped with SFPs installed (FC or iSCSI per customer order).
CAUTION: Use the set host-port-mode CLI command to set the host interface protocol for CNC
ports using qualified SFP options (if required).
• QXS-G2-312, QXS-G2-324, QXS-G2-412, QXS-G2-424, and QXS-G2-484 models ship with CNC
ports configured for FC or iSCSI per customer order.
• Using FC or iSCSI SFPs in combination is not supported currently.
• If the customer changes SFPs, the CLI (not the disk management utility GUI) must be used to
specify which ports will use FC or iSCSI.
• It is best to do this before inserting the SFPs into the CNC ports (see Change CNC Port Mode on
page 98 for instructions).
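The steps above can be sketched as a short CLI session. The set host-port-mode command name comes from the CAUTION above; treat the exact argument spelling as an assumption and confirm it in the QXS G2 CLI Reference Guide:

```shell
# Hedged sketch: set all CNC ports to one protocol before inserting SFPs.
set host-port-mode FC      # all CNC ports carry Fibre Channel
# ...or...
set host-port-mode iSCSI   # all CNC ports carry iSCSI

# Verify the resulting port configuration.
show ports
```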
Fibre Channel (FC) Protocol
NOTE: An FC protocol connectivity kit must be purchased from Quantum; it contains qualified SFPs
and cables to support the FC protocol.
The QXS-G2-312, QXS-G2-324, QXS-G2-412, QXS-G2-424, and QXS-G2-484 RAID chassis support two
controller modules using the Fibre Channel interface protocol for host connection.
• The QXS-G2-312 and QXS-G2-324 FC controller module provides two host ports.
• The QXS-G2-412, QXS-G2-424, and QXS-G2-484 FC controller module provides four host ports.
• CNC ports are designed for use with an FC SFP supporting data rates up to 16Gb/s.
The controllers support Fibre Channel Arbitrated Loop (public or private) or point-to-point topologies.
Loop protocol can be used in a physical loop or for direct connection between two devices.
Point-to-point protocol is used to connect to a fabric switch. Point-to-point protocol can also be used
for direct connection, and it is the only option supporting direct connection at 16Gb/s.
See the QXS G2 CLI Reference Guide for command syntax and details about parameter settings
relative to supported link speeds. Fibre Channel ports are used for attachment to FC hosts directly, or
through a switch used for the FC traffic. The host computer must support FC and, optionally,
multipath I/O.
The Fibre Channel ports are used in either of two capacities:
• To connect two storage systems through a switch for use of replication.
• For attachment to FC hosts directly, or through a switch used for the FC traffic.
The first usage option requires valid licensing for the replication feature, whereas the second option
requires that the host computer supports Ethernet, FC, and optionally, multipath I/O.
TIP: Use the disk management utility (GUI) to set FC port speed. Within the QXS G2 Disk
Management Utility User Guide, see the topic about configuring host ports. Use the
set host-parameters CLI command to set FC port options, and use the show ports CLI command to
view information about host ports.
10GbE iSCSI Protocol
The QXS-G2-312, QXS-G2-324, QXS-G2-412, QXS-G2-424, and QXS-G2-484 RAID chassis 10GbE iSCSI
controllers support two controller modules using the Internet SCSI interface protocol for host
connection.
• The QXS-G2-312 and QXS-G2-324 10GbE iSCSI controller module provides two host ports.
• The QXS-G2-412, QXS-G2-424, and QXS-G2-484 10GbE iSCSI controller module provides four
host ports.
• CNC ports are designed for use with a 10GbE iSCSI SFP supporting data rates up to 10Gb/s, using
either one-way or mutual CHAP (Challenge-Handshake Authentication Protocol).
The 10GbE iSCSI ports are used in either of two capacities:
• To connect two storage systems through a switch for use of replication.
• For attachment to 10GbE iSCSI hosts directly, or through a switch used for the 10GbE iSCSI traffic.
The first usage option requires valid licensing for the replication feature, whereas the second option
requires that the host computer supports Ethernet, iSCSI, and optionally, multipath I/O.
TIP: See the topic about configuring CHAP in the QXS G2 Disk Management Utility User Guide.
TIP: Use the disk management utility (GUI) to set iSCSI port options. Within the QXS G2 Disk
Management Utility User Guide, see the topic about configuring host ports. Use the
set host-parameters CLI command to set iSCSI port options, and use the show ports CLI command to
view information about host ports.
1Gb iSCSI Protocol
The QXS-G2-312, QXS-G2-324, QXS-G2-412, QXS-G2-424, and QXS-G2-484 RAID chassis 1Gb iSCSI
controllers support two controller modules using the Internet SCSI interface protocol for host port
connection.
• The QXS-G2-312 and QXS-G2-324 1Gb iSCSI controller module provides two host ports.
• The QXS-G2-412, QXS-G2-424, and QXS-G2-484 1Gb iSCSI controller module provides four host
ports.
• The CNC ports are designed for use with an RJ-45 SFP supporting data rates up to 1Gb/s, using
either one-way or mutual CHAP.
The 1Gb iSCSI ports are used in either of two capacities:
• To connect two storage systems through a switch for use of replication.
• For attachment to 1Gb iSCSI hosts directly, or through a switch used for the 1Gb iSCSI traffic.
The first usage option requires valid licensing for the replication feature, whereas the second option
requires that the host computer supports Ethernet, iSCSI, and optionally, multipath I/O.
TIP: See the topic about configuring CHAP in the QXS G2 Disk Management Utility User Guide.
TIP: Use the disk management utility (GUI) to set iSCSI port options. Within the QXS G2 Disk
Management Utility User Guide, see the topic about configuring host ports. Use the
set host-parameters CLI command to set iSCSI port options, and use the show ports CLI command to
view information about host ports.
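One-way or mutual CHAP, mentioned for both iSCSI options above, is configured on the array. As a heavily hedged sketch—the command names below are assumptions drawn from CLIs of this product family, so verify them in the QXS G2 CLI Reference Guide:

```shell
# Hedged sketch: enable CHAP, then register a host initiator's credentials.
set iscsi-parameters chap enabled
create chap-record name iqn.1991-05.com.microsoft:host1 secret 123456abcDEF

# List the configured CHAP records.
show chap-records
```

For mutual CHAP, a secret for the array side would also be defined so the storage system can authenticate itself back to the initiator.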
Host Connection
The QXS-G2-312 and QXS-G2-324 RAID chassis support up to four direct-connect server connections,
two per controller module.
• Use only qualified cables purchased from Quantum for all host connections.
• Connect appropriate cables from the server’s HBAs to the controller module’s host ports as
described below, and shown in the following illustrations.
The QXS-G2-412, QXS-G2-424, and QXS-G2-484 RAID chassis support up to eight direct-connect
server connections, four per controller module.
• Use only qualified cables purchased from Quantum for all host connections.
• Connect appropriate cables from the server’s HBAs to the controller module’s host ports as
described below, and shown in the following illustrations.
Fibre Channel Host Connection
To connect controller modules supporting (4/8/16Gb) FC host interface ports to a server HBA or
switch, using the controller’s CNC ports, select a qualified FC SFP option.
Qualified options support cable lengths of 1 m (3.28'), 2 m (6.56'), 5 m (16.40'), 15 m (49.21'), 30 m
(98.43'), and 50 m (164.04') for OM4 multimode optical cables and OM3 multimode FC cables,
respectively. A 0.5 m (1.64') cable length is also supported for OM3. In addition to providing host
connection, these cables are used for connecting two storage systems via a switch, to facilitate use of
the optional replication feature.
10GbE iSCSI Host Connection
To connect controller modules supporting 10GbE iSCSI host interface ports to a server HBA or switch,
using the controller’s CNC ports, select a qualified 10GbE SFP option.
Qualified options support cable lengths of 0.5 m (1.64'), 1 m (3.28'), 3 m (9.84'), 5 m (16.40'), and 7
m (22.97') for copper cables; and cable lengths of 0.65 m (2.13'), 1 m (3.28'), 1.2 m (3.94'), 3 m
(9.84'), 5 m (16.40'), and 7 m (22.97') for direct attach copper (DAC) cables. In addition to providing
host connection, these cables are used for connecting two storage systems via a switch, to facilitate
use of the optional replication feature.
1Gb iSCSI Host Connection
To connect controller modules supporting 1Gb iSCSI host interface ports to a server HBA or switch,
using the controller’s CNC ports, select a qualified 1Gb RJ-45 copper SFP option supporting (CAT5-E
minimum) Ethernet cables of the same lengths specified for 10GbE iSCSI above. In addition to
providing host connection, these cables are used for connecting two storage systems via a switch, to
facilitate use of the optional replication feature.
Connecting Direct Attach Configurations
A dual-controller configuration improves application availability because in the event of a controller
failure, the affected controller fails over to the healthy partner controller with little interruption to
data flow. A failed controller can be replaced without the need to shut down the storage system. The
QXS-G2-312, QXS-G2-324, QXS-G2-412, QXS-G2-424, and QXS-G2-484 RAID chassis are configured
with dual controller modules.
NOTE: In the examples that follow, a single diagram represents CNC host connections for
QXS-G2-312, QXS-G2-324, QXS-G2-412, QXS-G2-424, and QXS-G2-484 RAID chassis respectively. The
location and sizes of the host ports are very similar. Blue cables show controller A paths and green
cables show controller B paths for host connection.
2U12/2U24 (4-Host Ports) One Server/One HBA/Dual Path
Figure 69 provides an illustration of connecting a 2U12/2U24 to one server/one HBA with dual path.
Figure 69 2U12/2U24 One Server/One HBA/Dual Path
2U12/2U24 (2-Host Ports) One Server/One HBA/Dual Path
Figure 70 provides an illustration of connecting a 2U12/2U24 to one server/one HBA with dual path.
Figure 70 2U12/2U24 One Server/One HBA/Dual Path
5U84 One Server/One HBA/Dual Path
Figure 71 provides an illustration of connecting a 5U84 to one server/one HBA with dual path.
Figure 71 5U84 One Server/One HBA/Dual Path
2U12/2U24 (4-Host Ports) Two Servers/One HBA Per Server/Dual Path
Figure 72 provides an illustration of connecting a 2U12/2U24 to two servers/one HBA per server, with
dual path.
Figure 72 2U12/2U24 Two Servers/One HBA Per Server/Dual Path
2U12/2U24 (2-Host Ports) Two Servers/One HBA Per Server/Dual Path
Figure 73 provides an illustration of connecting a 2U12/2U24 to two servers/one HBA per server, with
dual path.
Figure 73 2U12/2U24 Two Servers/One HBA Per Server/Dual Path
5U84 Two Servers/One HBA Per Server/Dual Path
Figure 74 provides an illustration of connecting a 5U84 to two servers/one HBA per server, with dual
path.
Figure 74 5U84 Two Servers/One HBA Per Server/Dual Path
2U12/2U24 Four Servers/One HBA Per Server/Dual Path
Figure 75 provides an illustration of connecting a 2U12/2U24 to four servers/one HBA per server, with
dual path.
Figure 75 2U12/2U24 Four Servers/One HBA Per Server/Dual Path
5U84 Four Servers/One HBA Per Server/Dual Path
Figure 76 provides an illustration of connecting a 5U84 to four servers/one HBA per server, with dual
path.
Figure 76 5U84 Four Servers/One HBA Per Server/Dual Path
Switch Attach
A switch attach solution—or SAN—places a switch between the servers and the RAID chassis within
the storage system. Using switches, a SAN shares a storage system among multiple servers, reducing
the number of storage systems required for a particular environment. Using switches increases the
number of servers that can be connected to the storage system.
2U12/2U24 (4-Host Ports) Two Servers/Two Switches
Figure 77 provides an illustration of connecting a 2U12/2U24 to two servers and two switches.
Figure 77 2U12/2U24 Two Servers/Two Switches
2U12/2U24 (2-Host Ports) Two Servers/Two Switches
Figure 78 provides an illustration of connecting a 2U12/2U24 to two servers and two switches.
Figure 78 2U12/2U24 Two Servers/Two Switches
5U84 Two Servers/Two Switches
Figure 79 provides an illustration of connecting a 5U84 to two servers and two switches.
Figure 79 5U84 Two Servers/Two Switches
2U12/2U24 Four Servers/Multiple Switches/SAN Fabric
Figure 80 provides an illustration of connecting a 2U12/2U24 to four servers, multiple switches, and a
SAN fabric.
Figure 80 2U12/2U24 Four Servers/Multiple Switches/SAN Fabric
5U84 Four Servers/Multiple Switches/SAN Fabric
Figure 81 provides an illustration of connecting a 5U84 to four servers, multiple switches, and a SAN
fabric.
Figure 81 5U84 Four Servers/Multiple Switches/SAN Fabric
Connecting Management Host to Network
The management host directly manages storage systems out-of-band over an Ethernet network.
1 Connect an RJ-45 Ethernet cable to the network port on each controller (Figure 82 and Figure 83).
2 Connect the other end of each Ethernet cable to a network that your management host can
access (preferably on the same subnet).
3 Do not interconnect iSCSI and management Ethernet on the same network.
Figure 82 2U RAID Chassis to Management Network
Figure 83 5U84 RAID Chassis to Management Network
NOTE: Connections to this device must be made with shielded cables—grounded at both ends with
metallic RFI/EMI connector hoods—in order to maintain compliance with FCC Rules and Regulations.
- If you connect the iSCSI and management ports to the same physical switches, separate VLANs are
recommended.
- See also the topic about configuring controller network ports in the Storage Management Guide.
Updating Firmware
This section provides the following information:
• Important Firmware Notes
• Verifying Firmware
Important Firmware Notes
CAUTION: Reverting to a previous firmware version is not recommended. Contact Quantum support
for additional information.
The current firmware release is available from Quantum.
Always update controller firmware to the latest release when:
• Installing a new system
• Adding expansion chassis
• Replacing a RAID chassis or an expansion chassis
• Replacing a controller module(s)
• Controller replacement should auto-level the firmware when PFU is enabled.
• Refer to RAID Controller on page 185 for CRU and PFU information.
• Replacing an expansion I/O module(s)
NOTE: Updating controller firmware with expansion I/O modules active ensures that the controller
firmware and expansion I/O module(s) firmware are at a compatible level.
Verify drive firmware and update it using the disk management utility.
Verifying Firmware
NOTE: Refer to the QXS G2 G265xxxx Release Notes for the latest supported firmware for the
applicable system you are installing.
After installing the hardware and powering on the storage system components for the first time,
verify that the controller modules, expansion IOMs, and disk drives are using the current firmware
release.
• Using the disk management utility (GUI), in the System topic, select Action > Update Firmware.
• The Update Firmware panel opens.
• The Update Controller Module tab shows versions of firmware components currently installed
in each controller.
NOTE: The disk management utility (GUI) provides an option for enabling or disabling Partner
Firmware Update for the partner controller.
• To enable or disable the setting, use the set advanced-settings CLI command, and set the
partner-firmware-upgrade parameter.
• See the QXS G2 CLI Reference Guide for more information about command parameter syntax.
Optionally, you can update firmware using FTP or SFTP as described in the QXS G2 Disk Management
Utility User Guide.
See the topic about updating firmware in the QXS G2 Disk Management Utility User Guide before
performing a firmware update. Partner Firmware Update (PFU) is enabled by default on QXS-G2-412,
QXS-G2-424, and QXS-G2-484 systems.
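The set advanced-settings command and partner-firmware-upgrade parameter named above can be sketched as follows; the value spellings are assumptions to confirm against the QXS G2 CLI Reference Guide:

```shell
# Hedged sketch: toggle Partner Firmware Update (PFU) from the CLI.
set advanced-settings partner-firmware-upgrade enabled

# ...or, to disable automatic leveling of the partner controller:
set advanced-settings partner-firmware-upgrade disabled
```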
Obtaining IP Values and System Settings
You can manually set static IP values (default method) for each controller, or you can specify that IP
values should be set automatically for both controllers through communication with a Dynamic Host
Configuration Protocol (DHCP) server.
See the topic about configuring network ports in the QXS G2 Disk Management Utility User Guide.
This section provides the following information:
• Updating System Settings using the Disk Management Utility (DMU) GUI
• Setting Network Port IP Addresses Using DHCP
• Setting Network Port IP Addresses Using CLI Port and Cable
Updating System Settings using the Disk Management Utility (DMU) GUI
Network ports on controller module A and controller module B are configured with the following
default values:
• Network port IP address: 10.0.0.2 (controller A), 10.0.0.3 (controller B)
• IP subnet mask: 255.255.255.0
• Gateway IP address: 10.0.0.1
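Replacing the defaults above from the CLI might look like the following sketch. The set network-parameters command and its argument names are assumptions drawn from CLIs of this product family; verify them, and substitute addresses from your network administrator, before use:

```shell
# Hedged sketch: assign static management IPs to each controller
# (example addresses only).
set network-parameters ip 192.168.0.10 netmask 255.255.255.0 gateway 192.168.0.1 controller a
set network-parameters ip 192.168.0.11 netmask 255.255.255.0 gateway 192.168.0.1 controller b

# Confirm the new settings.
show network-parameters
```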
NOTE: Refer to the QXS G2 Disk Management Utility User Guide for additional information.
Complete only the steps that are needed. You may only need to set up the IP addresses so that your
system administrator can access and configure the system. You may also need to consult your system
administrator to complete the applicable steps for this installation.
Complete the applicable steps to update the system settings:
1 Log in to the disk management utility, GUI.
2 Click on System > Action > System Settings.
NOTE: When you change any setting in the System Settings panel, the Apply and the Apply and
Close buttons become active. To save your changes and continue changing other settings, click
Apply. To save your changes and exit, click Apply and Close.
The following screen appears.
For additional information, click the Help icon (the “?” at the top-right of the screen); the following
screen appears.
NOTE: There are hyperlinks that take you directly to help content corresponding to the eight tabs on
the left side of the System Settings panel.
3 Set the date and time so that entries in system logs and notifications have correct time stamps.
• You can set the date and time manually or configure the system to use NTP to obtain them from a
network-attached server.
• When NTP is enabled, and if an NTP server is available, the system time and date can be obtained
from the NTP server.
• This allows multiple storage devices, hosts, and log files to be synchronized.
4 Click on “Manage Users”.
To secure the storage system, set a new password for each default user.
• A password is case sensitive and can have 8-32 characters.
• If the password contains only printable ASCII characters, then it must contain at least one
uppercase character, one lowercase character, and one non-alphabetic character.
• A password can include printable UTF-8 characters except for the following: a space or “ , < > \