AssuredSAN 6004 Series Setup Guide
For firmware release G222
Abstract
This document describes initial hardware setup for AssuredSAN 6004 Series controller enclosures, and is intended for use by storage system administrators familiar with servers and computer networks, network administration, storage system installation and configuration, storage area network management, and relevant protocols.
P/N 83-00006900-10-01 Revision A January 2016
Copyright © 2016 Dot Hill Systems Corp. All rights reserved. Dot Hill Systems Corp., Dot Hill, the Dot Hill logo, and the AssuredSAN logo are trademarks of Dot Hill Systems Corp. All other trademarks and registered trademarks are proprietary to their respective owners.
The material in this document is for information only and is subject to change without notice. While reasonable efforts have been made in the preparation of this document to assure its accuracy, changes in the product design can be made without reservation and without notification to its users.

Contents

About this guide. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
AssuredSAN 6004 Series enclosure user interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
CNC ports used for host connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
HD mini-SAS ports used for host connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Intended audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Prerequisites. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Related documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Document conventions and symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Front panel components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
48-drive enclosure front panel components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2U48 enclosure bezel installed. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2U48 enclosure bezel removed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2U48 enclosure drawers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
56-drive enclosure front panel components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4U56 enclosure bezel installed. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
4U56 enclosure bezel removed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4U56 enclosure drawers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Disk drives used in 6004 Series enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Controller enclosure — rear panel layout. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2U48 controller enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4U56 controller enclosure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
6844/6854 CNC controller module — rear panel components . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
6544/6554 SAS controller module — rear panel components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Supported drive enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
J6X48 drive enclosure rear panel components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
J6X56 drive enclosure rear panel components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
J6X56 expansion module — rear panel components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Component installation and replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
CompactFlash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Supercapacitor pack. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2 Installing the enclosures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Installation checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Populating drawers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Populating drawers in 2U48 enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Opening and closing a 2U16 drawer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Aligning an AMS or disk module for installation into a 2U16 drawer . . . . . . . . . . . . . . . . . . . . . . 28
Installing an AMS into a 2U16 drawer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Installing a disk module into a 2U16 drawer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Populating drawers in 4U56 enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Opening and closing a 4U28 drawer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Aligning a disk module for installation into a 4U28 drawer . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Installing a disk module into a 4U28 drawer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Removing a disk module from a 4U28 drawer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
FDE considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Network Equipment-Building System (NEBS) Level 3 compliance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Generic Requirements (GRs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Exceptions to GRs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Product documentation requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Connecting the controller enclosure and drive enclosures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Connecting the 6004 Series controller to the 2U48 drive enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Connecting the 6004 Series controller to the 4U56 drive enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Connecting the 6004 Series controller to mixed model drive enclosures . . . . . . . . . . . . . . . . . . . . . . . 37
Cable requirements for storage enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Summary of drive enclosure cabling illustrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Testing enclosure connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Powering on/powering off . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Power supplies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
AC PSU (2U) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Connect power cord to AC power supply . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
AC PSU (4U) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
DC PSU (4U) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3 Connecting hosts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Host system requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Cabling considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Connecting the enclosure to hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
CNC technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Fibre Channel protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
10GbE iSCSI protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
1 Gb iSCSI protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
HD mini-SAS technology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
12 Gb HD mini-SAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Connecting direct attach configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Fibre Channel host connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
10GbE iSCSI host connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
1 Gb iSCSI host connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
HD mini-SAS host connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Dual-controller configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Connecting switch attach configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Dual-controller configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Connecting a management host on the network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Connecting two storage systems to replicate volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Cabling for replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
CNC ports and replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Dual-controller configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Updating firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Obtaining IP values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Setting network port IP addresses using DHCP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Setting network port IP addresses using the CLI port and cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Change the CNC port mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Set CNC port mode to iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Set CNC port mode to FC and iSCSI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Configure the system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4 Basic operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Accessing the SMC or RAIDar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Configuring and provisioning the storage system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
USB CLI port connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Fault isolation methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Basic steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Options available for performing basic steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Use the SMC or RAIDar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Use the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Monitor event notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
View the enclosure LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Performing basic steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Gather fault information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Determine where the fault is occurring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Review the event logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Isolate the fault . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
If the enclosure does not initialize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Correcting enclosure IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Stopping I/O . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Diagnostic steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Is the enclosure front panel Fault/Service Required LED amber?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Is the controller back panel FRU OK LED off?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Is the controller back panel Fault/Service Required LED amber? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Is the drawer panel Fault/Service Required LED amber? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Are both disk drive module LEDs off? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Is the 2U48 disk drive module LED blinking blue (4 Hz blink rate)? . . . . . . . . . . . . . . . . . . . . . . . . 79
Is the 4U56 disk drive module Fault LED amber? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Is a connected host port Host Link Status LED off?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Is a connected port Expansion Port Status LED off? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Is a connected port’s Network Port link status LED off? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Is the fan control module Fault/Service Required LED amber? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Is the power supply Input Power Source LED off? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Is the Voltage/Fan Fault/Service Required LED amber? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Controller failure in a single-controller configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
If the controller has failed or does not start, is the Cache Status LED on/blinking? . . . . . . . . . . . . . . . . 82
Transporting cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Isolating a host-side connection fault . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Host-side connection troubleshooting featuring CNC ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Host-side connection troubleshooting featuring SAS host ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Isolating a controller module expansion port connection fault . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Isolating replication faults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Cabling for replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Replication setup and verification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
Diagnostic steps for replication setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Virtual replication using the SMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Linear replication using RAIDar. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Resolving voltage and temperature warnings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Sensor locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Power supply sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Cooling fan sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Temperature sensors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Power supply module voltage sensors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
A LED descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Front panel LEDs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Enclosure bezels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Enclosure bezel attachment and removal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Enclosure bezel attachment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Enclosure bezel removal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
48-drive enclosure front panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
LEDs visible with enclosure bezel installed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
LEDs visible with enclosure bezel removed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Disk drive LED (2U48) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
56-drive enclosure front panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
LEDs visible with enclosure bezel installed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
LEDs visible with enclosure bezel removed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Disk drive LEDs (4U56). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Controller enclosure — rear panel layout. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
2U48 controller enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4U56 controller enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6844/6854 CNC controller module — rear panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
6544/6554 SAS controller module—rear panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Cache Status LED details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Power supply LEDs for supported PSUs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Fan control module LEDs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Optional drive enclosures for storage expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
J6X48 drive enclosure rear panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
J6X56 drive enclosure rear panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
B Specifications and requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Safety requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Site requirements and guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Site wiring and AC power requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Site wiring and DC power requirements (4U56) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Weight and placement guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Electrical guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
Ventilation requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Cabling requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Management host requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Physical requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Environmental requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Electrical requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Site wiring and power requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Power cable requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
C Electrostatic discharge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Preventing electrostatic discharge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Grounding methods to prevent electrostatic discharge. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
D USB device connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Rear panel USB ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
USB CLI port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Emulated serial port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Supported host applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Command-line Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Device driver/special operation mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Microsoft Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Obtaining the software download. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Setting parameters for the device driver. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Using the CLI port and cable—known issues on Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Workaround . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
E SFP option for CNC ports. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Locate the SFP transceivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Install an SFP transceiver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Verify component operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Figures

1 2U48 enclosure: front panel with bezel installed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2 2U48 enclosure: front panel with bezel removed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3 2U48 enclosure: drawer front and side views with disk slot numbering. . . . . . . . . . . . . . . . . . . . . . . 14
4 2U48 enclosure: sample drawer population . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
5 4U56 enclosure: front panel with bezel installed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
6 4U56 enclosure: front panel with bezel removed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
7 4U56 enclosure: drawer front and side views with disk slot numbering. . . . . . . . . . . . . . . . . . . . . . . 17
8 4U56 disk drive module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
9 6004 Series controller enclosure: rear panel (2U) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
10 6004 Series controller enclosure: rear panel (4U) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
11 6844/6854 controller module face plate (FC or 10GbE iSCSI). . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
12 6844/6854 controller module face plate (1 Gb RJ-45) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
13 6544/6554 controller module face plate (HD mini-SAS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
14 Drive enclosure rear panel view (2U form factor) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
15 Drive enclosure rear panel view (4U form factor) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
16 6554 controller module face plate (J6X56 HD mini-SAS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
17 CompactFlash card . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
18 Opening a 2U16 drawer: loosen the drawer stop screw . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
19 Opening a 2U16 drawer: revolve the handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
20 Opening and closing a 2U16 drawer: pull or push drawer along slide . . . . . . . . . . . . . . . . . . . . . . 28
21 Align AMS or disk module for installation into the 2U16 drawer . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
22 Orient the AMS for installation (2U48) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
23 Secure the AMS into the disk bay (2U48) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
24 AMS insert for a single disk slot (2U48) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
25 Orient the disk for installation (2U48). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
26 Secure the disk module into the drive slot (2U48). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
27 Opening a 4U28 drawer: loosen screws and move the drawer toggle . . . . . . . . . . . . . . . . . . . . . . . 31
28 Opening a 4U28 drawer: revolve the handle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
29 Align the disk module for installation into an open 4U28 drawer . . . . . . . . . . . . . . . . . . . . . . . . . . 33
30 Install a disk into a drawer slot (4U56) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
31 Remove a disk from a drawer slot (4U56) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
32 Drawer 0 with full complement of disks (4U56) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
33 Drawer 1 with full complement of disks (4U56) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
34 Cabling connections between a 2U controller enclosure and one 2U drive enclosure . . . . . . . . . . . . . 39
35 Cabling connections between a 2U controller enclosure and one 4U drive enclosure . . . . . . . . . . . . . 39
36 Cabling connections between a 4U controller enclosure and a 4U drive enclosure . . . . . . . . . . . . . . 40
37 Cabling connections between a 4U controller enclosure and a 2U drive enclosure . . . . . . . . . . . . . . 40
38 Fault-tolerant cabling between a dual-IOM 2U enclosure and three 2U drive enclosures . . . . . . . . . . . 41
39 Reverse cabling between a dual-controller 2U enclosure and three 4U drive enclosures . . . . . . . . . . . 42
40 Straight-through cabling between a dual-controller 2U enclosure and three 4U drive enclosures . . . . . 42
41 Reverse cabling between a dual-controller 2U enclosure and mixed drive enclosures . . . . . . . . . . . . . 43
42 Reverse-cabling between a dual-controller 4U enclosure and three 4U drive enclosures . . . . . . . . . . . 43
43 Straight-through cabling between a dual-controller 4U enclosure and three 4U drive enclosures . . . . . 44
44 Reverse-cabling between a dual-controller 4U enclosure and three 2U drive enclosures . . . . . . . . . . . 44
45 Straight-through cabling between a dual-controller 4U enclosure and three 2U drive enclosures . . . . . 45
46 Reverse-cabling between a dual-controller 4U enclosure and mixed-model drive enclosures. . . . . . . . . 45
47 AC PSU (2U). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
48 AC power cord connect (2U). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
49 AC PSU with power switch (4U). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
50 DC PSU with power switch (4U). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
51 DC power cable featuring 2-circuit header and lug connectors (4U). . . . . . . . . . . . . . . . . . . . . . . . . 49
52 Connecting hosts: direct attach—one server/one HBA/dual path (2U) . . . . . . . . . . . . . . . . . . . . . . . 55
53 Connecting hosts: direct attach—one server/one HBA/dual path (4U CNC) . . . . . . . . . . . . . . . . . . . 55
54 Connecting hosts: direct attach—one server/one HBA/dual path (4U HD mini-SAS) . . . . . . . . . . . . . 55
55 Connecting hosts: direct attach—two servers/one HBA per server/dual path (2U) . . . . . . . . . . . . . . . 56
56 Connecting hosts: direct attach—two servers/one HBA per server/dual path (4U CNC). . . . . . . . . . . 56
57 Connecting hosts: direct attach—two servers/one HBA per server/dual path (4U HD mini-SAS) . . . . . 56
58 Connecting hosts: direct attach—four servers/one HBA per server/dual path (2U). . . . . . . . . . . . . . . 57
59 Connecting hosts: direct attach—four servers/one HBA per server/dual path (4U CNC) . . . . . . . . . . 57
60 Connecting hosts: direct attach—four servers/one HBA per server/dual path (4U HD mini-SAS) . . . . . 58
61 Connecting hosts: switch attach—two servers/two switches (2U CNC) . . . . . . . . . . . . . . . . . . . . . . . 58
62 Connecting hosts: switch attach—two servers/two switches (4U CNC) . . . . . . . . . . . . . . . . . . . . . . . 59
63 Connecting hosts: switch attach—four servers/multiple switches/SAN fabric (2U CNC) . . . . . . . . . . . 59
64 Connecting hosts: switch attach—four servers/multiple switches/SAN fabric (4U CNC) . . . . . . . . . . . 60
65 Connecting two storage systems for replication: multiple servers/one switch/one location . . . . . . . . . 63
66 Connecting two storage systems for replication: multiple servers/one switch/one location . . . . . . . . . 64
67 Connecting two storage systems for replication: multiple servers/switches/one location . . . . . . . . . . . 65
68 Connecting two storage systems for replication: multiple servers/switches/one location . . . . . . . . . . . 65
69 Connecting two storage systems for replication: multiple servers/switches/two locations . . . . . . . . . . 66
70 Connecting two storage systems for replication: multiple servers/switches/two locations . . . . . . . . . . 66
71 Connecting two storage systems for replication: multiple servers/SAN fabric/two locations . . . . . . . . 67
72 Connecting a USB cable to the CLI port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
73 Front panel enclosure bezel: 48-drive enclosure (2U48) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
74 Front panel enclosure bezel: 56-drive enclosure (4U56) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
75 Partial assembly showing bezel alignment with 2U48 chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
76 Drawer detail showing handle rotation and drawer travel (2U48) . . . . . . . . . . . . . . . . . . . . . . . . . . 98
77 Partial assembly showing bezel alignment with 4U56 chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
78 Drawer detail showing handle rotation and drawer travel (4U56) . . . . . . . . . . . . . . . . . . . . . . . . . . 99
79 LEDs: 2U48 enclosure front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
80 LEDs: 2U48 drawer front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
81 LEDs: Disk drive modules (2U48) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
82 LEDs: 4U56 enclosure front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
83 LEDs: 4U56 drawer front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
84 LEDs: Disk drive modules (4U56) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
85 6004 Series controller enclosure: rear panel (2U) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
86 6004 Series controller enclosure: rear panel (4U) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
87 LEDs: 6844/6854 CNC controller module (FC and 10GbE SFPs) . . . . . . . . . . . . . . . . . . . . . . . . . 110
88 LEDs: 6844/6854 CNC controller module (1 Gb RJ-45 SFPs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
89 LEDs: 6544/6554 SAS controller module (HD mini-SAS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
90 LEDs: Power supply units — rear panel (2U48) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
91 LEDs: Power supply units — rear panel (4U56) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
92 LEDs: Fan control module — rear panel (4U56). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
93 LEDs: J6X48 drive enclosure — rear panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
94 LEDs: J6X56 drive enclosure — rear panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
95 Rackmount enclosure dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
96 USB device connection — CLI port. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
97 Install a qualified SFP option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

Tables

1 Related documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2 Document conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3 Installation checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4 Summary of drive enclosures supported by controller enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
5 Summary of cabling connections for AssuredSAN 6004 Series storage enclosures . . . . . . . . . . . . . . . 38
6 Terminal emulator display settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
7 Terminal emulator connection settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
8 Diagnostics LED status: Front panel “Fault/Service Required” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
9 Diagnostics LED status: Rear panel “FRU OK” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
10 Diagnostics LED status: Rear panel “Fault/Service Required” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
11 Diagnostics LED status: Drawer panel “Fault/Service Required” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
12 Diagnostics LED status: Drawer slot “Disk Power/Activity” and “Disk Fault Status” . . . . . . . . . . . . . . . . 79
13 Diagnostics LED status: Disk drive fault status (2U48 modules) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
14 Diagnostics LED status: Disk drive fault status (4U56 modules) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
15 Diagnostics LED status: Rear panel “Host Link Status” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
16 Diagnostics LED status: Rear panel “Expansion Port Status” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
17 Diagnostics LED status: Rear panel “Network Port Link Status” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
18 Diagnostics LED status: Rear panel fan control module “Fault/Service Required” (4U56). . . . . . . . . . . . 81
19 Diagnostics LED status: Rear panel power supply “Input Power Source” . . . . . . . . . . . . . . . . . . . . . . . 81
20 Diagnostics LED status: Rear panel power supply “Voltage/Fan Fault/Service Required” . . . . . . . . . . . 81
21 Diagnostics LED status: Rear panel “Cache Status”. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
22 Diagnostics for replication setup: Using the replication feature (v3) . . . . . . . . . . . . . . . . . . . . . . . . . . 87
23 Diagnostics for replication setup: Viewing information about remote links (v3) . . . . . . . . . . . . . . . . . . . 88
24 Diagnostics for replication setup: Creating a replication set (v3) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
25 Diagnostics for replication setup: Replicating a volume (v3). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
26 Diagnostics for replication setup: Checking for a successful replication (v3). . . . . . . . . . . . . . . . . . . . . 89
27 Diagnostics for replication setup: Using the replication feature (v2) . . . . . . . . . . . . . . . . . . . . . . . . . . 90
28 Diagnostics for replication setup: Viewing information about remote links (v2). . . . . . . . . . . . . . . . . . . 90
29 Diagnostics for replication setup: Creating a replication set (v2) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
30 Diagnostics for replication setup: Replicating a volume (v2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
31 Diagnostics for replication setup: Viewing a replication image (v2) . . . . . . . . . . . . . . . . . . . . . . . . . . 93
32 Diagnostics for replication setup: Viewing a remote system (v2). . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
33 Power supply sensor descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
34 Cooling fan sensor descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
35 Controller module temperature sensor descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
36 Power supply temperature sensor descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95
37 Voltage sensor descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
38 LEDs: Disks in 2U48 enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
39 LEDs: Disk groups in 2U48 enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .103
40 LEDs: Disks in LFF enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
41 LEDs: Disk groups in LFF enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
42 Power requirements - AC Input. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
43 Power requirements - DC Input. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
44 Rackmount controller enclosure weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .122
45 Rackmount compatible drive enclosure weights (ordered separately) . . . . . . . . . . . . . . . . . . . . . . . . 122
46 Operating environmental specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .122
47 Non-operating environmental specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
48 Supported terminal emulator applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .126
49 USB vendor and product identification codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

About this guide

Overview

This guide provides information about initial hardware setup for the AssuredSAN™ 6004 Series storage enclosure products listed below:
CNC (Converged Network Controller) controller enclosure: 6844/6854
• Qualified Fibre Channel (4/8/16 Gb) SFP option
• Qualified Internet SCSI (10GbE) SFP option
• Qualified Internet SCSI (1 Gb) Copper RJ-45 SFP option
HD mini-SAS (12 Gb) controller enclosure: 6544/6554
AssuredSAN 6004 Series provides high-capacity storage enclosures supporting 48 2.5" small form factor (SFF) disks in a 2U chassis and 56 3.5" large form factor (LFF) disks in a 4U chassis. The chassis form factor supports controller enclosures and expansion enclosures. Storage enclosures are equipped with dual I/O modules (IOMs) and with either two AC or two DC power supply modules. The 6004 Series controller enclosures can optionally be cabled to supported drive enclosures for adding storage.
See the Dot Hill web site for more information about specific storage product models and uses:
https://www.dothill.com
AssuredSAN 6004 Series enclosures support both traditional linear storage and virtual storage, which uses paged-storage technology. For linear storage, a group of disks with an assigned RAID level is called a
vdisk or linear disk group. For virtual storage, a group of disks with an assigned RAID level is called a virtual disk group. This guide uses the term vdisk when specifically referring to linear storage, and uses the
term disk group otherwise.

AssuredSAN 6004 Series enclosure user interfaces

The 6004 Series enclosures support two versions of the web-based application for configuring, monitoring, and managing the storage system. Both web-based GUI versions (v3 and v2) and the command-line interface are briefly described below:
• Storage Management Console (SMC) is the primary web interface (v3) to manage virtual storage.
• RAIDar is a secondary web interface (v2) to manage linear storage. This legacy interface provides certain functionality that is not available in the primary interface.
• The command-line interface (CLI) enables you to interact with the storage system using command syntax entered via the keyboard or scripting. You can set a CLI preference to use v3 commands to manage virtual storage or v2 commands to manage linear storage.
NOTE: For more information about enclosure user interfaces, see the following:
• AssuredSAN Storage Management Guide or online help, which describes the SMC (v3) and RAIDar (v2) GUIs
• AssuredSAN CLI Reference Guide

CNC ports used for host connection

AssuredSAN 6844/6854 models use Converged Network Controller (CNC) technology, allowing you to select the desired host interface protocol from the available Fibre Channel (FC) or Internet SCSI (iSCSI) host interface protocols supported by the system. You can use the Command-line Interface (CLI) to set all controller module CNC ports to use one of these host interface protocols:
• 16 Gb FC
• 8 Gb FC
• 4 Gb FC
• 10GbE iSCSI
• 1 GbE iSCSI
Alternatively, you can use the CLI to set CNC ports to support a combination of host interface protocols. When configuring a combination of host interface protocols, host ports 0 and 1 are set to FC (either both 16 Gbit/s or both 8 Gbit/s), and host ports 2 and 3 must be set to iSCSI (either both 10GbE or both 1 Gbit/s), provided the CNC ports use the qualified SFP connectors and cables required for supporting the selected host interface protocol. See CNC technology on page 51 and 6844/6854 CNC controller module — rear panel LEDs on page 110 for more information.
TIP: See the Storage Management Guide for information about configuring CNC ports with host interface protocols of the same type or a combination of types.
IMPORTANT: AssuredSAN 6844/6854 models ship with CNC ports initially configured for FC. When connecting CNC ports to iSCSI hosts, you must use the CLI (not the SMC or RAIDar) to specify which ports will use iSCSI. It is best to do this before inserting the iSCSI SFPs into the CNC ports (see Change the CNC port mode on page 71 for instructions).
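As a sketch of the port-mode rules above, the following Python checker encodes the stated constraints: all four CNC ports on one protocol, or a combined layout with ports 0 and 1 set to FC at a matching 16 Gb or 8 Gb speed and ports 2 and 3 set to iSCSI at a matching 10GbE or 1 GbE speed. The function name and data layout are illustrative only; the actual configuration is performed with the CLI commands documented in the CLI Reference Guide.

```python
# Illustrative checker for CNC host-port protocol layouts; names and data
# shapes are hypothetical, not part of the product CLI.

FC_SPEEDS = {"16Gb", "8Gb", "4Gb"}
ISCSI_SPEEDS = {"10GbE", "1GbE"}

def valid_cnc_layout(ports):
    """ports: (protocol, speed) tuples for host ports 0-3, in order."""
    if len(ports) != 4:
        return False
    protocols = {proto for proto, _ in ports}
    if protocols == {"FC"}:                 # all four ports FC
        return all(speed in FC_SPEEDS for _, speed in ports)
    if protocols == {"iSCSI"}:              # all four ports iSCSI
        return all(speed in ISCSI_SPEEDS for _, speed in ports)
    # Combined mode: ports 0/1 are FC (both 16 Gb or both 8 Gb),
    # and ports 2/3 are iSCSI (both 10GbE or both 1 GbE).
    p0, p1, p2, p3 = ports
    return (p0[0] == p1[0] == "FC"
            and p0[1] == p1[1] and p0[1] in {"16Gb", "8Gb"}
            and p2[0] == p3[0] == "iSCSI"
            and p2[1] == p3[1] and p2[1] in ISCSI_SPEEDS)
```

For example, a layout of two 16 Gb FC ports followed by two 10GbE iSCSI ports passes the check, while mismatched speeds on a port pair do not.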

HD mini-SAS ports used for host connection

The AssuredSAN 6544/6554 models provide four high-density mini-SAS (HD mini-SAS) ports per controller module. The HD mini-SAS host interface protocol uses the SFF-8644 external connector interface defined for SAS 3.0 to support a link rate of 12 Gbit/s using the qualified connectors and cable options. See 6544/6554 SAS controller module — rear panel LEDs on page 112 for more information.

Intended audience

This guide is intended for storage system administrators.

Prerequisites

Prerequisites for installing and using this product include knowledge of:
• Servers and computer networks
• Network administration
• Storage system installation and configuration
• Storage area network (SAN) management and direct attach storage (DAS)
• Fibre Channel (FC), Internet SCSI (iSCSI), Serial Attached SCSI (SAS), and Ethernet protocols

Related documentation

Table 1 Related documents
For information about | See
Enhancements, known issues, and late-breaking information not included in product documentation | Release Notes
Overview of product shipkit contents and setup tasks | Getting Started*
Regulatory compliance and safety and disposal information | AssuredSAN Product Regulatory Compliance and Safety*
Using a rackmount bracket kit to install an enclosure into a rack | AssuredSAN Rackmount Bracket Kit Installation* document pertaining to the specific enclosure model
Attaching or removing an enclosure bezel, and servicing the optional air filter | AssuredSAN Enclosure Bezel Kit Installation* document pertaining to the specific enclosure model
Table 1 Related documents (continued)
For information about | See
Identifying and installing or replacing field-replaceable units (FRUs) | AssuredSAN 6004 Series FRU Installation and Replacement Guide
Obtaining and installing a license to use licensed features | AssuredSAN Obtaining and Installing a License
Using the v3 and v2 web interfaces to configure and manage the product | AssuredSAN Storage Management Guide
Using the command-line interface (CLI) to configure and manage the product | AssuredSAN CLI Reference Guide
Event codes and recommended actions | AssuredSAN Event Descriptions Reference Guide
* Printed document included in product shipkit.
For additional information, see Dot Hill’s Customer Resource Center web site: https://crc.dothill.com.

Document conventions and symbols

Table 2 Document conventions
Convention | Element
Blue text | Cross-reference links and e-mail addresses
Blue, underlined text | Web site addresses
Bold text | Key names; text typed into a GUI element, such as into a box; GUI elements that are clicked or selected, such as menu and list items, buttons, and check boxes
Italic text | Text emphasis
Monospace text | File and directory names; system output; code; text typed at the command-line
Monospace, italic text | Code variables; command-line variables
Monospace, bold text | Emphasis of file and directory names, system output, code, and text typed at the command-line
CAUTION: Indicates that failure to follow directions could result in damage to equipment or data.
IMPORTANT: Provides clarifying information or specific instructions.
NOTE: Provides additional information.
TIP: Provides helpful hints and shortcuts.
1 Components

Front panel components

AssuredSAN 6004 Series supports storage enclosures in dual-purpose fashion. For each form factor, the chassis can be configured as either a controller enclosure or an expansion enclosure:
2U48 high-capacity chassis—supports 48 2.5" small form factor (SFF) disks (up to 16 disks per drawer)
(see 48-drive enclosure front panel components on page 13)
4U56 high-capacity chassis—supports 56 3.5" large form factor (LFF) disks (up to 28 disks per drawer)
(see 56-drive enclosure front panel components on page 16)
Supported expansion enclosures are used for adding storage (see Table 4 on page 37 and Table 5 on page 38). Enclosures (except for the 2U48, which supports AC PSUs only) are equipped with either two redundant AC or two redundant DC power supply modules (see Controller enclosure — rear panel layout on page 19).
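The drawer arithmetic above can be sketched as follows. The helper names are illustrative, and the slot-range calculation assumes slots are numbered consecutively across drawers, as shown in the drawer figures later in this chapter.

```python
# Drawer/disk arithmetic for the two chassis form factors; helper names
# are illustrative, not product terminology.

CHASSIS = {
    "2U48": {"drawers": 3, "disks_per_drawer": 16},  # 48 SFF disks total
    "4U56": {"drawers": 2, "disks_per_drawer": 28},  # 56 LFF disks total
}

def total_disks(model):
    """Total disk slots in an enclosure of the given chassis type."""
    spec = CHASSIS[model]
    return spec["drawers"] * spec["disks_per_drawer"]

def slot_range(model, drawer):
    """First and last enclosure slot numbers held by a drawer, assuming
    slots are numbered consecutively across drawers."""
    spec = CHASSIS[model]
    if not 0 <= drawer < spec["drawers"]:
        raise ValueError("no such drawer")
    first = drawer * spec["disks_per_drawer"]
    return first, first + spec["disks_per_drawer"] - 1
```

For example, `slot_range("4U56", 1)` yields the (28, 55) range printed on the drawer 1 label of a 4U56 enclosure.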

48-drive enclosure front panel components

2U48 enclosure bezel installed
When the enclosure is in operation, the bezel should be installed on the front panel as shown in Figure 1. The enclosure bezel is equipped with an EMI (Electromagnetic Interference) shield to protect the product from electromagnetic interference.
1 Enclosure ID LED
2 Enclosure status LED: Unit Locator
3 Enclosure status LED: Fault/Service Required
4 Enclosure status LED: FRU OK
5 Enclosure status LED: Temperature Fault
Figure 1 2U48 enclosure: front panel with bezel installed
TIP: See Enclosure bezel attachment and removal on page 96 and Figure 75 on page 97 (2U48).
2U48 enclosure bezel removed
(Figure graphics: front panel shown with three drawers and ear LEDs; drawer profiles show disk bays with sequentially numbered disk slots, 0 through 47. Integers atop the drawers indicate the drawer numbering sequence, silk-screened on the bezel.)
To access the drawers, you must remove the enclosure bezel (shown removed in Figure 2).
1 Enclosure ear LEDs (see Figure 1 on page 13)
2 Thumbscrew for securing or accessing drawer
3 Disabled button (used by engineering only)
4 Drawer handle (shown in stowed position)
5 Drawer status LED: FRU OK
6 Drawer status LED: Fault/Service Required
7 Drawer status LED: OK to Remove
8 Drawer status LED: Unit Locator
Figure 2 2U48 enclosure: front panel with bezel removed

2U48 enclosure drawers
You can open the enclosure drawers to access the disks. Drawers 0 and 1 provide access to disk bays from the right side of the drawer; whereas drawer 2 provides access to disk bays from the left side of the drawer. These respective side views—or profiles—are shown below.
Your 2U48 enclosure is shipped with the drawers installed, but they are not populated with disks. Depending on your product configuration, disk bay blanks—also known as Air Management Solution (AMS) inserts—might be installed. Locate the box containing your sledded disks and AMS inserts (if applicable) in preparation for populating the drive slots.
See Table 3 on page 26 for sequential tasks required for successful enclosure installation.
See Populating drawers in 2U48 enclosures on page 27 for instructions on installing disks and AMS
inserts.
Figure 3 2U48 enclosure: drawer front and side views with disk slot numbering
IMPORTANT: Disk drive slot numbering is also provided on the label that is laminated to the sheet metal housing (top face) on each drawer. Refer to the drawer label when installing disks into the drawer.
Figure 4 below provides a sample partial configuration of disk bays within a drawer. The bay on the left is
populated with four disks, whereas the adjacent bay on the right contains an AMS insert to manage air flow within the enclosure to maintain optimal operating temperature.
1 2.5" sledded disk module assembly pictorial (see notes 1 and 2)
2 Disk drive status LED (see Disk drive LED (2U48) on page 102)
3 2.5" disk drive module (installed in 3 of 4 disk bays)
4 Air Management Solution (AMS) insert (installed; see note 3)
5 AMS insert pictorial (see note 3)
Notes:
1 The disk is oriented in the sled such that its PCBA faces upward on the top side of the disk drive module as shown.
2 Electromagnetic interference protection is provided by the EMI shield within the enclosure bezel.
3 The AMS insert spans an entire disk bay (4 slots). A single disk slot AMS is also available.
Figure 4 2U48 enclosure: sample drawer population
IMPORTANT: Empty bays will cause overheating. To avoid overheating, install an AMS insert in any disk bay that does not contain four disk drive modules.
Legacy models use the disk bay AMS insert shown above (spans four disk slots).
New models use the single disk slot AMS insert (See Figure 24 on page 29).
NOTE: Front and rear panel LEDs for 6004 Series enclosures are described in LED descriptions.

56-drive enclosure front panel components

4U56 enclosure bezel installed
When the enclosure is in operation, the bezel should be installed on the front panel as shown in Figure 5. The enclosure bezel is equipped with an EMI (Electromagnetic Interference) shield to protect the product from electromagnetic interference. It also provides a serviceable air filter option for NEBS Level 3 compliance.
1 Enclosure ID LED
2 Enclosure status LED: Unit Locator
3 Enclosure status LED: Fault/Service Required
4 Enclosure status LED: FRU OK
5 Enclosure status LED: Temperature Fault
Figure 5 4U56 enclosure: front panel with bezel installed
TIP: See Enclosure bezel attachment and removal on page 96 and Figure 77 on page 98 (4U56).
NOTE: Front and rear panel LEDs for 6004 Series enclosures are described in LED descriptions.
4U56 enclosure bezel removed
(Figure graphics: drawer 0 holds drives 0–27 and drawer 1 holds drives 28–55, as labeled on each drawer; drawer multiviews show disk rows with sequentially numbered disk slots. Note: Bezel is removed and rails are not installed in this view.)
To access the drawers, you must remove the enclosure bezel (shown removed in Figure 6).
4U56 enclosure drawers
You can open the enclosure drawers to access the disks. Drawers 0 and 1 provide access to disks that are oriented vertically—such that the back face of the disk faces down—and are inserted into the drawer disk slots from above. A single drawer diagram is used to describe the numbering scheme.
Your 4U56 enclosure is shipped with the drawers installed, but they are not populated with disks. Although Air Management Solution (AMS) inserts are used with 2U48 enclosures, they are not used by the 4U56 enclosures. Locate the box containing your sledded disks.
1 Enclosure ear LEDs (see Figure 5 on page 16)
2 Thumbscrews for securing or accessing drawer
3 Drawer handle (shown in stowed position)
4 Drawer status LED: Unit Locator
5 Drawer status LED: OK to Remove
6 Drawer status LED: Fault/Service Required
7 Drawer status LED: FRU OK
Figure 6 4U56 enclosure: front panel with bezel removed
Figure 6 shows the drawer handle in the stowed (closed and locked) position.
Figure 7 4U56 enclosure: drawer front and side views with disk slot numbering
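The drawer/slot numbering key can be sketched as a mapping from an enclosure-wide slot number to a drawer position. The drawer ranges (0–27 and 28–55) come from the drawer labels; the four-rows-of-seven layout per drawer is an illustrative assumption, so confirm actual positions against the label on your drawer.

```python
def locate_4u56(slot):
    """Map an enclosure slot number (0-55) to (drawer, row, position).
    Drawer ranges follow the drawer labels; the 4-rows-of-7 layout per
    drawer is an illustrative assumption."""
    if not 0 <= slot <= 55:
        raise ValueError("4U56 slots are numbered 0-55")
    drawer, drawer_slot = divmod(slot, 28)   # drawer 0: 0-27, drawer 1: 28-55
    row, position = divmod(drawer_slot, 7)   # assumed 4 rows of 7 disks
    return drawer, row, position
```

For example, slot 28 maps to the first position of drawer 1, matching the paired numbering shown on the drawer labels.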
See Table 3 on page 26 for sequential tasks required for successful enclosure installation.
(Figure graphics for Figure 8: front view of the sledded 3.5" LFF disk drive module, and the disk module oriented for insertion into the drawer with the PCBA connector visible at the base of the module; see the table below for LED behaviors. Electromagnetic interference protection is provided by the EMI shield in the enclosure bezel.)
See Populating drawers in 4U56 enclosures on page 31 for instructions on installing disk modules.
IMPORTANT: Disk drive slot numbering is also provided on the label that is applied to the sheet metal housing (side face) on each drawer. Refer to the drawer label when installing disks into the drawer.
Figure 8 below provides two different view orientations of the disk drive module used in 4U56 enclosures.
1 3.5" sledded disk module assembly (front view)
2 Disk drive LEDs (see Disk drive LEDs (4U56) on page 106)
Figure 8 4U56 disk drive module

Disk drives used in 6004 Series enclosures

6004 Series enclosures support LFF/SFF Midline SAS, LFF/SFF Enterprise SAS, and SFF SSD disks. They also support LFF/SFF Midline SAS and LFF/SFF Enterprise self-encrypting disks that work with the Full Disk Encryption (FDE) feature. For information about creating disk groups and adding spares using different disk drive types, see the Storage Management Guide or online help. Also see FDE considerations on page 35.
3 3.5" disk module aligned for insertion into drawer
(AMS inserts are not used in blank disk slots)
18 Components
Controller enclosure — rear panel layout
2U48 controller enclosures

The diagram and table below display and identify important component items that comprise the rear panel layout of a 2U48 controller enclosure. The 6844 is shown as a representative example of 2U48 controller enclosure models included in the product series.
1 AC power supply
2 AC power switch
3 Controller module A (see face plate detail figures)
4 Controller module B (see face plate detail figures)
Figure 9 6004 Series controller enclosure: rear panel (2U)
A controller enclosure accommodates two AC power supply FRUs within the two power supply slots (see the two instances of callout No.1 above). The controller enclosure accommodates two controller module FRUs of the same type within the I/O module (IOM) slots (see callouts No.3 and No.4 above).
IMPORTANT: The 6004 Series enclosures support dual-controller environments only. Single-controller support is provided only when a controller fails over to its partner controller. A controller module must be installed in each IOM slot to ensure sufficient airflow through the enclosure during operation.
The diagrams with tables that immediately follow provide descriptions for the controller modules and power supply modules that can be installed into the rear panel of a 6004 Series controller enclosure. Showing controller modules and power supply modules separately from the enclosure enables improved clarity in identifying the component items called out in the diagrams and described in the tables.
Descriptions are also provided for optional drive enclosures supported by 6004 Series controller enclosures for expanding storage capacity.
NOTE: 6004 Series enclosures support hot-plug replacement of redundant controller modules, fans, power supplies, and expansion modules. Hot-add replacement of drive enclosures is also supported.

4U56 controller enclosure

The diagram and table below display and identify important component items that comprise the rear panel layout of a 4U56 controller enclosure.
1 Controller module A (see face plate detail figure)
2 Controller module B (see face plate detail figure)
3 AC power supply switch
4 AC power supply (PSU)
5 Fan control module (FCM)
Figure 10 6004 Series controller enclosure: rear panel (4U)
The controller enclosure accommodates two controller module FRUs of the same type within the I/O module (IOM) slots (see callouts No.1 and No.2 above). The controller enclosure accommodates two AC power supply FRUs of the same type within the two power supply slots (see two instances of callout No.4 above). Beneath each power supply is a power supply switch (see two instances of callout No.3 above). The controller enclosure accommodates two fan control modules (see two instances of callout No.5 above).
IMPORTANT: The 6004 Series enclosures support dual-controller environments only. Single-controller support is provided only when a controller fails over to its partner controller. A controller module must be installed in each IOM slot to ensure sufficient airflow through the enclosure during operation.
Figure 10 shows a 4U56 controller enclosure equipped with AC power supplies. Alternatively, the
enclosure can be equipped with redundant DC power supplies (see Figure 50 on page 49).
The diagrams with tables that immediately follow provide descriptions for the controller modules, power supply modules, and fan control modules that can be installed into the rear panel of a 6004 Series controller enclosure. Showing them separately from the enclosure enables improved clarity in identifying the component items called out in the diagrams and described in the tables.
Descriptions are also provided for optional drive enclosures supported by 6004 Series controller enclosures for expanding storage capacity.
NOTE: 6004 Series enclosures support hot-plug replacement of redundant controller modules, fans, power supplies, and expansion modules. Hot-add replacement of drive enclosures is also supported.
6844/6854 CNC controller module — rear panel components
Figure 11 shows CNC ports configured with SFPs supporting either 4/8/16 Gb FC or 10GbE iSCSI. The
SFPs look identical. Refer to the CNC LEDs that apply to the specific configuration of your CNC ports.
1 CNC ports used for host connection or replication (see Install an SFP transceiver on page 128)
2 CLI port (USB - Type B) [see Appendix D]
3 Service port 2 (used by service personnel only)
4 Reserved for future use
5 Network port
6 Service port 1 (used by service personnel only)
7 Disabled button (used by engineering only; sticker shown covering the opening)
8 mini-SAS expansion port
Figure 11 6844/6854 controller module face plate (FC or 10GbE iSCSI)
Figure 12 shows CNC ports configured with 1 Gb RJ-45 SFPs.
1 CNC ports used for host connection or replication (see Install an SFP transceiver on page 128)
2 CLI port (USB - Type B) [see Appendix D]
3 Service port 2 (used by service personnel only)
4 Reserved for future use
5 Network port
6 Service port 1 (used by service personnel only)
7 Disabled button (used by engineering only; sticker shown covering the opening)
8 mini-SAS expansion port
Figure 12 6844/6854 controller module face plate (1 Gb RJ-45)
NOTE: See CNC ports used for host connection on page 10 for more information about CNC technology. For
CNC port configuration, see the “Configuring host ports” topic within the Storage Management Guide or online help.
6544/6554 SAS controller module — rear panel components
Figure 13 shows host ports configured with 12 Gbit/s HD mini-SAS (SFF-8644) connectors.
1 HD mini-SAS ports used for host connection
2 CLI port (USB - Type B) [see Appendix D]
3 Service port 2 (used by service personnel only)
4 Reserved for future use
5 Network port
6 Service port 1 (used by service personnel only)
7 Disabled button (used by engineering only; sticker shown covering the opening)
8 mini-SAS expansion port
Figure 13 6544/6554 controller module face plate (HD mini-SAS)

Supported drive enclosures

AssuredSAN controller enclosures support compatible Dot Hill drive enclosures—using 2U and 4U form factors—for adding storage.

J6X48 drive enclosure rear panel components

6004 Series controller enclosures support SFF J6X48 48-disk drive enclosures in the 2U form factor for expansion of storage capacity. These drive enclosures use mini-SAS (SFF-8088) connectors to facilitate backend SAS expansion. Each J6X48 PSU is configured with a power switch.
1 AC power supply
2 AC power supply switch
3 Expansion module A
4 Expansion module B
5 SAS In port
6 Service port (used by service personnel only)
7 SAS Out port
Figure 14 Drive enclosure rear panel view (2U form factor)
The J6X48 expansion enclosure uses the same 2.5" SFF disk drive modules used by the 2U48 controller enclosures (see Figure 4 on page 15 for a visual representation of disk drive modules and AMS inserts used by the J6X48). See Cable requirements for storage enclosures on page 38 for cabling information.

J6X56 drive enclosure rear panel components

6004 Series controller enclosures support LFF 56-disk drive enclosures (J6X56) for expansion of storage capacity. These drive enclosures use HD mini-SAS (SFF-8644) connectors to facilitate backend SAS expansion. See Cable requirements for storage enclosures on page 38 for cabling information. The example below features AC power supplies (DC not shown).
1 Expansion module A
2 Expansion module B
3 AC power supply switch
4 AC power supply module
5 Fan control module
Figure 15 Drive enclosure rear panel view (4U form factor)
J6X56 expansion module — rear panel components
Figure 16 shows the expansion module ports configured with 12 Gbit/s HD mini-SAS (SFF-8644) connectors.
1 HD mini-SAS ports–SAS in
2 HD mini-SAS expansion port–SAS out
3 Service port (used by service personnel only)
Figure 16 J6X56 expansion module face plate (HD mini-SAS)

Component installation and replacement

Installation and replacement of 6004 Series FRUs (field-replaceable units) is addressed in the FRU Installation and Replacement Guide within the “Procedures” chapter.
FRU procedures facilitate replacement of a damaged chassis or chassis component:
Replacing a controller or expansion module
Replacing a disk drive module
Replacing a power supply unit
Replacing a fan control module (4U56)
Replacing ear components
Replacing a Fibre Channel transceiver
Replacing a 10GbE SFP+ transceiver
Replacing a 1 Gb SFP transceiver
Replacing a controller enclosure chassis
See Dot Hill’s Customer Resource Center web site for additional information: https://crc.dothill.com

Cache

To enable faster data access from disk storage, the following types of caching are performed:
Write-back or write-through caching. The controller writes user data into the cache memory in the controller module rather than directly to the disks. Later, when the storage system is either idle or aging—and continuing to receive new I/O data—the controller writes the data to the disks.
Read-ahead caching. The controller detects sequential data access, reads ahead into the next
sequence of data—based upon settings—and stores the data in the read-ahead cache. Then, if the next read access is for cached data, the controller immediately loads the data into the system memory, avoiding the latency of a disk access.
TIP: See the Storage Management Guide for more information about cache options and settings.
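As an illustrative sketch only, cache behavior can also be inspected and tuned from the CLI. The command names below follow the general pattern of this CLI family, but the exact syntax and the volume name are assumptions; confirm both in the CLI Reference Guide:

```
# Show the current cache settings for the system and its volumes
show cache-parameters

# Set the write policy for a volume to write-back
# ("vol0001" is a placeholder volume name)
set cache-parameters write-policy write-back vol0001
```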

CompactFlash

During a power loss or controller failure, data stored in cache is saved off to non-volatile memory (CompactFlash). The data is restored to cache, and then written to disk after the issue is corrected. To protect against writing incomplete data to disk, the image stored on the CompactFlash is verified before committing to disk.
The CompactFlash card is located at the midplane-facing end of the controller module as shown in
Figure 17 on page 25. Do not remove the card; it is used for cache recovery only.
IMPORTANT: In dual-controller configurations featuring one healthy partner controller, there is no need to transport failed controller cache to a replacement controller because the cache is duplicated between the controllers (subject to volume write optimization setting).
IMPORTANT: The 6004 Series enclosures support dual-controller environments only. Do not transport the CompactFlash since data corruption might occur. Single-controller support is provided only when a controller fails over to its partner controller.
CAUTION: The CompactFlash memory card should only be removed for transportable purposes. To preserve the existing data stored in the CompactFlash, you must transport the CompactFlash from the failed controller to the replacement controller using the procedure outlined in the FRU Installation and Replacement Guide for replacing a controller module. Failure to use this procedure will result in the loss of data stored in the cache module. The CompactFlash must stay with the same enclosure. If the CompactFlash is used or installed in a different enclosure, data loss or data corruption will occur.

Supercapacitor pack

To protect controller module cache in case of power failure, each controller enclosure model is equipped with supercapacitor technology, in conjunction with CompactFlash memory, built into each controller module to provide extended cache memory backup time. The supercapacitor pack provides energy for backing up unwritten data in the write cache to the CompactFlash, in the event of a power failure. Unwritten data in CompactFlash memory is automatically committed to disk media when power is restored. In the event of power failure, while cache is maintained by the supercapacitor pack, the Cache Status LED flashes at a rate of 1/10 second on and 9/10 second off.
Figure 17 CompactFlash card
2 Installing the enclosures

Installation checklist

The following table outlines the steps required to install the enclosures, and initially configure and provision the storage system. To ensure successful installation, perform the tasks in the order presented.
IMPORTANT: For 4U56 enclosures, retain original packaging materials for use with returns. For chassis returns, the master container must ship on a pallet (non-compliance could void warranty).
Table 3 Installation checklist

Step 1. Install the controller enclosure and optional drive enclosures in the rack. See the rack-mount bracket kit installation instructions pertaining to your enclosure.
Step 2. Install the disk drive modules into the drawers (see note 1; the enclosure bezel must be removed for this task). See Populating Drawers on page 27.
Step 3. Attach the enclosure bezel (see note 1). Refer to the bezel attachment instructions for your enclosure.
Step 4. Connect controller enclosure and optional drive enclosures. See Connecting the controller enclosure and drive enclosures on page 36.
Step 5. Connect power cords. See Powering on/powering off on page 46.
Step 6. Test enclosure connectivity. See Testing enclosure connections on page 46.
Step 7. Install required host software. See Host system requirements on page 51.
Step 8. Connect hosts (see note 2). See Connecting the enclosure to hosts on page 51.
Step 9. Connect remote management hosts. See Connecting a management host on the network, page 61.
Step 10. Obtain IP values and set network port IP properties on the controller enclosure. See Obtaining IP values on page 68. For USB CLI port and cable use, see Appendix D.
Step 11. Use the CLI to set the host interface protocol. See CNC technology on page 51. The 6844/6854 models allow you to set the host interface protocol for your qualified SFP option. Use the set host-port-mode command as described in the CLI Reference Guide or online help.
Step 12. Perform initial configuration tasks (see note 3):
Sign in to the web-browser interface (v3 or v2) to access the application GUI. See "Getting Started" in the web-posted Storage Management Guide.
Verify firmware revisions and update if necessary. See Updating firmware. Also see the same topic in the Storage Management Guide.
Initially configure and provision the system using the SMC (v3) or RAIDar (v2). See "Configuring the System" and "Provisioning the System" topics in the Storage Management Guide or online help.

Note 1: See the FRU Installation and Replacement Guide for illustrations and narrative describing attachment of enclosure bezels to chassis. See also Enclosure bezel attachment and removal on page 96. The 4U56 chassis includes hard copy shipkit instructions describing enclosure bezel attachment and removal.
Note 2: For more about hosts, see the "About hosts" topic in the Storage Management Guide.
Note 3: The SMC and RAIDar are introduced in Accessing the SMC or RAIDar on page 73. See the Storage Management Guide or online help for additional information.
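As a hedged sketch of checklist steps 10 and 11, a CLI session might resemble the following. The set host-port-mode command is named in the checklist itself; the set network-parameters syntax and the address values shown are illustrative assumptions, so verify both commands in the CLI Reference Guide:

```
# Step 10 (sketch): set network port IP properties on each controller
# (addresses shown are placeholders)
set network-parameters ip 10.0.0.2 netmask 255.255.255.0 gateway 10.0.0.1 controller a
set network-parameters ip 10.0.0.3 netmask 255.255.255.0 gateway 10.0.0.1 controller b

# Step 11 (sketch): set the host interface protocol for qualified SFPs
# (6844/6854 models only; use FC or iSCSI to match your SFP option)
set host-port-mode iSCSI
```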

26 Installing the enclosures


Populating Drawers

The enclosure front panel, with the bezel removed, shows the left (0), middle (1), and right (2) drawers. The drawer handle functions identically on all drawers.

Populating drawers in 2U48 enclosures

Although the 2U48 chassis provides pre-assembled and pre-installed drawers, disk drive modules must be installed into the drawers. In addition to locating your disk modules—and any Air Management Solution (AMS) inserts, you should become familiar with the following concepts before populating the drawers:
Full Disk Encryption (FDE) firmware feature (see FDE considerations on page 35).
Preventing electrostatic discharge (see Electrostatic discharge appendix).
IMPORTANT: Please review the bullet topics above before populating the drawers.
Opening and closing a 2U16 drawer
You can open a drawer for visual inspection of disk bays. Before accessing the drawer via its handle, you must first remove the enclosure bezel (see Enclosure bezel removal on page 97). Given that the enclosure bezel is required to provide EMI protection, you should re-attach the bezel to the enclosure after examining the drawer (see Enclosure bezel attachment on page 97).
1. Using a Torx T15 or straight blade screwdriver, loosen the drawer stop screw on the front face of the
drawer. Once the screw is loosened, turn the outer thumbwheel counter-clockwise to unlock the drawer. Take
care not to remove the screw.
Figure 18 Opening a 2U16 drawer: loosen the drawer stop screw
2. Revolve the drawer handle upwards by 90° to enable pulling the drawer outward for viewing disks.
Figure 19 Opening a 2U16 drawer: revolve the handle
3. Face the front of the drawer—and using the handle—pull the drawer outward along the drawer slide until it meets the drawer stop.
Figure 20 Opening and closing a 2U16 drawer: pull or push drawer along slide
To close the drawer, simply slide the drawer into the enclosure along the drawer slide until it properly seats in the drawer bay. Take care to ensure there are no loose cable wires protruding beyond the limits of the igus chainflex cable. After closing the drawer, revolve the handle downwards such that it is flush with the drawer front panel—in its stowed position—and re-attach the bezel to the front of the enclosure.
Aligning an AMS or disk module for installation into a 2U16 drawer
Once you have opened the drawer, you can access the disk bays. The enclosure uses an SFF sledded disk positioned to lay on its side, with the disk PCBA facing up, for insertion into the disk slot within the drawer. Each disk is mated to a connector on the drawer PCBA. If a bay does not contain a full set of four disks, the enclosure uses an AMS insert within the disk bay to manage air flow within the enclosure and help maintain optimal operating temperature. A new AMS is also available for single disk slots.
IMPORTANT: Each disk bay must be populated with either a full complement of four disk drive modules, a disk bay AMS insert (shown above), or a combination of disks and single disk slot AMS inserts (see
Figure 24 on page 29). Empty disk slots are disallowed.
Figure 21 Align AMS or disk module for installation into the 2U16 drawer
Installing an AMS into a 2U16 drawer
The single disk slot AMS insert is slated to replace the disk bay AMS insert over time.
Refer to Figure 22 when orienting the AMS for insertion into the target drawer. If you are installing into the left drawer or middle drawer, refer to the illustration on the left when performing this step-procedure. If you are installing into the right drawer, refer to the illustration on the right when performing this step-procedure.
Figure 22 Orient the AMS for installation (2U48)
1. Squeeze the latch release flanges together—so that the locking-nib will clear the sheet metal bay
wall—and insert the AMS into the target disk bay.
2. Verify that the AMS is firmly seated in place.
The installed AMS should now appear as shown in the sectioned cutaway views of the respective drawers in Figure 23.
Figure 23 Secure the AMS into the disk bay (2U48)
If using the new single disk slot AMS insert shown in Figure 24, the insertion steps are essentially the same as those described for Figure 22 and Figure 23 above; however, they pertain to a single disk slot rather than a disk bay (four vertically-contiguous disk slots).
Figure 24 AMS insert for a single disk slot (2U48)
Installing a disk module into a 2U16 drawer
IMPORTANT: Please review FDE considerations on page 35 before populating the 6004 Series enclosure drawers.
Refer to Figure 25 when orienting the disk drive module for insertion into the target drawer. If you are installing a disk drive module in the left drawer or middle drawer, refer to the illustration on the left when performing the step-procedure. If you are installing a disk drive module in the right drawer, refer to the illustration on the right when performing this step-procedure.
Also see the admonition—following this procedure—concerning the installation of disk drive modules into an operating enclosure (not powered-off).
Figure 25 Orient the disk for installation (2U48)
1. While supporting the bottom of the disk module with one hand (disk PCBA should be facing up and
latch release flanges should be facing out)—align the disk for insertion into the target disk slot.
2. Using your other hand, squeeze the latch release flanges together—so that the locking-nib will clear the
sheet metal bay wall—and insert the disk drive module into the target slot.
Figure 26 Secure the disk module into the drive slot (2U48)
3. Gently push the disk drive module into the slot until it latches in place.
The installed disk drive module should now appear as shown in the sectioned cutaway views of the respective drawers in Figure 26.
IMPORTANT: Disk drive module replacement guidelines: When replacing disks in an operating enclosure, only one disk drive module can be replaced at a time (see "Replacing a disk drive module" within the FRU Installation and Replacement Guide).

Populating drawers in 4U56 enclosures

The label on each drawer identifies its disk slots: Drawer 0 holds drives 0–27 and Drawer 1 holds drives 28–55. Moving the drawer toggle to the right position allows Drawer 0 to pull forward.
Although the 4U56 chassis provides pre-assembled and pre-installed drawers, disk drive modules must be installed into the drawers. In addition to locating your disk modules, you should become familiar with the following concepts before populating the drawers:
Full Disk Encryption (FDE) firmware feature (see FDE considerations on page 35).
Preventing electrostatic discharge (see Electrostatic discharge appendix).
IMPORTANT: Please review the bullet topics above before populating the drawers.
Opening and closing a 4U28 drawer
You can open a drawer for visual inspection of disk bays. Before accessing the drawer via its handle, you must first remove the enclosure bezel (see Enclosure bezel removal on page 97). Given that the enclosure bezel is required to provide EMI protection, you should re-attach the bezel to the enclosure after examining the drawer (See Enclosure bezel attachment on page 97).
1. Using a No.2 Phillips screwdriver, loosen the two screws securing the handle to the front face of the drawer. Once the two screws on the target drawer are loosened, turn the thumbwheel counter-clockwise to disengage the handle from its upright stowed position.
2. Move the drawer toggle to enable the target drawer to travel along the slide.
Move the toggle to the right to open Drawer 0 (left drawer); or move the toggle to the left to open Drawer 1 (right drawer).
3. Revolve the drawer handle downwards by 90° to enable pulling the drawer outward for viewing disk slots.
Figure 27 Opening a 4U28 drawer: loosen screws and move the drawer toggle
Figure 28 Opening a 4U28 drawer: revolve the handle
The detail at left in Figure 28 shows the drawer handle in the pull and stowed (closed and locked) positions.
4. Face the front of the drawer—and using the handle—pull the drawer outward along the drawer slide
until it meets the drawer stop (see Figure 29 on page 33).
To close the drawer, simply slide the drawer into the enclosure along the drawer slide until it properly seats in the drawer bay. Take care to ensure that no loose cable wires protrude beyond the limits of the igus chainflex cable. After closing the drawer, rotate the handle upwards such that it is flush with the drawer front panel. Tighten the two thumbwheel screws on each drawer handle. Re-attach the enclosure bezel to the front of the enclosure (see Enclosure bezel attachment on page 97).
Aligning a disk module for installation into a 4U56 drawer
Once you have opened the drawer, you can access the disk bays. The enclosure uses a sledded disk positioned to stand on end, for insertion into the drawer. Each sledded disk or SSD is mated to its connector on the drawer PCBA.
IMPORTANT: Please refer to the disk slot numbering diagram on the center exterior wall of the target drawer.
For Drawer 0 (left drawer), the pictorial is on the right exterior drawer wall. For Drawer 1 (right drawer) the pictorial is on the left exterior drawer wall. Disk row and slot numbering is also provided in Figure 7 on page 17.
NOTE: Blank disk drive slots are allowed. Unlike other Dot Hill Systems enclosures (2U48), the 4U56 enclosure does not employ an Air Management Solution (AMS) for use in empty disk drive slots.
During setup of your storage system, you will need to install disk modules into the drawers. You may also need to remove a disk module, or move it to a different slot. Both procedures are provided herein.
Figure 29 Align the disk module for installation into an open 4U28 drawer
Installing a disk module into a 4U28 drawer
IMPORTANT: Please review FDE considerations on page 35 before populating the enclosure drawers.
Refer to Figure 29 when orienting the disk drive module for insertion into the target drawer. The disk installation procedure applies to the left drawer (Drawer 0) and the right drawer (Drawer 1). Disk row and slot numbering for each drawer is provided on the sticker applied to the exterior wall of each drawer.
1. With the disk module standing on end—and the LEDs oriented to the left—insert the disk module into
the vertically-aligned disk slot, and seat it into the connector on the drawer PCBA.
Figure 30 Install a disk into a drawer slot (4U56)
2. Verify that you have inserted the disk module into the slot as far as it will go, to ensure that the module is firmly seated in the drawer PCBA and latched in place.
IMPORTANT: If you are completely filling a drawer with disk modules, populate from back row to front row, while installing disks into the slots. Provide adequate support for the weight of the extended drawer as you install the disks. If you are installing disk modules to partially fill a drawer, you must install a minimum of 14 disk modules, and they must be placed in contiguous slots closest to the front of the drawer.
Removing a disk module from a 4U28 drawer
1. Using your index finger, slide the release latch, located in the right pocket on the face of the disk drive module, to the left to disengage the disk drive module (see the detail inset view in Figure 31). Moving the latch to the left produces a clicking sound and causes the spring to shift position inside the drawer cage, partially ejecting the disk from its installed position within the disk drive slot.
Figure 31 Remove a disk from a drawer slot (4U56)
2. Wait 20 seconds for the internal disk to stop spinning.
3. Once the disk drive module partially ejects from the slot, grasp the module firmly, and carefully pull it
straight out of the drawer slot. Take care not to drop the module.
Examples of fully-populated drawers are shown below as isolated sub-assembly pictorials.
Figure 32 Drawer 0 with full complement of disks (4U56)
The complementary views of the left drawer (Figure 32) and right drawer (Figure 33 on page 35) viewed from front bird’s-eye orientation show key characteristics of the drawers, including a) staggered elevation of slide rails, b) locations of applied drawer row/slot-numbering reference diagrams, and c) orientation of installed disk modules.

FDE considerations

The Full Disk Encryption feature available via the management interfaces requires use of self-encrypting drives (SED) which are also referred to as FDE-capable disk drive modules. When installing FDE-capable disk drive modules, follow the same procedures for installing disks that do not support FDE. The exception occurs when you move FDE-capable disk drive modules for one or more vdisks or disk groups to a different system, which requires additional steps.
The procedures for using the FDE feature, such as securing the system, viewing disk FDE status, and clearing and importing keys are performed using the web-based (v3 or v2) application or CLI commands (see the Storage Management Guide or CLI Reference Guide for more information).
NOTE: When moving FDE-capable disk drive modules for a disk group, stop I/O to any disk groups before removing the disk drive modules. Follow the “Removing a disk drive module” and “Installing a disk drive module” procedures within the FRU Installation and Replacement Guide. Import the keys for the disks so that the disk content becomes available.
While replacing or installing FDE-capable disk drive modules, consider the following:
If you are installing FDE-capable disk drive modules that do not have keys into a secure system, the
system will automatically secure the disks after installation. Your system will associate its existing key with the disks, and you can transparently use the newly-secured disks.
If the FDE-capable disk drive modules originate from another secure system, and contain that system’s
key, the new disks will have the Secure, Locked status. The data will be unavailable until you enter the passphrase for the other system to import its key. Your system will then recognize the metadata of the disk groups and incorporate it. The disks will have the status of Secure, Unlocked and their contents will be available:
• To view the FDE status of disks, use the SMC or RAIDar, or the show fde-state CLI command.
• To import a key and incorporate the foreign disks, use the SMC or RAIDar, or the set fde-import-key CLI command.
Figure 33 Drawer 1 with full complement of disks (4U56)
NOTE: If the FDE-capable disks contain multiple keys, you will need to perform the key importing
process for each key to make the content associated with each key become available.
• If you do not want to retain the disks' data, you can repurpose the disks. Repurposing disks deletes all disk data, including lock keys, and associates the current system's lock key with the disks. To repurpose disks, use the SMC or RAIDar, or the set disk CLI command.
• You need not secure your system to use FDE-capable disks. If you install all FDE-capable disks into a system that is not secure, they will function exactly like disks that do not support FDE. As such, the data they contain will not be encrypted. If you decide later that you want to secure the system, all of the disks must be FDE-capable.
• If you install a disk module that does not support FDE into a secure system, the disk will have the Unusable status and will be unavailable for use.
• If you are re-installing your FDE-capable disk drive modules as part of the process to replace the chassis-and-midplane FRU, you must insert the original disks and re-enter their FDE passphrase (see the FRU Installation and Replacement Guide for more information).
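The FDE tasks described above map to a handful of CLI commands. The session below is an illustrative sketch only (parameter names and output vary by firmware release, and the passphrase is a placeholder); confirm exact syntax in the CLI Reference Guide:

```
# show fde-state
    (displays the system's FDE security status and lock key state)

# set fde-import-key passphrase "<other-system-passphrase>"
    (imports the foreign lock key; affected disks change from Secure, Locked
     to Secure, Unlocked and their disk groups become available)

# set disk 1.1 repurpose
    (deletes all data and lock keys on disk 1.1, then associates the current
     system's lock key with that disk)
```

If disks carry multiple keys, repeat the import step for each key, as noted above.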

Network Equipment-Building System (NEBS) Level 3 compliance

Generic Requirements (GRs)

• Meets the NEBS requirements of GR-1089-CORE Issue 6, port types 2, 7, and 8.
• Meets the NEBS requirements of GR-63-CORE Issue 4, for the product's intended use.
• Meets the NEBS filter requirements.
IMPORTANT: Table 5 on page 38 shows NEBS compliance for individual storage enclosures.
Exceptions to GRs
Exceptions to the overall NEBS GR-63-CORE Issue 4 requirements include:
• Heat Dissipation: Environmental Criteria Section 4.1.6, Objective O4-29 and Operational Requirement R4-31. This product exceeds the values shown in Table 4-5 for Forced-Air Fan Shelf equipment.
• Surface Reflectance: Section 4.1.7. The equipment does not meet Objective O4-103, as it has a black bezel that mounts to the front.
Exceptions to the overall NEBS GR-1089-CORE Issue 6 requirements include:
None reported
Product documentation requirements
NEBS product documentation requirements applying to AssuredSAN 6004 Series controller and drive enclosures are listed beneath “NEBS (Level 3)” in the Index — under either GR-1089-CORE Issue 6 or GR-63-CORE Issue 4 — together with adjacent page locations. NEBS topics are integrated within the overall content of this Setup Guide. The requirement designators in the Index have been codified for use within index marker tags according to the following example:
NEBS generic requirement number “R2-7 [7]” appears as “R2-7.7” within the Index.
Each codified string (e.g., R2-7.7) is followed by a hyphen and brief description of the requirement. Within the Index, click on the blue page number link to navigate to the corresponding NEBS topic.

Connecting the controller enclosure and drive enclosures

AssuredSAN 6004 Series controller enclosures are available in 48-drive (2.5") and 56-drive (3.5") chassis. The high-density 2U enclosure supports up to four enclosures, or a maximum of 192 disk drives. The high-density 4U enclosure supports up to eight enclosures, or a maximum of 248 disk drives for 56-drive enclosures. The maximum number of storage enclosures cited includes the controller enclosure.
The 6004 Series enclosures support both straight-through and reverse SAS cabling. Reverse cabling allows any drive enclosure to fail—or be removed—while maintaining access to other enclosures. Fault tolerance and performance requirements determine whether to optimize the configuration for high availability or high performance when cabling. AssuredSAN 6004 Series controller modules support both 3-Gbps and 6-Gbps internal disk drive speeds together with 3 Gbps and 6 Gbps expander link speeds.
CAUTION: Some 6-Gbps disks might not consistently support a 6-Gbps transfer rate. If this happens, the system automatically adjusts transfers to those disks to 3 Gbps, increasing reliability and reducing error messages with little impact on system performance. This rate adjustment persists until the controller is restarted or power-cycled.
AssuredSAN controller enclosures support compatible Dot Hill drive enclosures for adding storage. Supported enclosure form factors include high-capacity models in 2U (2U48) and 4U (4U56) format. A summary overview of drive enclosures supported by controller enclosures is provided herein.
NOTE: See Table 4 for a comparative tabular view of AssuredSAN drive enclosures supported by AssuredSAN controller enclosures.
Table 4 Summary of drive enclosures supported by controller enclosures (1, 2, 3)

RAID model | Form | Expansion models
6004       | 2U48 | J6X48, J6X56
6004       | 4U56 | J6X48, J6X56

Chassis:
4U56: Chassis (4 rack units high) with 56 LFF (3.5") sledded disk drive modules.
2U48: Chassis (2 rack units high) with 48 SFF (2.5") sledded disk drive modules.
Models:
J6X56: 4U56 chassis configured as a 6 Gb drive (expansion) enclosure for adding storage.
J6X48: 2U48 chassis configured as a 6 Gb drive (expansion) enclosure for adding storage.

1 See https://www.dothill.com for the most current information concerning support of drive enclosures for storage expansion.
2 See the website (above) or the Setup Guide for your product model for enclosure maximum configuration limits.
3 Using high-capacity drive enclosures reduces the maximum number of enclosures supported, without materially affecting the maximum number of disks allowed.
Cabling diagrams in this section show fault-tolerant cabling patterns. Controller and expansion modules are identified by <enclosure-ID><controller-ID>. When connecting multiple drive enclosures, use reverse cabling to ensure the highest level of fault tolerance, enabling controllers to access remaining drive enclosures if a drive enclosure fails.
For example, the illustration in Figure 42 on page 43 shows reverse cabling, wherein controller 0A (i.e., enclosure-ID = 0; controller-ID = Able) is connected to expansion module 1A, with a chain of connections cascading down (blue). Controller 0B is connected to the lower expansion module (B) of the last drive enclosure in the chain, with connections moving in the opposite direction (green). Several cabling examples are provided on the following pages.

Connecting the 6004 Series controller to the 2U48 drive enclosure

The high-capacity J6X48 48-drive enclosure, supporting 6 Gb internal disk drive and expander link speeds, can be attached to a 6004 Series controller enclosure using supported mini-SAS to mini-SAS cables of 0.5 m (1.64') to 2 m (6.56') length (see Figure 34 on page 39).

Connecting the 6004 Series controller to the 4U56 drive enclosure

The high capacity J6X56 56-drive enclosure, supporting 6 Gb internal disk drive and expander link speeds, can be attached to a 6004 Series controller enclosure using supported mini-SAS to mini-SAS cables of 1 m (3.28') to 2 m (6.56') length (see Figure 34 on page 39).

Connecting the 6004 Series controller to mixed model drive enclosures

The 6004 Series controllers support cabling of 6 Gb SAS link-rate SFF and LFF expansion modules—in mixed model fashion—as shown in Figure 34 on page 39.

Cable requirements for storage enclosures

The 6004 Series enclosures support 6 Gbps or 3 Gbps expansion port data rates. Use only AssuredSAN or OEM-qualified cables, and observe the following guidelines (see Table 5 on page 38):
• When installing SAS cables to expansion modules, use only supported mini-SAS x4 cables with SFF-8644 connectors.
• Qualified mini-SAS to mini-SAS 0.5 m (1.64') cables are used to connect cascaded enclosures in the rack. The "HD mini-SAS to HD mini-SAS" cable designator connotes SFF-8644 to SFF-8644 connectors.
• The maximum expansion cable length allowed in any configuration is 2 m (6.56').
• Cables required, if not included, must be separately purchased.
• When adding more than two drive enclosures, you may need to purchase additional 1 m or 2 m cables, depending upon the number of enclosures and the cabling method used:
  • Spanning 3, 4, or 5 drive enclosures requires 1 m (3.28') cables.
  • Spanning 6 or 7 drive enclosures requires 2 m (6.56') cables.
• You may need to order additional or longer cables when reverse-cabling a fault-tolerant configuration (see Figure 38 on page 41).
• Use only AssuredSAN or OEM-qualified cables for host connection:
  • Qualified Fibre Channel SFP and cable options
  • Qualified 10GbE iSCSI SFP and cable options
  • Qualified 1 Gb RJ-45 SFP and cable options
  • Qualified HD mini-SAS standard cable options supporting SFF-8644 and SFF-8088 host connection (also see HD mini-SAS host connection on page 54): a qualified SFF-8644 to SFF-8644 cable option is used for connecting to a 12 Gbit/s enabled host, whereas a qualified SFF-8644 to SFF-8088 cable option is used for connecting to a 6 Gbit/s enabled host.
NOTE: The 4U56 enclosure does not use the 0.5 m (1.64') length supported for 2U enclosures.
TIP: Requirements for cabling 6004 Series controller enclosures and supported drive enclosures
are summarized in Table 5.
Table 5 summarizes key characteristics of controller enclosures and compatible drive (expansion)
enclosures relative to cabling concerns.
Table 5 Summary of cabling connections for AssuredSAN 6004 Series storage enclosures

Model       | Form | Host connect                                                             | NEBS   | Expansion cabling
6844 (1, 2) | 2U48 | Qualified CNC option: FC (8/16 Gb) SFP, 10GbE iSCSI SFP, 1 Gb iSCSI SFP  | —      | mini-SAS to mini-SAS
6854 (1, 2) | 4U56 | Qualified CNC option: FC (8/16 Gb) SFP, 10GbE iSCSI SFP, 1 Gb iSCSI SFP  | Note 4 | mini-SAS to mini-SAS
6544 (1, 3) | 2U48 | HD mini-SAS connector: standard cables                                   | —      | mini-SAS to mini-SAS
6554 (1, 3) | 4U56 | HD mini-SAS connector: standard cables                                   | Note 4 | mini-SAS to mini-SAS
J6X48       | 2U48 | SFF 48-disk chassis (drive enclosure)                                    | —      | mini-SAS to mini-SAS
J6X56       | 4U56 | LFF 56-disk chassis (drive enclosure)                                    | Note 4 | mini-SAS to mini-SAS
Enclosure chassis designators:
4U56: Enclosure measuring four rack units high, providing 56 LFF (3.5") sledded disk drive modules. 2U48: Enclosure measuring two rack units high, providing 48 SFF (2.5") sledded disk drive modules.
See Physical requirements on page 120 for more information about AssuredSAN storage enclosures.
1 These compatible product models feature 6 Gbit/s internal disk and SAS expander link speeds.
2 See CNC technology on page 51 for information about locating and installing qualified SFP options into CNC ports.
3 See 12 Gb HD mini-SAS on page 53 for information about host connection using SFF-8644 high-density mini-SAS connectors.
4 The 6004 Series 4U56 enclosures are designed for NEBS compliance; however, the 6004 Series 2U48 enclosures are not.
Summary of drive enclosure cabling illustrations
The following illustrations show both reverse and straight-through cabling examples featuring 6004 Series controller enclosures and compatible J6X48 (2U48) and J6X56 (4U56) drive enclosures.
NOTE: For clarity, the schematic diagrams show only relevant details such as face plate outlines and expansion ports. For detailed illustrations, see Controller enclosure — rear panel layout on page 19. Also see the controller module face plate illustrations that follow the rear panel layout.
6004 Series controller enclosures cabled to a supported drive enclosure
Figure 34 Cabling connections between a 2U controller enclosure and one 2U drive enclosure
Figure 35 Cabling connections between a 2U controller enclosure and one 4U drive enclosure
Figure 36 Cabling connections between a 4U controller enclosure and a 4U drive enclosure
Figure 37 Cabling connections between a 4U controller enclosure and a 2U drive enclosure
The preceding illustrations (Figure 34 through Figure 37) show examples of 2U and 4U controller enclosures cabled to either a 2U or 4U drive enclosure. Each set of examples features enclosures equipped with dual IOMs.
See the “Replacing a controller or expansion module” topic within the FRU Installation and Replacement Guide for additional information.
NOTE: Figure 38 through Figure 41 feature 2U controller enclosures cabled to supported drive enclosures; whereas Figure 42 through Figure 46 feature 4U controller enclosures cabled to supported drive enclosures.
6004 Series 2U controller enclosures cabled to supported drive enclosures
Figure 38 Fault-tolerant cabling between a dual-IOM 2U enclosure and three 2U drive enclosures
The diagram at left (above) shows reverse cabling of a 6004 Series dual-controller 2U enclosure and J6X48 drive enclosures configured with dual-expansion modules. Controller module 0A is connected to expansion module 1A, with a chain of connections cascading down (blue). Controller module 0B is connected to the lower expansion module (3B), of the last expansion enclosure, with connections moving in the opposite direction (green). Reverse cabling allows any expansion enclosure to fail—or be removed—while maintaining access to other enclosures.
The diagram at right (above) shows the same storage components connected using straight-through cabling. Using this method, if an expansion enclosure fails, the enclosures that follow the failed enclosure in the chain are no longer accessible until the failed enclosure is repaired or replaced.
Refer to these diagrams when cabling multiple compatible 2U drive enclosures together with the 6004 Series controller enclosure.
IMPORTANT: Guidelines for stacking enclosures in the rack are provided in the rackmount bracket kit installation sheet provided with your product.
TIP: Per common convention, in diagrams the controller enclosure is shown atop the stack of connected enclosures. In practice, for Figure 44 through Figure 46, you can invert the order of the stack for optimal weight and placement stability within the rack. The schematic representation of the cabling remains unchanged. See the rackmount bracket kit installation instructions for your product(s) for more detail.
Figure 39 Reverse cabling between a dual-controller 2U enclosure and three 4U drive enclosures
The diagram above shows reverse cabling of a 6004 Series dual-controller 2U enclosure and 4U drive enclosures configured with dual-expansion modules. Controller module 0A is connected to expansion module 1A, with a chain of connections cascading down (blue). Controller module 0B is connected to the lower expansion module (3B) of the last expansion enclosure, with connections moving in the opposite direction (green). Reverse cabling allows any expansion enclosure to fail—or be removed—while maintaining access to other enclosures.
Figure 40 Straight-through cabling between a dual-controller 2U enclosure and three 4U drive enclosures
Figure 40 shows the same storage components connected using straight-through cabling. Using this method, if an expansion enclosure fails, the enclosures that follow the failed enclosure in the chain are no longer accessible until the failed enclosure is repaired or replaced. Cabling logic is explained in the narrative supporting Figure 39.
Figure 41 Reverse cabling between a dual-controller 2U enclosure and mixed drive enclosures
The diagram above shows dual-controller enclosures cabled to drive enclosures featuring dual-expansion modules. In the example shown above, a 4U drive enclosure is included with 2U drive enclosures within the RAID-array cascade.
6004 Series 4U controller enclosures cabled to supported drive enclosures
Figure 42 Reverse-cabling between a dual-controller 4U enclosure and three 4U drive enclosures
Figure 42 shows reverse-cabling of a 6004 Series dual-controller enclosure and J6X56 drive enclosures
configured with dual-expansion modules. Controller module 0A is connected to expansion module 1A, with a chain of connections cascading down (blue). Controller module 0B is connected to the lower expansion module (3B), of the last expansion enclosure, with connections moving in the opposite direction (green). Reverse cabling allows any expansion enclosure to fail—or be removed—while maintaining access to other enclosures.
Figure 43 Straight-through cabling between a dual-controller 4U enclosure and three 4U drive enclosures
Figure 43 on page 44 shows the same storage components used in Figure 42, but they are connected
using straight-through cabling. Using this method, if an expansion enclosure fails, the enclosures that follow the failed enclosure in the chain are no longer accessible until the failed enclosure is repaired or replaced.
Refer to these diagrams when cabling multiple compatible drive enclosures together with the 6004 Series controller enclosure. The cabling diagrams reflect maximum configuration.
Figure 44 Reverse-cabling between a dual-controller 4U enclosure and three 2U drive enclosures
Figure 44 shows reverse cabling of a 6004 Series dual-controller enclosure and 2U drive enclosures
configured with dual-expansion modules. Controller module 0A is connected to expansion module 1A, with a chain of connections cascading down (blue). Controller module 0B is connected to the lower expansion module (3B) of the last expansion enclosure, with connections moving in the opposite direction (green). Reverse cabling allows any expansion enclosure to fail—or be removed—while maintaining access to other enclosures.
Figure 45 Straight-through cabling between a dual-controller 4U enclosure and three 2U drive enclosures
Figure 45 on page 45 shows the same storage components used in Figure 44, but they are connected
using straight-through cabling. Using this method, if an expansion enclosure fails, the enclosures that follow the failed enclosure in the chain are no longer accessible until the failed enclosure is repaired or replaced.
Figure 46 Reverse-cabling between a dual-controller 4U enclosure and mixed-model drive enclosures
Figure 46 shows reverse cabling of a 6004 Series dual-controller enclosure and supported mixed-model
(2U and 4U) drive enclosures configured with dual-expansion modules. In this example, the 2U enclosures follow the 4U enclosures. Given that all of the supported drive enclosure models use 6 Gb SAS link-rate and SAS 2.0 expanders, they can be ordered in desired sequence within the array, following the controller enclosure.
Controller module 0A is connected to expansion module 1A, with a chain of connections cascading down (blue). Controller module 0B is connected to the lower expansion module (3B), of the last expansion enclosure, with connections moving in the opposite direction (green). Reverse cabling allows any expansion enclosure to fail—or be removed—while maintaining access to other enclosures.

Testing enclosure connections

Power cycling procedures vary according to the type of power supply unit (PSU) provided with the enclosure. Some enclosure models are equipped with PSUs that do not have power switches, whereas 6004 Series enclosures use PSU models equipped with a power switch.
AssuredSAN 4U chassis use different PSU models for AC and DC power. The 2U48 enclosures presently do not use DC power supplies. The AC PSU with power switch—used by the 2U48 and 4U56 chassis—is described in the following section, Powering on/powering off, which also describes power cycling procedures for the PSUs installed within enclosures.
Once the power-on sequence succeeds, the storage system is ready to be connected to hosts as described in Connecting the enclosure to hosts on page 51.
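As a quick check that enclosure connections succeeded, the CLI can enumerate what the controllers discovered after power-on. The commands below are an illustrative sketch (output columns vary by firmware release; see the CLI Reference Guide):

```
# show enclosures
    (each cabled controller and drive enclosure should be listed with OK health)

# show disks
    (the reported disk count should match the installed complement across
     all enclosures)
```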

Powering on/powering off

Before powering on the enclosure for the first time:
Install all disk drives in the enclosure so the controller can identify and configure them at power-up.
NOTE: For high-capacity 2U48 or 4U56 enclosures, you must remove the enclosure bezel and
open the target drawer to access disk slots or view LEDs for disks:
• Remove/attach bezel (Enclosure bezel attachment and removal on page 96).
• Open/close drawer in 2U48 chassis (Opening and closing a 2U16 drawer on page 27).
• Open/close drawer in 4U56 chassis (Opening and closing a 4U28 drawer on page 31).
Connect the cables and power cords to the enclosure as described herein.
Generally, when powering up, make sure to power up the enclosures and associated data host in the following order:
• Drive enclosures first
This ensures that the disks in the drive enclosure have enough time to completely spin up before being scanned by the controller modules within the controller enclosure.
While enclosures power up, their LEDs blink. After the LEDs stop blinking—if no LEDs on the drawer panels and no LEDs on the front and back of the enclosure are amber—the power-on sequence is complete, and no faults have been detected. See LED descriptions on page 96 for descriptions of LED behavior. Navigate to the desired 2U or 4U enclosure for product-specific LED behavior.
• Controller enclosure next
Depending upon the number and type of disks in the system, it may take several minutes for the system to become ready.
• Data host last (if powered down for maintenance purposes).
TIP: Generally, when powering off, you will reverse the order of steps used for powering on.
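For the controller shutdown portion of a power-off sequence, the CLI alternative referenced in the procedures that follow can be sketched as below (illustrative only; confirm exact usage in the CLI Reference Guide):

```
# shutdown both
    (shuts down the Storage Controller in both controller modules)

# show shutdown-status
    (verify that both Storage Controllers report shut down before
     pressing the power switches or removing power)
```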

Power supplies

This section provides an overview of power supplies—2U and 4U form factors—used in 6004 Series enclosures.
CAUTION: Remove power cords from both power supplies before servicing the system.
AC PSU (2U)
Each AC power supply is equipped with a power switch. The illustration below shows the face of an AC power supply module as it appears when viewing the rear panel of the enclosure.
Figure 47 AC PSU (2U)
Figure 48 AC power cord connect (2U)
Obtain two suitable AC power cords: one for each AC power supply that will connect to a separate power source. See Figure 48 and the illustration in Figure 47 when performing the following steps:
1. Verify that the enclosure’s power switches are in the Off position.
2. Identify the power cord connector on the PSU, and locate the target power source.
3. Using the AC power cords provided, plug one end of the cord into the power cord connector on the
PSU. Plug the other end of the power cord into the rack power source.
4. Verify connection of primary power cords from the rack to separate external power sources.
See Power cycle (2U48).
Power cycle (2U48)
Once the system is successfully cabled, use the following procedures to power on and power off.
To power on the system:
1. Power up drive enclosure(s). Press the power switches at the back of each drive enclosure to the On position, and allow several seconds for disks to spin up.
2. Power up the controller enclosure next.
Press the power switches at the back of the controller enclosure to the On position.
To power off the system:
1. Stop all I/O from hosts to the system (see Stopping I/O on page 77).
2. Shut down both controllers using either method described below:
• Use the SMC or RAIDar to shut down both controllers, as described in the online help and Storage Management Guide. Proceed to step 3.
• Use the CLI to shut down both controllers, as described in the CLI Reference Guide.
3. Press the power switches at the back of the controller enclosure to the Off position.
4. Press the power switches at the back of each drive enclosure to the Off position.

AC PSU (4U)
The 6004 Series enclosures use two redundant AC PSUs, each of which is controlled by a power switch mounted on the chassis (beneath the PSU). The illustration below shows the face of an AC power supply module as it appears when viewing the rear panel of the 4U enclosure. Connection of the power cable to the rack power source is also shown.
Figure 49 AC PSU with power switch (4U)
Obtain two suitable power cords: one for each AC power supply that will connect to a separate power source. See Figure 49 when performing the following steps:
1. Verify that the enclosure's power switches are in the Standby position.
2. Identify the power cord connector on the PSU, and locate the target power source.
3. Using the AC power cords provided, plug one end of the cord into the power cord connector on the PSU. Plug the other end of the power cord into the rack power source.
4. Verify connection of primary power cables from the rack to separate external power sources.

DC PSU (4U)
As an alternative to using AC PSUs, the 6004 Series enclosure can use two redundant DC PSUs controlled by the power switch mounted on the chassis.
Figure 50 DC PSU with power switch (4U)
Figure 51 DC power cable featuring 2-circuit header and lug connectors (4U)
See Figure 50 and Figure 51 when performing the following steps:
1. Locate the appropriate DC power cables.
2. Verify that the enclosure’s power switches are in the Standby position.
3. Connect the DC power to each DC power supply using the 2-circuit header connector.
4. Tighten the screws at the base of the connector—left and right sides—applying a torque between 1.7
N-m (15 in-lb) and 2.3 N-m (20 in-lb), to securely attach the cable to the DC power supply module.
5. To complete the DC connection, secure the other end of each cable wire component of the DC power
cable to the target DC power source. Check the two individual DC cable wire labels before connecting each cable wire lug to its power
source. One wire is labeled positive (+L) and the other is labeled negative (-L), as shown in Figure 51 above. The 4U56 enclosure is grounded independently of this DC cable. The chassis ground wire is connected from a stud on its mounting rail to the rack in which it is mounted.
CAUTION: Connecting to a DC power source outside the designated -48 VDC nominal range (-36 VDC to -72 VDC) may damage the enclosure.

Power cycle (4U56)
Once the system is successfully cabled, use the following procedures to power on and power off.
To power on the system:
1. Power up drive enclosure(s) by doing one of the following:
• Press the power switches at the back of each drive enclosure to the On position.
• Connect the power cable from the power source to the connector on the PSU (switchless PSUs).
Allow 2.5 minutes for disks to spin up.
2. Power up the 4U56 controller enclosure next. Press the power switches at the back of the controller enclosure to the On position.
To power off the system:
1. Stop all I/O from hosts to the system (see Stopping I/O on page 77).
2. Shut down both controllers using either method described below:
• Use the SMC or RAIDar to shut down both controllers, as described in the online help and Storage Management Guide. Proceed to step 3.
• Use the CLI to shut down both controllers, as described in the CLI Reference Guide.
3. Press the power switches at the back of the 4U controller enclosure to the Standby position.
4. Power down drive enclosure(s) by doing one of the following:
• Press the power switches at the back of each drive enclosure to the Off position; or
• Disconnect the power cable from the power source to the connector on the PSU (switchless PSUs).

3 Connecting hosts

Host system requirements

Hosts connected to an AssuredSAN 6004 Series controller enclosure must meet the following requirements:
Depending on your system configuration, host operating systems may require multipathing support. If fault tolerance is required, then multipathing software may be required. Host-based multipath software should be used in any configuration where two logical paths between the host and any storage volume may exist at the same time. This would include most configurations where there are multiple connections to the host or multiple connections between a switch and the storage.
• Use native Microsoft MPIO DSM support with Windows Server 2008 and Windows Server 2012. Use either the Server Manager or the command-line interface (mpclaim CLI tool) to perform the installation.
See the following web sites for information about using native Microsoft MPIO DSM:
https://support.microsoft.com https://technet.microsoft.com (search the site for “multipath I/O overview”)
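As an illustrative example of enabling native MPIO from an elevated command prompt on Windows Server (verify the flags against Microsoft's mpclaim documentation for your OS release):

```
C:\> mpclaim -r -i -a ""
    (claims all MPIO-capable storage devices for multipathing and
     restarts the server)

C:\> mpclaim -s -d
    (after restart, lists MPIO-managed disks and their load-balance policies)
```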

Cabling considerations

Common cabling configurations address hosts, controller enclosures, drive enclosures, and switches. Host interface ports on 6004 Series controller enclosures can connect to respective hosts via direct-attach or switch-attach. Cabling systems to enable use of the optional AssuredRemote™ feature—to replicate volumes—is yet another important cabling consideration. See Connecting two storage systems to replicate
volumes on page 61. The 6844/6854 models can be licensed to support replication; whereas the
6544/6554 models do not support the feature.
TIP: Table 5 on page 38 aligns product model numbers with 2U and 4U chassis form factors.

Connecting the enclosure to hosts

A host identifies an external port to which the storage system is attached. The external port may be a port in an I/O adapter (such as an FC HBA) in a server. Cable connections vary depending on configuration. This section describes host interface protocols supported by 6004 Series controller enclosures, while showing a few common cabling configurations.
NOTE: 6004 Series controllers use Unified LUN Presentation (ULP): a controller feature enabling a host to access mapped volumes through any controller host port.
ULP can show all LUNs through all host ports on both controllers, and the interconnect information is managed by the controller firmware. ULP appears to the host as an active-active storage system, allowing the host to select any available path to access the LUN, regardless of disk group ownership.
TIP: See “Using the Configuration Wizard” in the Storage Management Guide to initially configure the system or change system configuration settings (such as Configuring host ports).

CNC technology

AssuredSAN 6844/6854 models use Converged Network Controller technology, allowing you to select the desired host interface protocol(s) from the available FC or iSCSI host interface protocols supported by
AssuredSAN 6004 Series Setup Guide 51
Page 52
the system. The small form-factor pluggable (SFP transceiver or SFP) connectors used in CNC ports are further described in the subsections below. Also see CNC ports used for host connection on page 10 for more information concerning use of CNC ports.
NOTE: Controller modules are not shipped with pre-installed SFPs. Within your product kit, you will need to locate the qualified SFP options, and install them into the CNC ports. See Install an SFP transceiver on page 128.
IMPORTANT: Use the set host-port-mode CLI command to set the host interface protocol for CNC ports using qualified SFP options. AssuredSAN 6844/6854 models ship with CNC ports configured for FC. When connecting CNC ports to iSCSI hosts, you must use the CLI (not the SMC or RAIDar) to specify which ports will use iSCSI. It is best to do this before inserting the iSCSI SFPs into the CNC ports (see
Change the CNC port mode on page 71 for instructions).
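A hedged sketch of the CLI sequence follows. The set host-port-mode and show ports command names come from this guide, but the parameter value shown is an assumption; consult the CLI Reference Guide for the exact syntax supported by your firmware release.

```text
# Set the CNC ports to iSCSI before inserting the iSCSI SFPs
# (parameter value is an assumption; see the CLI Reference Guide)
set host-port-mode iSCSI

# Verify the resulting port configuration
show ports
```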
NOTE: AssuredSAN 6844/6854 models support the optionally-licensed AssuredRemote replication feature. Whereas linear storage supports FC and iSCSI host interface ports for replication, virtual supports iSCSI host interface ports for replication. Both linear and virtual storage support all qualified CNC options for host connection.
Fibre Channel protocol
AssuredSAN 6844/6854 controller enclosures support two controller modules using the Fibre Channel interface protocol for host connection. Each controller module provides four host ports designed for use with an FC SFP supporting data rates up to 16 Gbit/s. When configured with FC SFPs, 6844/6854 controller enclosures can also be cabled to support the optionally-licensed AssuredRemote replication feature via the FC ports (linear storage only).
The controller supports Fibre Channel Arbitrated Loop (public or private) or point-to-point topologies. Loop protocol can be used in a physical loop or in a direct connection between two devices. Point-to-point protocol is used to connect to a fabric switch. Point-to-point protocol can also be used for direct connection, and it is the only option supporting direct connection at 16 Gbit/s. See the set host-parameters command within the CLI Reference Guide for command syntax and details about parameter settings relative to supported link speeds.
Fibre Channel ports are used in either of two capacities:
• To connect two storage systems through a Fibre Channel switch for use of AssuredRemote replication (linear storage only).
• For attachment to FC hosts directly, or through a switch used for the FC traffic.
The first usage option requires valid licensing for the AssuredRemote replication feature, whereas the second option requires that the host computer supports FC and optionally, multipath I/O.
TIP: Use the SMC or RAIDar Configuration Wizard to set FC port speed. Within the Storage Management Guide, see “Configuring host ports.” Use the set host-parameters CLI command to set FC port options, and use the show ports CLI command to view information about host ports.
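For example, the commands below sketch setting the FC link speed and reviewing the result. The command names are cited in this guide, but the parameter spellings (speed, ports) and values are assumptions to be checked against the CLI Reference Guide.

```text
# Force 16 Gbit/s on controller A ports 0 and 1 (parameter names assumed)
set host-parameters speed 16g ports a0,a1

# Confirm negotiated speeds and port status
show ports
```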
10GbE iSCSI protocol
AssuredSAN 6844/6854 controller enclosures support two controller modules using the Internet SCSI interface protocol for host connection. Each controller module provides four host ports designed for use with a 10GbE iSCSI SFP supporting data rates up to 10 Gbit/s, using either one-way or mutual CHAP (Challenge-Handshake Authentication Protocol).
TIP: See the topics about Configuring CHAP, and CHAP and replication in the Storage Management Guide.
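As a sketch only: one-way CHAP is typically configured by defining a CHAP record for the initiator and then requiring CHAP on the iSCSI ports. The create chap-record and set iscsi-parameters commands and their parameters shown here are assumptions, and the initiator name and secret are placeholders; verify the syntax in the CLI Reference Guide and the CHAP topics of the Storage Management Guide.

```text
# Define a CHAP record for an initiator (name and secret are placeholders)
create chap-record name iqn.1991-05.com.example:host1 secret 123456abcDEF

# Require CHAP for iSCSI logins (parameter value assumed)
set iscsi-parameters chap enabled
```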
TIP: Use the SMC or RAIDar Configuration Wizard to set iSCSI port options. Within the Storage Management Guide, see “Configuring host ports.” Use the set host-parameters CLI command to set iSCSI port options, and use the show ports CLI command to view information about host ports.
The 10GbE iSCSI ports are used in either of two capacities:
• To connect two storage systems through a switch for use of AssuredRemote replication.
• For attachment to 10GbE iSCSI hosts directly, or through a switch used for the 10GbE iSCSI traffic.
The first usage option requires valid licensing for the AssuredRemote replication feature, whereas the second option requires that the host computer supports Ethernet, iSCSI, and optionally, multipath I/O.
1 Gb iSCSI protocol
AssuredSAN 6844/6854 controller enclosures support two controller modules using the Internet SCSI interface protocol for host port connection. Each controller module provides four iSCSI host ports configured with an RJ-45 SFP supporting data rates up to 1 Gbit/s, using either one-way or mutual CHAP.
TIP: See the topics about Configuring CHAP, and CHAP and replication in the Storage Management Guide.
TIP: Use the SMC or RAIDar Configuration Wizard to set iSCSI port options. Within the Storage Management Guide, see “Configuring host ports.” Use the set host-parameters CLI command to set iSCSI port options, and use the show ports CLI command to view information about host ports.
The 1 Gb iSCSI ports are used in either of two capacities:
• To connect two storage systems through a switch for use of AssuredRemote replication.
• For attachment to 1 Gb iSCSI hosts directly, or through a switch used for the 1 Gb iSCSI traffic.
The first usage option requires valid licensing for the AssuredRemote replication feature, whereas the second option requires that the host computer supports Ethernet, iSCSI, and optionally, multipath I/O.

HD mini-SAS technology

AssuredSAN 6544/6554 models use high-density mini-SAS (Serial Attached SCSI) interface protocol for host connection.
12 Gb HD mini-SAS
Each controller module provides four SFF-8644 HD mini-SAS host ports supporting data rates up to 12 Gbit/s. HD mini-SAS host ports connect to hosts or switches; they are not used for replication. Use a qualified SFF-8644 to SFF-8644 cable option when connecting to a 12 Gbit/s host. Use a qualified SFF-8644 to SFF-8088 option when connecting to a supported 6 Gbit/s host.

Connecting direct attach configurations

AssuredSAN 6004 Series controller enclosures support up to eight direct-connect server connections, four per controller module. Connect appropriate cables from the server’s HBAs to the controller module’s host ports as described below, and shown in the following illustrations.
NOTE: The 4U56 enclosure does not use the 0.5 m (1.64') length supported for 2U enclosures.
Fibre Channel host connection
To connect 6844/6854 controller modules supporting (4/8/16 Gb) FC host interface ports to a server HBA or switch—using the controller’s CNC ports—select a qualified FC SFP option.
Qualified options support cable lengths of 1 m (3.28'), 2 m (6.56'), 5 m (16.40'), 15 m (49.21'), 30 m (98.43'), and 50 m (164.04') for OM4 and OM3 multimode optical FC cables. A 0.5 m (1.64') cable length is also supported for OM3. In addition to providing host connection, these cables are used for connecting a local storage system to a remote storage system via a switch, to facilitate use of the optional AssuredRemote replication feature (linear storage only).
10GbE iSCSI host connection
To connect 6844/6854 controller modules supporting 10GbE iSCSI host interface ports to a server HBA or switch—using the controller’s CNC ports—select a qualified 10GbE SFP option.
Qualified options support cable lengths of 1 m (3.28'), 3 m (9.84'), 5 m (16.40'), and 7 m (22.97') for copper cables; and cable lengths of 0.65 m (2.13'), 1 m (3.28'), 1.2 m (3.94'), 3 m (9.84'), 5 m (16.40'), and 7 m (22.97') for direct attach copper (DAC) cables. In addition to providing host connection, these cables are used for connecting a local storage system to a remote storage system via a switch, to facilitate use of the optional AssuredRemote replication feature.
1 Gb iSCSI host connection
To connect 6844/6854 controller modules supporting 1Gb iSCSI host interface ports to a server HBA or switch—using the controller’s CNC ports—select a qualified 1 Gb RJ-45 copper SFP option supporting (CAT5-E minimum) Ethernet cables of the same lengths specified for 10GbE iSCSI above. In addition to providing host connection, these cables are used for connecting a local storage system to a remote storage system via a switch, to facilitate use of the optional AssuredRemote replication feature.
HD mini-SAS host connection
To connect 6544/6554 controller modules supporting HD mini-SAS host interface ports to a server HBA or switch —using the controller’s SFF-8644 dual HD mini-SAS host ports—select a qualified HD mini-SAS cable option. A qualified SFF-8644 to SFF-8644 cable option is used for connecting to a 12 Gbit/s enabled host; whereas a qualified SFF-8644 to SFF-8088 cable option is used for connecting to a 6 Gbit/s host. Qualified SFF-8644 to SFF-8644 options support cable lengths of 0.5 m (1.64'), 1 m (3.28'), 2 m (6.56'), and 4 m (13.12'). Qualified SFF-8644 to SFF-8088 options support cable lengths of 1 m (3.28'), 2 m (6.56'), 3 m (9.84'), and 4 m (13.12').
NOTE: Supported qualified cable options for host connection are subject to change.
NOTE: Keys to reading cabling diagrams for host connection:
• The diagrams that follow use a single representation for each CNC cabling example, because the CNC port locations and labeling are identical for each of the three interchangeable SFPs supported by the system.
• Within each cabling connection category, the HD mini-SAS model is shown beneath the CNC model.
Dual-controller configurations
A dual-controller configuration improves application availability because in the event of a controller failure, the affected controller fails over to the partner controller with little interruption to data flow. A failed controller can be replaced without the need to shut down the storage system.
In a dual-controller system, hosts use LUN-identifying information from both controllers to determine that up to four paths are available to a given storage volume. Assuming MPIO software is installed, a host can use any available data path to access a volume owned by either controller. The path providing the best performance is through host ports on the volume’s owning controller. Both controllers share one set of 1,024 LUNs (0-1,023) for use in mapping volumes to hosts (see “ULP” in the Storage Management Guide).
IMPORTANT: 6004 Series controller enclosures support dual-controller configuration only.
Single-controller support is provided only when a controller fails over to its partner controller. A controller module must be installed in each IOM slot to ensure sufficient airflow through the enclosure during operation.
The illustrations below show dual-controller configurations for 6004 Series controller enclosures equipped with either CNC ports or 12 Gb HD mini-SAS host ports (shown beneath the CNC model).
One server/one HBA/dual path
Figure 52 Connecting hosts: direct attach—one server/one HBA/dual path (2U)
Figure 53 Connecting hosts: direct attach—one server/one HBA/dual path (4U CNC)
Figure 54 Connecting hosts: direct attach—one server/one HBA/dual path (4U HD mini-SAS)
Two servers/one HBA per server/dual path
Figure 55 Connecting hosts: direct attach—two servers/one HBA per server/dual path (2U)
Figure 56 Connecting hosts: direct attach—two servers/one HBA per server/dual path (4U CNC)
Figure 57 Connecting hosts: direct attach—two servers/one HBA per server/dual path (4U HD mini-SAS)
Four servers/one HBA per server/dual path
Figure 58 Connecting hosts: direct attach—four servers/one HBA per server/dual path (2U)
Figure 59 Connecting hosts: direct attach—four servers/one HBA per server/dual path (4U CNC)
Figure 60 Connecting hosts: direct attach—four servers/one HBA per server/dual path (4U HD mini-SAS)

Connecting switch attach configurations

A switch attach solution—or SAN—places a switch between the servers and the controller enclosures within the storage system.
Using switches, a SAN shares a storage system among multiple servers, reducing the number of storage systems required for a particular environment. Using switches increases the number of servers that can be connected to the storage system. A 6004 Series controller enclosure supports 64 hosts.
The diagrams that follow show the SAN by using labeled switches and by using a cloud symbol to denote a SAN fabric.
Dual-controller configuration
Two servers/two switches
Figure 61 Connecting hosts: switch attach—two servers/two switches (2U CNC)
Figure 62 Connecting hosts: switch attach—two servers/two switches (4U CNC)
Four servers/multiple switches/SAN fabric
Figure 63 Connecting hosts: switch attach—four servers/multiple switches/SAN fabric (2U CNC)
Figure 64 Connecting hosts: switch attach—four servers/multiple switches/SAN fabric (4U CNC)
6004 Series controller enclosure iSCSI considerations
When installing a 6004 Series iSCSI controller enclosure, use at least three ports per server—two for the storage LAN, and one or more for the public LAN(s)—to ensure that the storage network is isolated from the other networks. The storage LAN is the network connecting the servers—via switch attach—to the controller enclosure (see Figure 61 on page 58 and Figure 63).
IP address scheme for the controller pair — two iSCSI ports per controller
The 6844/6854 can use port 2 of each controller as one failover pair, and port 3 of each controller as a second failover pair for iSCSI traffic. Port 2 of each controller must be in the same subnet, and port 3 of each controller must be in a second subnet. See Controller enclosure — rear panel layout on page 19 for iSCSI port numbering.
For example (with a netmask of 255.255.255.0):
Controller A port 2: 10.10.10.100
Controller A port 3: 10.11.10.120
Controller B port 2: 10.10.10.110
Controller B port 3: 10.11.10.130
The 6844/6854 can use port 0 of each controller as one failover pair, and port 1 of each controller as a second failover pair. Port 0 of each controller must be in the same subnet, and port 1 of each controller must be in a second subnet. See Controller enclosure — rear panel layout on page 19 for iSCSI port numbering.
For example (with a netmask of 255.255.255.0):
Controller A port 0: 10.10.10.100
Controller A port 1: 10.11.10.120
Controller B port 0: 10.10.10.110
Controller B port 1: 10.11.10.130
IP address scheme for the controller pair — four iSCSI ports per controller
When all CNC ports are configured for iSCSI, the scheme is similar to the one described for two ports above. See Controller enclosure — rear panel layout on page 19 for iSCSI port numbering.
For example (with a netmask of 255.255.255.0):
Controller A port 0: 10.10.10.100
Controller A port 1: 10.11.10.120
Controller A port 2: 10.10.10.110
Controller A port 3: 10.11.10.130
Controller B port 0: 10.10.10.140
Controller B port 1: 10.11.10.150
Controller B port 2: 10.10.10.160
Controller B port 3: 10.11.10.170
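The subnet rules above—each failover pair in one subnet, different pairs in a second subnet—can be checked mechanically. The short Python sketch below uses the example addresses from this section; the port labels are illustrative only, and nothing is read from a live system.

```python
import ipaddress

# Example values from the two-port scheme above; labels are illustrative.
netmask = "255.255.255.0"
ports = {
    ("A", 0): "10.10.10.100",
    ("A", 1): "10.11.10.120",
    ("B", 0): "10.10.10.110",
    ("B", 1): "10.11.10.130",
}

def subnet_of(ip, mask=netmask):
    """Return the subnet an address belongs to under the given mask."""
    return ipaddress.ip_network(f"{ip}/{mask}", strict=False)

# Each failover pair (same port number on controllers A and B) must share
# a subnet; different port numbers must sit in different subnets.
assert subnet_of(ports[("A", 0)]) == subnet_of(ports[("B", 0)])
assert subnet_of(ports[("A", 1)]) == subnet_of(ports[("B", 1)])
assert subnet_of(ports[("A", 0)]) != subnet_of(ports[("A", 1)])
print("failover pairs are correctly subnetted")
```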
In addition to setting the port-specific options described above, you can view the settings using the GUI.
• If using the SMC, in the System topic, select Action > Set Up Host Ports. The Host Ports Settings panel opens, allowing you to access host connection settings.
• If using RAIDar, in the Configuration View panel, right-click the system and select Configuration > System Settings > Host Interfaces. The Configure Host Interface panel opens, allowing you to access host connection settings.

Connecting a management host on the network

The management host directly manages storage systems out-of-band over an Ethernet network.
1. Connect an RJ-45 Ethernet cable to the network port on each controller.
2. Connect the other end of each Ethernet cable to a network that your management host can access
(preferably on the same subnet).
NOTE: Connections to this device must be made with shielded cables—grounded at both ends—with metallic RFI/EMI connector hoods, in order to maintain compliance with NEBS and FCC Rules and Regulations. See the Product Regulatory Compliance and Safety document (included in your product's ship kit).
Alternatively, you can access the document online. See Dot Hill's customer resource center (CRC) web site for additional information: https://crc.dothill.com

Connecting two storage systems to replicate volumes

AssuredRemote™ replication is a licensed feature for disaster recovery, providing access to either of the following software product versions:
• SMC (v3) supports replication for virtual storage environments.
• RAIDar (v2) supports replication for linear storage environments.
IMPORTANT: These two replication models are mutually exclusive. Choose the method that applies to your storage system. For more information, see replication topics in the Storage Management Guide.
The replication feature performs asynchronous replication of block-level data from a volume in a primary system to a volume in a secondary system by creating an internal snapshot of the primary volume, and copying the snapshot data to the secondary system via FC (linear storage only) or iSCSI links.
The two associated standard volumes form a replication set, and only the primary volume (source of data) can be mapped for access by a server. Both systems must be licensed to use the replication feature, and must be connected through switches to the same fabric or network (i.e., no direct attach). The server accessing the replication set need only be connected to the primary system. If the primary system goes offline, a connected server can access the replicated data from the secondary system.
Replication configuration possibilities are many, and can be cabled—in switch attach fashion—to support the CNC-based systems on the same network, or on physically-split networks (SAS systems do not support
replication). As you consider the physical connections of your system—specifically connections for replication—keep several important points in mind:
• Ensure that controllers have connectivity between systems, whether local or remote.
• Whereas linear storage supports FC and iSCSI host interface ports for replication, virtual storage supports iSCSI host interface ports for replication. Both linear and virtual storage support all qualified CNC options for host connection.
• If using the RAIDar (v2) user interface, be sure of the desired link type before creating the linear replication set, because you cannot change the replication link type after creating the replication set.
• Assign specific ports for replication whenever possible. By specifically assigning ports available for replication, you free the controller from scanning and assigning the ports at the time replication is performed.
• Ensure that all ports assigned for replication are able to communicate appropriately with the replication system (see the CLI Reference Guide for more information):
  • For linear replication, use the verify remote-link command.
  • For virtual replication, use the query peer-connection command.
• Allow two ports to perform replication. This permits the system to balance the load across those ports as I/O demands rise and fall. On dual-controller enclosures, if some of the volumes replicated are owned by controller A and others are owned by controller B, then allow at least one port for replication on each controller module—and possibly more than one port per controller module—depending on replication traffic load.
• For the sake of system security, do not unnecessarily expose the controller module network port to an external network connection.
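A hedged example of the two verification commands named above follows. Both command names appear in this guide, but any additional arguments (such as a peer-connection name, shown as a placeholder) are assumptions; confirm the exact syntax in the CLI Reference Guide.

```text
# Linear replication: check link status to the remote system
# (additional parameters may be required; see the CLI Reference Guide)
verify remote-link

# Virtual replication: check connectivity to a peer connection
# (the peer-connection name Peer1 is a placeholder)
query peer-connection Peer1
```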
Conceptual cabling examples are provided addressing cabling on the same network and cabling relative to physically-split networks. The cabling examples provided apply to both linear replication and virtual replication.
IMPORTANT: Controller module firmware must be compatible on all systems used for replication. For license information, see the Storage Management Guide.

Cabling for replication

This section shows example replication configurations for CNC-based controller enclosures. The following illustrations provide conceptual examples of cabling supporting linear replication and virtual replication. Blue cables show I/O traffic and green cables show replication traffic.
NOTE: Simplified versions of controller enclosures are used in cabling illustrations to show either FC or iSCSI host interface protocol, given that only the external connectors used in the host interface ports differ.
Cabling for replication diagrams pertain to linear replication and virtual replication.
• Linear replication supports FC and iSCSI host interface protocols.
• Virtual replication supports iSCSI host interface protocol.
• The 2U enclosure rear panel view represents 6844 models.
• The 4U enclosure rear panel view represents 6854 models.
Once the CNC systems are physically cabled, see the Storage Management Guide or online help for information about configuring, provisioning, and using the optional replication feature. Refer to the replication feature topic pertaining to your environment (linear replication or virtual replication).

CNC ports and replication

CNC controller modules can use qualified SFP options of the same type, or they can use a combination of qualified SFP options supporting different host interface protocols. If you use a combination of different protocols, then CNC ports 0 and 1 must be set to FC (either both 16 Gbit/s or both 8 Gbit/s), and CNC ports 2 and 3 must be set to iSCSI (either both 10GbE or both 1 Gbit/s).
In linear storage environments [RAIDar (v2)], each CNC port can perform I/O or replication. In combination environments, one interface—for example FC—might be used for I/O, and the other interface type—10GbE or 1 Gb iSCSI—might be used for replication. In virtual storage environments [SMC (v3)], each CNC port can perform I/O, but replication traffic is supported by iSCSI host interface ports (either both 10GbE or both 1 Gbit/s).
Dual-controller configuration
Each of the following diagrams shows the rear panel of two 6844 controller enclosures equipped with dual-controller modules.
IMPORTANT: Whereas linear storage supports FC and iSCSI host interface protocols for replication, virtual storage supports iSCSI host interface protocol for replication. Both linear and virtual storage support all qualified CNC options for host connection.
Multiple servers/single network
Figure 65 shows the rear panel of two 6844 controller enclosures with both I/O and replication occurring
on the same physical network.
Figure 65 Connecting two storage systems for replication: multiple servers/one switch/one location
Figure 66 shows the rear panel of two 6854 controller enclosures with both I/O and replication occurring
on the same physical network.
Figure 66 Connecting two storage systems for replication: multiple servers/one switch/one location
Figure 67 and Figure 68 on page 65 show CNC host interface connections and CNC-based replication,
with I/O and replication occurring on different networks. For optimal protection, use two switches. Connect two ports from each controller module to the first switch to facilitate I/O traffic, and connect two ports from each controller module to the second switch to facilitate replication. Using two switches in tandem avoids the potential single point of failure inherent to using a single switch; however, if one switch fails, either I/O or replication will fail, depending on which of the switches fails.
Figure 67 Connecting two storage systems for replication: multiple servers/switches/one location
Figure 68 Connecting two storage systems for replication: multiple servers/switches/one location
Virtual Local Area Network (VLAN) and zoning can be employed to provide separate networks for iSCSI and FC, respectively. Whether using a single switch or multiple switches for a particular interface, you can create a VLAN or zone for I/O and a VLAN or zone for replication to isolate I/O traffic from replication traffic. Since each switch would include both VLANs or zones, the configuration would function as multiple networks.
Multiple servers/different networks/multiple switches
Figure 69 shows the rear panel of two 6844 controller enclosures with I/O and replication occurring on
different networks.
Figure 69 Connecting two storage systems for replication: multiple servers/switches/two locations
Figure 70 shows the rear panel of two 6854 controller enclosures with I/O and replication occurring on
different networks.
Figure 70 Connecting two storage systems for replication: multiple servers/switches/two locations
The diagram entitled “Connecting two storage systems for replication: multiple servers/SAN fabric/two
locations” (Figure 71 on page 67) shows the rear-panel of two 6844 controller enclosures with both I/O
and replication occurring on different networks.
This diagram represents two branch offices cabled to enable disaster recovery and backup. In case of failure at either the local site or the remote site, you can fail over the application to the available site.
Key to Figure 71 — Server codes: A1 = "A" file servers; A2 = "A" application servers; B1 = "B" file servers; B2 = "B" application servers. Data restore modes: replicate back over WAN, or replicate via physical media transfer. Failover modes: VMware, or Hyper-V failover to servers.
Figure 71 Connecting two storage systems for replication: multiple servers/SAN fabric/two locations
Although not shown in the preceding cabling examples, you can cable replication-enabled 6844 systems and compatible 6004 Series systems—via switch attach—for performing replication tasks.

Updating firmware

After installing the hardware and powering on the storage system components for the first time, verify that the controller modules, expansion modules, and disk drives are using the current firmware release.
If using the SMC (v3), in the System topic, select Action > Update Firmware.
The Update Firmware panel opens. The Update Controller Module tab shows versions of firmware components currently installed in each controller.
NOTE: The SMC does not provide a check-box for enabling or disabling Partner Firmware Update for the partner controller. To enable or disable the setting, use the set advanced-settings command, and set the partner-firmware-upgrade parameter. See the CLI Reference Guide for more information about command parameter syntax.
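For example, a sketch of toggling the setting from the CLI. The set advanced-settings command and the partner-firmware-upgrade parameter are named in this guide, but the enabled/disabled values are assumptions to confirm against the CLI Reference Guide.

```text
# Enable Partner Firmware Update so a controller update propagates
# to the partner controller automatically
set advanced-settings partner-firmware-upgrade enabled

# Or disable it to update each controller independently
set advanced-settings partner-firmware-upgrade disabled
```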
If using RAIDar (v2), right-click the system in the Configuration View panel, and select Tools > Update Firmware.
The Update Firmware panel displays the currently installed firmware versions, and allows you to update them.
Optionally, you can update firmware using FTP (File Transfer Protocol) as described in the Storage Management Guide.
IMPORTANT: See the “Updating firmware” topic in the Storage Management Guide before performing a firmware update.

Obtaining IP values

You can configure addressing parameters for each controller module’s network port. You can set static IP values or use DHCP.
TIP: See the “Configuring network ports” topic in the Storage Management Guide.

Setting network port IP addresses using DHCP

In DHCP mode, network port IP address, subnet mask, and gateway values are obtained from a DHCP server if one is available. If a DHCP server is unavailable, current addressing is unchanged. You must have some means of determining what addresses have been assigned, such as the list of bindings on the DHCP server.
Because DHCP is disabled by default in 6004 Series systems, you must either use the CLI to change controller IP address settings, or use the Configuration Wizard as described in the Storage Management Guide or online help.

Setting network port IP addresses using the CLI port and cable

If you did not use DHCP to set network port IP values, set them manually (default method) as described below. If you are using the USB CLI port and cable, you will need to enable the port for communication (also see Using the CLI port and cable—known issues on Windows on page 127).
Network ports on controller module A and controller module B are configured with the following default values:
• Network port IP address: 10.0.0.2 (controller A), 10.0.0.3 (controller B)
• IP subnet mask: 255.255.255.0
• Gateway IP address: 10.0.0.1
If the default IP addresses are not compatible with your network, you must set an IP address for each network port using the CLI embedded in each controller module. The CLI enables you to access the system using the USB (Universal Serial Bus) communication interface and terminal emulation software.
NOTE: If you are using the mini USB CLI port and cable, see Appendix D - USB device connection:
• Windows customers should download and install the device driver as described in Obtaining the software download on page 126.
• Linux customers should prepare the USB port as described in Setting parameters for the device driver on page 127.
Use the CLI commands described in the steps below to set the IP address for the network port on each controller module.
Once new IP addresses are set, you can change them as needed using the SMC or RAIDar. Be sure to change the IP address via the SMC or RAIDar before changing the network configuration. See Accessing
the SMC or RAIDar on page 73 for more information concerning the web-based storage management
application.
1. From your network administrator, obtain an IP address, subnet mask, and gateway address for controller A and another for controller B. Record these IP addresses so you can specify them whenever you manage the controllers using the SMC, RAIDar, or the CLI.
2. Use the provided USB cable to connect controller A to a USB port on a host computer. The USB mini 5 male connector plugs into the CLI port as shown in Figure 72 on page 69 (generic 6004 Series controller module shown).
Figure 72 Connecting a USB cable to the CLI port
3. Enable the CLI port for subsequent communication:
• Linux customers should enter the command syntax provided in Setting parameters for the device driver on page 127.
• Windows customers should locate the downloaded device driver described in Obtaining the software download on page 126, and follow the instructions provided for proper installation.
4. Start and configure a terminal emulator, such as HyperTerminal or VT-100, using the display settings in Table 6 and the connection settings in Table 7 (also, see the note following this procedure).
Table 6 Terminal emulator display settings
Parameter Value
Terminal emulation mode VT-100 or ANSI (for color support)
Font Terminal
Translations None
Columns 80
Table 7 Terminal emulator connection settings
Parameter Value
Connector COM3 (for example)1, 2
Baud rate 115,200
Data bits 8
Parity None
Stop bits 1
Flow control None
1 Your server or laptop configuration determines which COM port is used for Disk Array USB Port.
2 Verify the appropriate COM port for use with the CLI.
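If you prefer to script the serial connection rather than use a terminal emulator, the Table 7 settings map directly onto a serial library's parameters. The following sketch uses the third-party pyserial library; the port name is only an example and depends on your host (see the footnotes above):

```python
# Table 7 settings expressed as pyserial parameters: 115,200 baud,
# 8 data bits, no parity, 1 stop bit (flow control defaults to off).
SETTINGS = dict(baudrate=115200, bytesize=8, parity="N", stopbits=1)

def open_cli_port(port="COM3"):
    """Open the Disk Array USB Port; 'COM3' is only an example name."""
    import serial  # third-party: pip install pyserial
    return serial.Serial(port, timeout=1, **SETTINGS)
```

On Linux the port typically appears as a /dev/ttyACM or /dev/ttyUSB device rather than COMn.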
5. In the terminal emulator, connect to controller A.
6. Press Enter to display the CLI prompt (#).
The CLI displays the system version, MC version, and login prompt:
a. At the login prompt, enter the default user manage.
b. Enter the default password !manage.
If the default user or password—or both—have been changed for security reasons, enter the secure login credentials instead of the defaults shown above.
NOTE: The following CLI commands enable you to set the management mode to v3 or v2:
• Use set protocols to change the default management mode.
• Use set cli-parameters to change the current management mode for the CLI session.
The system defaults to v3 for new customers and v2 for existing users (see the CLI Reference Guide for more information).
7. At the prompt, enter the following command to set the values you obtained in step 1 for each Network port, first for controller A, and then for controller B:
set network-parameters ip address netmask netmask gateway gateway controller a|b
where:
address is the IP address of the controller
netmask is the subnet mask
gateway is the IP address of the subnet router
a|b specifies the controller whose network parameters you are setting.
For example:
# set network-parameters ip 192.168.0.10 netmask 255.255.255.0 gateway
192.168.0.1 controller a
# set network-parameters ip 192.168.0.11 netmask 255.255.255.0 gateway
192.168.0.1 controller b
8. Enter the following command to verify the new IP addresses:
show network-parameters
Network parameters, including the IP address, subnet mask, and gateway address are displayed for each controller.
9. Use the ping command to verify connectivity to the gateway address.
For example:
# ping 192.168.0.1
Info: Pinging 192.168.0.1 with 4 packets.
Success: Command completed successfully. - The remote computer responded with 4 packets. (2011-12-19 10:20:37)
10. In the host computer's command window, type the following command to verify connectivity, first for
controller A and then for controller B:
ping controller-IP-address
If you cannot access your system for at least three minutes after changing the IP address, you might need to restart the Management Controller(s) using the serial CLI.
When you restart a Management Controller, communication with it is temporarily lost until it successfully restarts.
Enter the following command to restart the Management Controller in both controllers:
restart mc both
IMPORTANT: When configuring an iSCSI system or a system using a combination of FC and iSCSI SFPs, do not restart the Management Controller or exit the terminal emulator session until configuring the CNC ports as described in Change the CNC port mode on page 71.
11. When you are done using the CLI, exit the emulator.
12. Retain the IP addresses (recorded in step 1) for accessing and managing the controllers using the SMC, RAIDar, or the CLI.
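If you are preparing the step 7 commands ahead of time, they can be generated and sanity-checked on the host first. This is only an illustrative sketch using Python's standard ipaddress module, with the example addresses from this procedure; it does not talk to the array:

```python
import ipaddress

# Example values from step 7; substitute the addresses obtained in step 1.
netmask = "255.255.255.0"
gateway = "192.168.0.1"
controllers = {"a": "192.168.0.10", "b": "192.168.0.11"}

# Confirm each controller address shares the gateway's subnet, then
# print the corresponding 'set network-parameters' command to run.
subnet = ipaddress.ip_network(f"{gateway}/{netmask}", strict=False)
for ctrl, ip in controllers.items():
    assert ipaddress.ip_address(ip) in subnet, f"{ip} not in {subnet}"
    print(f"set network-parameters ip {ip} netmask {netmask} "
          f"gateway {gateway} controller {ctrl}")
```

A mismatch between a controller address and the gateway's subnet is caught here before any value reaches the array.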
NOTE: Using HyperTerminal with the CLI on a Microsoft Windows host:
On a host computer connected to a controller module’s mini-USB CLI port, incorrect command syntax in a HyperTerminal session can cause the CLI to hang. To avoid this problem, use correct syntax, use a different terminal emulator, or connect to the CLI using telnet rather than the mini-USB cable.
Be sure to close the HyperTerminal session before shutting down the controller or restarting its Management Controller. Otherwise, the host’s CPU cycles may rise unacceptably.
If communication with the CLI is disrupted when using an out-of-band cable connection, communication can sometimes be restored by disconnecting and reattaching the mini-USB CLI cable as described in step 2 and Figure 72 on page 69.

Change the CNC port mode

This subsection applies to 6844/6854 models only. While the USB cable is still connected and the terminal emulator session remains active, perform the following steps to change the CNC port mode from the default setting (FC), to either iSCSI or FC-and-iSCSI used in combination.
When using FC SFPs and iSCSI SFPs in combination, host ports 0 and 1 are set to FC (either both 16 Gbit/s or both 8 Gbit/s), and host ports 2 and 3 must be set to iSCSI (either both 10GbE or both 1 Gbit/s).
Set CNC port mode to iSCSI
To set the CNC port mode for use with iSCSI SFPs, run the following command at the command prompt:
set host-port-mode iSCSI
The command notifies you that it will change host port configuration, stop I/O, and restart both controllers. When asked if you want to continue, enter y to change the host port mode to use iSCSI SFPs.
Once the set host-port-mode command completes, it will notify you that the specified system host port mode was set, and that the command completed successfully.
Continue with step 11 of Setting network port IP addresses using the CLI port and cable on page 68.
Set CNC port mode to FC and iSCSI
To set the CNC port mode for use with FC SFPs and iSCSI SFPs in combination, run the following command at the command prompt:
set host-port-mode FC-and-iSCSI
The command notifies you that it will change host port configuration, stop I/O, and restart both controllers. When asked if you want to continue, enter y to change the host port mode to use FC and iSCSI SFPs.
Once the set host-port-mode command completes, it will notify you that the specified system host port mode was set, and that the command completed successfully.
Continue with step 11 of Setting network port IP addresses using the CLI port and cable on page 68.
NOTE: After using either of the CLI command sequences shown above, you may see events stating that the SFPs installed are not compatible with the protocol set for the host ports. The new host port mode setting will be synchronized with the qualified SFP option once the controller modules restart.
See Appendix E—SFP option for CNC ports for instructions about locating and installing your qualified SFP transceivers within the CNC ports.

Configure the system

After changing the CNC port mode, you can invoke the SMC or RAIDar, and use the Configuration Wizard to initially configure the system, or change system configuration settings as described in the Storage Management Guide and Basic operation.

4 Basic operation

Verify that you have successfully completed the sequential “Installation Checklist” instructions in Table 3 on page 26. Once you have successfully completed steps 1 through 8 therein, you can access the management interfaces using your web browser to complete the system setup.

Accessing the SMC or RAIDar

Upon completing the hardware installation, you can access the controller module’s web-based management interface [either the SMC (v3) or RAIDar (v2)] to configure, monitor, and manage the storage system. Invoke your web browser, and enter the IP address of the controller module’s network port in the address field (obtained during completion of “Installation Checklist” step 9), then press Enter. To sign in to the SMC or RAIDar, use the default user name manage and password !manage. If the default user or password—or both—have been changed for security reasons, enter the secure login credentials instead of the defaults shown above. This brief sign-in discussion assumes proper web browser setup.
IMPORTANT: For detailed information on accessing and using either the SMC or RAIDar, see the “Getting Started” section in the web-posted Storage Management Guide.
In addition to summarizing the processes to configure and provision a new system for the first time—using the wizards—the Getting Started section provides instructions for signing in to the SMC or RAIDar, introduces key system concepts, addresses browser setup, and provides tips for using the main window and the help window.
TIP: After signing in to either the SMC or RAIDar, you can use online help as an alternative to consulting the Storage Management Guide.

Configuring and provisioning the storage system

Once you have familiarized yourself with either the SMC or RAIDar, use the interface to configure and provision the storage system. If you are licensed to use the optional AssuredRemote feature, you may also need to set up the storage systems for replication. Refer to the following topics within the Storage Management Guide or online help:
• Getting started
• Configuring the system
• Provisioning the system
• Using AssuredRemote to replicate volumes
NOTE: See the “Installing a license” topic within the Storage Management Guide for instructions about
creating a temporary license, or installing a permanent license.
IMPORTANT: If the system is used in a VMware environment, set the system’s Missing LUN Response option to use its Illegal Request setting. To do so, see either the configuration topic “Changing the missing LUN response” in the Storage Management Guide or the command topic “set-advanced-settings” in the CLI Reference Guide.
5 Troubleshooting
These procedures are intended to be used only during initial configuration, for the purpose of verifying that hardware setup is successful. They are not intended to be used as troubleshooting procedures for configured systems using production data and I/O.
NOTE: For further troubleshooting help, after initial setup and when user data is present, contact Dot Hill support as specified at https://crc.dothill.com

USB CLI port connection

AssuredSAN 6004 Series controllers feature a CLI port employing a mini-USB Type B form factor. If you encounter problems communicating with the port after cabling your computer to the USB device, you may need to either download a device driver (Windows), or set appropriate parameters via an operating system command (Linux). See Appendix D for more information.

Fault isolation methodology

AssuredSAN 6004 Series storage systems provide many ways to isolate faults. This section presents the basic methodology used to locate faults within a storage system, and to identify the pertinent FRUs (Field Replaceable Units) affected.
As noted in Basic operation on page 73, use the SMC or RAIDar to configure and provision the system upon completing the hardware installation. As part of this process, configure and enable event notification so the system will notify you when a problem occurs that is at or above the configured severity (see “Using the Configuration Wizard > Configuring event notification” within the Storage Management Guide). With event notification configured and enabled, you can follow the recommended actions in the notification message to resolve the problem, as further discussed in the options presented below.

Basic steps

The basic fault isolation steps are listed below:
• Gather fault information, including using system LEDs (see Gather fault information on page 75)
• Determine where in the system the fault is occurring (see Determine where the fault is occurring on page 75)
• Review event logs (see Review the event logs on page 76)
• If required, isolate the fault to a data path component or configuration (see Isolate the fault on page 76)
Cabling systems to enable use of the licensed AssuredRemote feature—to replicate volumes—is another important fault isolation consideration pertaining to initial system installation. See Isolating replication faults on page 85 for more information about troubleshooting during initial setup.

Options available for performing basic steps

When performing fault isolation and troubleshooting steps, select the option or options that best suit your site environment. Use of any option (four options are described below) is not mutually exclusive to the use of another option. You can use the SMC or RAIDar to check the health icons/values for the system and its components to ensure that everything is okay, or to drill down to a problem component. If you discover a problem, either the SMC or RAIDar and the CLI provide recommended-action text online. Options for performing basic steps are listed according to frequency of use:
• Use the SMC or RAIDar
• Use the CLI
• Monitor event notification
• View the enclosure LEDs
Use the SMC or RAIDar
The SMC and RAIDar use health icons to show OK, Degraded, Fault, or Unknown status for the system and its components. The SMC and RAIDar enable you to monitor the health of the system and its components. If any component has a problem, the system health will be Degraded, Fault, or Unknown. Use the web application’s GUI to drill down to find each component that has a problem, and follow actions in the component Health Recommendations field to resolve the problem.
Use the CLI
As an alternative to using the SMC or RAIDar, you can run the show system command in the CLI to view the health of the system and its components. If any component has a problem, the system health will be Degraded, Fault, or Unknown, and those components will be listed as Unhealthy Components. Follow the recommended actions in the component Health Recommendations field to resolve the problem.
Monitor event notification
With event notification configured and enabled, you can view event logs to monitor the health of the system and its components. If a message tells you to check whether an event has been logged, or to view information about an event in the log, you can do so using the SMC, RAIDar, or the CLI. Using either the SMC or RAIDar, you would view the event log and then click on the event message to see detail about that event. Using the CLI, you would run the show events detail command (with additional parameters to filter the output) to see the detail for an event.
View the enclosure LEDs
You can view the LEDs on the hardware (while referring to LED descriptions for your enclosure model) to identify component status. If a problem prevents access to the SMC, RAIDar or the CLI, this is the only option available. However, monitoring/management is often done at a management console using storage management interfaces, rather than relying on line-of-sight to LEDs of racked hardware components.

Performing basic steps

You can use any of the available options described above in performing the basic steps comprising the fault isolation methodology.
Gather fault information
When a fault occurs, it is important to gather as much information as possible. Doing so will help you determine the correct action needed to remedy the fault.
Begin by reviewing the reported fault:
• Is the fault related to an internal data path or an external data path?
• Is the fault related to a hardware component such as a disk drive module, controller module, or power supply unit?
By isolating the fault to one of the components within the storage system, you will be able to determine the necessary corrective action more quickly.
Determine where the fault is occurring
Once you have an understanding of the reported fault, review the enclosure LEDs. The enclosure LEDs are designed to immediately alert users of any system faults, and might be what alerted the user to a fault in the first place.
When a fault occurs, the Fault ID status LED on an enclosure’s right ear illuminates (see the diagram pertaining to your product’s front panel components on page 13 (2U48) or page 16 (4U56)). Check the LEDs on the back of the enclosure to narrow the fault to a FRU, connection, or both. The LEDs also help you identify the location of a FRU reporting a fault.
Use the SMC or RAIDar to verify any faults found while viewing the LEDs. The SMC and RAIDar are also good tools to use in determining where the fault is occurring if the LEDs cannot be viewed due to the location of the system. This web application provides you with a visual representation of the system and where the fault is occurring. The SMC and RAIDar can also provide more detailed information about FRUs, data, and faults.
Review the event logs
The event logs record all system events. Each event has a numeric code that identifies the type of event that occurred, and has one of the following severities:
• Critical. A failure occurred that may cause a controller to shut down. Correct the problem immediately.
• Error. A failure occurred that may affect data integrity or system stability. Correct the problem as soon as possible.
• Warning. A problem occurred that may affect system stability, but not data integrity. Evaluate the problem and correct it if necessary.
• Informational. A configuration or state change occurred, or a problem occurred that the system corrected. No immediate action is required.
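When filtering exported logs in a script, the four severities above form a natural urgency ordering. A minimal illustrative sketch (the ranking helper and sample entries are hypothetical, not a product API; the codes are placeholders, not real event codes):

```python
# Urgency ranking for the severities described above; lower is more urgent.
SEVERITY_RANK = {"Critical": 0, "Error": 1, "Warning": 2, "Informational": 3}

def most_urgent(events):
    """Return the event whose severity demands attention first."""
    return min(events, key=lambda e: SEVERITY_RANK[e["severity"]])

# Hypothetical log entries.
events = [{"code": 1, "severity": "Warning"},
          {"code": 2, "severity": "Error"}]
print(most_urgent(events)["severity"])  # prints "Error"
```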
See the Event Descriptions Reference Guide for information about specific events. See Dot Hill’s Customer Resource Center web site for additional information: https://crc.dothill.com
It is very important to review the event logs, not only to identify the fault, but also to search for events that might have caused the fault to occur. For example, a host could lose connectivity to a virtual disk or disk group if a user changes channel settings without taking the assigned storage resources into consideration. In addition, the type of fault can help you isolate the problem to either hardware or software.
Isolate the fault
Occasionally, it might become necessary to isolate a fault. This is particularly true with data paths, due to the number of components comprising the data path. For example, if a host-side data error occurs, it could be caused by any of the components in the data path: controller module, cable, or data host.

If the enclosure does not initialize

It may take up to two minutes for all enclosures to initialize. If an enclosure does not initialize:
• Perform a rescan
• Power cycle the system
• Make sure the power cord is properly connected, and check the power source to which it is connected
• Check the event log for errors

Correcting enclosure IDs

When installing a system with drive enclosures attached, the enclosure IDs might not agree with the physical cabling order. This is because the controller might have been previously attached to enclosures in a different configuration, and it attempts to preserve the previous enclosure IDs, if possible. To correct this condition, make sure that both controllers are up, and perform a rescan using the SMC, RAIDar, or the CLI. This will reorder the enclosures, but it can take up to two minutes for the enclosure IDs to be corrected.
To perform a rescan using the CLI, type the following command:
rescan
To rescan using the SMC (v3):
1. Verify that both controllers are operating normally.
2. Do one of the following:
• Point to the System tab and select Rescan Disk Channels.
• In the System topic, select Action > Rescan Disk Channels.
3. Click Rescan.
To rescan using RAIDar (v2):
1. Verify that the controllers are operating normally.
2. In the Configuration View panel, right-click the system and select Tools > Rescan Disk Channels.
3. Click Rescan.
NOTE: The enclosure-ID reordering action applies only to Dual Controller mode. If only one controller is available, due to either a Single Controller configuration or controller failure, a manual rescan will not reorder the drive enclosure IDs.

Stopping I/O

When troubleshooting disk drive and connectivity faults, stop I/O to the affected disk groups from all hosts and remote systems as a data protection precaution. As an additional data protection precaution, it is helpful to conduct regularly scheduled backups of your data.
IMPORTANT: Stopping I/O to a disk group is a host-side task, and falls outside the scope of this document.
When on-site, you can verify that there is no I/O activity by briefly monitoring the system LEDs; however, when accessing the storage system remotely, this is not possible. Remotely, you can use the show disk-group-statistics command to determine if input and output has stopped. Perform these steps:
1. Using the CLI, run the show disk-group-statistics command.
The Reads and Writes outputs show the number of these operations that have occurred since the statistic was last reset, or since the controller was restarted. Record the numbers displayed.
2. Run the show disk-group-statistics command a second time.
This provides you a specific window of time (the interval between requesting the statistics) to determine if data is being written to or read from the disk group. Record the numbers displayed.
3. To determine if any reads or writes occurred during this interval, subtract the set of numbers you recorded in step 1 from the numbers you recorded in step 2.
• If the resulting difference is zero, then I/O has stopped.
• If the resulting difference is not zero, a host is still reading from or writing to this disk group. Continue to stop I/O from hosts, and repeat step 1 and step 2 until the difference in step 3 is zero.
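The subtraction in step 3 can be expressed as a small script if you are capturing the counters remotely. The sample numbers below are hypothetical; substitute the Reads and Writes values you recorded from show disk-group-statistics:

```python
# Hypothetical Reads/Writes counters recorded in steps 1 and 2.
sample1 = {"reads": 102345, "writes": 87650}
sample2 = {"reads": 102345, "writes": 87650}

# A zero delta for both counters means I/O to the disk group has stopped.
delta = {k: sample2[k] - sample1[k] for k in sample1}
if all(v == 0 for v in delta.values()):
    print("I/O has stopped")
else:
    print(f"I/O still active: {delta}")
```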
NOTE: See the CLI Reference Guide for additional information.

Diagnostic steps

This section describes possible reasons and actions to take when an LED indicates a fault condition during initial system setup. See Appendix A – LED descriptions for descriptions of all LED statuses.
NOTE: Once event notification is configured and enabled using the SMC or RAIDar, you can view event logs to monitor the health of the system and its components using the GUI.
In addition to monitoring LEDs via line-of-sight observation of the racked hardware components when performing diagnostic steps, you can also monitor the health of the system and its components using the management interfaces previously discussed. Bear this in mind when reviewing the Actions column in the following diagnostics tables, and when reviewing the step procedures provided later in this chapter.

Is the enclosure front panel Fault/Service Required LED amber?

Answer Possible reasons Actions
No System functioning properly. No action required.
Yes A fault condition exists/occurred.
If installing an I/O module FRU, the module has gone online and likely failed its self-test.
Actions:
• Check the LEDs on the back of the controller to narrow the fault to a FRU, connection, or both.
• Check the event log for specific information regarding the fault; follow any Recommended Actions.
• If installing an IOM FRU, try removing and reinstalling the new IOM, and check the event log for errors.
• If the above actions do not resolve the fault, isolate the fault, and contact an authorized service provider for assistance. Replacement may be necessary.
Table 8 Diagnostics LED status: Front panel “Fault/Service Required”

Is the controller back panel FRU OK LED off?

Answer Possible reasons Actions
No (blinking) System functioning properly. System is booting. No action required. Wait for system to boot.
Yes The controller module is not powered on.
The controller module has failed.
Actions:
• Check that the controller module is fully inserted and latched in place, and that the enclosure is powered on.
• Check the event log for specific information regarding the failure.
Table 9 Diagnostics LED status: Rear panel “FRU OK”

Is the controller back panel Fault/Service Required LED amber?

Answer Possible reasons Actions
No System functioning properly. No action required.
Yes (blinking) One of the following errors occurred:
• Hardware-controlled power-up error
• Cache flush error
• Cache self-refresh error
Actions:
• Restart this controller from the other controller using the SMC, RAIDar, or the CLI.
• If the above action does not resolve the fault, remove the controller module and reinsert it.
• If the above action does not resolve the fault, contact an authorized service provider for assistance. It may be necessary to replace the controller module.
Table 10 Diagnostics LED status: Rear panel “Fault/Service Required”

Is the drawer panel Fault/Service Required LED amber?

Answer Possible reasons Actions
No System functioning properly. No action required.
Yes (solid) A drawer-level fault is detected or a service action is required.
Actions:
• Check the event log for specific information regarding the fault; follow any Recommended Actions.
• If the above action does not resolve the fault, contact an authorized service provider for assistance.
Yes (blink) Hardware-controlled power-up error
Actions:
• Check the event log for specific information regarding the fault; follow any Recommended Actions.
• If the above action does not resolve the fault, contact an authorized service provider for assistance.
Table 11 Diagnostics LED status: Drawer panel “Fault/Service Required”
NOTE: Consider the following when troubleshooting drawer LEDs:
• See Enclosure bezel attachment and removal on page 96 and 2U48 enclosure drawers on page 14.
• See Enclosure bezel attachment and removal on page 96 and 4U56 enclosure drawers on page 17.

Are both disk drive module LEDs off?

Answer Possible reasons Actions
Yes There is no power.
The disk drive is offline.
The disk is not configured.
Actions:
• Check that the disk drive module is fully inserted and latched in place, and that both the enclosure and the drawer are powered on.
Table 12 Diagnostics LED status: Drawer slot “Disk Power/Activity” and “Disk Fault Status”
NOTE: See 2U48 enclosure drawers on page 14 or 4U56 enclosure drawers on page 17.

Is the 2U48 disk drive module LED blinking blue (4Hz blink rate)?

Answer Possible reasons Actions
Yes The disk has failed or experienced a fault.
The disk is a leftover.
The disk group that the disk is associated with is down or critical.
Actions:
• Check the event log for specific information regarding the fault; follow any Recommended Actions.
• Isolate the fault.
• Contact an authorized service provider for assistance.
Table 13 Diagnostics LED status: Disk drive fault status (2U48 modules)
NOTE: Consider the following when accessing disks: see FDE considerations on page 26.

Is the 4U56 disk drive module Fault LED amber?

Answer Possible reasons Actions
Yes, and the online/activity LED is off. The disk drive is offline. An event message may have been received for this device.
Actions:
• Check the event log for specific information regarding the fault.
• Isolate the fault.
• Contact an authorized service provider for assistance.
Yes, and the online/activity LED is blinking. The disk drive is active, but an event message may have been received for this device.
Actions:
• Check the event log for specific information regarding the fault.
• Isolate the fault.
• Contact an authorized service provider for assistance.
Table 14 Diagnostics LED status: Disk drive fault status (4U56 modules)

Is a connected host port Host Link Status LED off?

Answer Possible reasons Actions
No System functioning properly. No action required (see Link LED note: page 112).
Yes The link is down.
Actions:
• Check cable connections and reseat if necessary.
• Inspect cable for damage.
• Swap cables to determine if fault is caused by a defective cable. Replace cable if necessary.
• Verify that the switch, if any, is operating properly. If possible, test with another port.
• Verify that the HBA is fully seated, and that the PCI slot is powered on and operational.
• In the SMC or RAIDar, review event logs for indicators of a specific fault in a host data path component.
• Contact an authorized service provider for assistance.
• See Isolating a host-side connection fault on page 82.
Table 15 Diagnostics LED status: Rear panel “Host Link Status”

Is a connected port Expansion Port Status LED off?

Answer Possible reasons Actions
No System functioning properly. No action required.
Yes The link is down.
Actions:
• Check cable connections and reseat if necessary.
• Inspect cable for damage. Replace cable if necessary.
• Swap cables to determine if fault is caused by a defective cable. Replace cable if necessary.
• In the SMC or RAIDar, review the event logs for indicators of a specific fault in a host data path component.
• Contact an authorized service provider for assistance.
• See Isolating a controller module expansion port connection fault on page 84.
Table 16 Diagnostics LED status: Rear panel “Expansion Port Status”

Is a connected port’s Network Port link status LED off?

Answer Possible reasons Actions
No System functioning properly. No action required.
Yes The link is down. Use standard networking troubleshooting procedures to isolate faults on the network.
Table 17 Diagnostics LED status: Rear panel “Network Port Link Status”

Is the fan control module Fault/Service Required LED amber?

Answer Possible reasons Actions
No System functioning properly. No action required.
Yes The power supply unit or a fan is operating at an unacceptable voltage/RPM level, or has failed.
Actions:
• Verify that the FCM FRU is firmly locked into position.
• Verify that the power cable is connected to a power source.
• Verify that the power cable is connected to the enclosure’s power supply unit.
When isolating faults in the power supply, remember that the fans in both modules receive power through a common bus on the midplane, so if a power supply unit fails, the fans continue to operate normally.
Table 18 Diagnostics LED status: Rear panel fan control module “Fault/Service Required” (4U56)

Is the power supply Input Power Source LED off?

Answer Possible reasons Actions
No System functioning properly. No action required.
Yes The power supply is not receiving adequate power.
Actions:
• Verify that the power cord is properly connected, and check the power source to which it connects.
• Check that the power supply FRU is firmly locked into position.
• Check the event log for specific information regarding the fault.
• If the above actions do not resolve the fault, isolate the fault, and contact an authorized service provider for assistance.
Table 19 Diagnostics LED status: Rear panel power supply “Input Power Source”

Is the Voltage/Fan Fault/Service Required LED amber?

Answer Possible reasons Actions
No System functioning properly. No action required.
Yes The power supply unit or a fan is operating at an unacceptable voltage/RPM level, or has failed.
When isolating faults in the power supply, remember that the fans in both modules receive power through a common bus on the midplane, so if a power supply unit fails, the fans continue to operate normally.
Verify that the power supply FRU is firmly locked into position.
Verify that the power cable is connected to a power source.
Verify that the power cable is connected to the enclosure’s power supply unit.
Table 20 Diagnostics LED status: Rear panel power supply “Voltage/Fan Fault/Service Required”
AssuredSAN 6004 Series Setup Guide 81

Controller failure in a single-controller configuration

This subsection addresses a potential situation that might occur if a partner controller fails following failure of its peer controller.
IMPORTANT: Transportable cache only applies to single-controller configurations. In dual-controller configurations, there is no need to transport a failed controller’s cache to a replacement controller because the cache is duplicated between the partner controllers (subject to volume write optimization settings).
Cache memory is flushed to CompactFlash in the case of a controller failure or power loss. During the write to CompactFlash process, only the components needed to write the cache to the CompactFlash are powered by the supercapacitor. This process typically takes 60 seconds per 1 Gbyte of cache. After the cache is copied to CompactFlash, the remaining power left in the supercapacitor is used to refresh the cache memory. While the cache is being maintained by the supercapacitor, the Cache Status LED flashes at a rate of 1/10 second on and 9/10 second off.
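As a worked example of the timing described above, the following sketch applies the stated rate of roughly 60 seconds per 1 Gbyte of cache and the 1 Hz self-refresh LED duty cycle. It is illustrative only; actual flush time varies with hardware and cache contents.

```python
# Estimate cache-flush duration using the rate stated in this guide
# (~60 seconds per 1 Gbyte of cache). Illustrative sketch only.

SECONDS_PER_GBYTE = 60          # rate stated in this guide
LED_ON_S, LED_OFF_S = 0.1, 0.9  # self-refresh Cache Status LED duty cycle (1 Hz)

def estimated_flush_seconds(cache_gbytes: float) -> float:
    """Approximate time to copy cache contents to CompactFlash."""
    return cache_gbytes * SECONDS_PER_GBYTE

def led_blinks_during(seconds: float) -> int:
    """Number of complete 1 Hz self-refresh LED cycles in an interval."""
    return int(seconds // (LED_ON_S + LED_OFF_S))

print(estimated_flush_seconds(4))  # 4 Gbyte cache -> 240 seconds
```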

If the controller has failed or does not start, is the Cache Status LED on/blinking?

Answer Actions
No, the Cache Status LED is off, and the controller does not boot. If valid data is thought to be in CompactFlash, see Transporting cache; otherwise, replace the controller module.
No, the Cache Status LED is off, and the controller boots. The system has flushed data to disks. If the problem persists, replace the controller module.
Yes, at a strobe 1:10 rate - 1 Hz, and the controller does not boot. See Transporting cache.
Yes, at a strobe 1:10 rate - 1 Hz, and the controller boots. The system is flushing data to CompactFlash. If the problem persists, replace the controller module.
Yes, at a blink 1:1 rate - 1 Hz, and the controller does not boot. See Transporting cache.
Yes, at a blink 1:1 rate - 1 Hz, and the controller boots. The system is in self-refresh mode. If the problem persists, replace the controller module.
Table 21 Diagnostics LED status: Rear panel “Cache Status”
NOTE: See also Cache Status LED details on page 113.

Transporting cache

To preserve the existing data stored in the CompactFlash, you must transport the CompactFlash from the failed controller to a replacement controller. To transport cache, you must return the controller module to a maintenance depot for servicing by qualified personnel.
CAUTION: Transporting of cache must be performed by a qualified service technician.

Isolating a host-side connection fault

During normal operation, when a controller module host port is connected to a data host, the port’s host link status/link activity LED is green. If there is I/O activity, the host activity LED blinks green. If data hosts are having trouble accessing the storage system, and you cannot locate a specific fault or cannot access the event logs, use the following procedure. This procedure requires scheduled downtime.
IMPORTANT: Do not perform more than one step at a time. Changing more than one variable at a time can complicate the troubleshooting process.

Host-side connection troubleshooting featuring CNC ports

The procedure below applies to AssuredSAN 6004 Series controller enclosures employing small form factor pluggable (SFP) transceiver connectors in 4/8/16 Gb FC, 10GbE iSCSI, or 1 Gb iSCSI host interface ports. In the following procedure, “SFP and host cable” is used to refer to any of the qualified SFP options supporting CNC ports used for I/O or replication.
NOTE: When experiencing difficulty diagnosing performance problems, consider swapping out one SFP at a time to see if performance improves.
1. Halt all I/O to the storage system (see Stopping I/O on page 77).
2. Check the host link status/link activity LED.
If there is activity, halt all applications that access the storage system.
3. Check the Cache Status LED to verify that the controller cached data is flushed to the disk drives.
• Solid – Cache contains data yet to be written to the disk.
• Blinking – Cache data is being written to CompactFlash.
• Flashing at 1/10 second on and 9/10 second off – Cache is being refreshed by the supercapacitor.
• Off – Cache is clean (no unwritten data).
4. Remove the SFP and host cable and inspect for damage.
5. Reseat the SFP and host cable.
Is the host link status/link activity LED on?
• Yes – Monitor the status to ensure that there is no intermittent error present. If the fault occurs again, clean the connections to ensure that a dirty connector is not interfering with the data path.
• No – Proceed to the next step.
6. Move the SFP and host cable to a port with a known good link status.
This step isolates the problem to the external data path (SFP, host cable, and host-side devices) or to the controller module port.
Is the host link status/link activity LED on?
• Yes – You now know that the SFP, host cable, and host-side devices are functioning properly. Return the cable to the original port. If the link status LED remains off, you have isolated the fault to the controller module’s port. Replace the controller module.
• No – Proceed to the next step.
7. Swap the SFP with the known good one.
Is the host link status/link activity LED on?
• Yes – You have isolated the fault to the SFP. Replace the SFP.
• No – Proceed to the next step.
8. Re-insert the original SFP and swap the cable with a known good one.
Is the host link status/link activity LED on?
• Yes – You have isolated the fault to the cable. Replace the cable.
• No – Proceed to the next step.
9. Verify that the switch, if any, is operating properly. If possible, test with another port.
10. Verify that the HBA is fully seated, and that the PCI slot is powered on and operational.
11. Replace the HBA with a known good HBA, or move the host-side cable and SFP to a known good HBA.
Is the host link status/link activity LED on?
• Yes – You have isolated the fault to the HBA. Replace the HBA.
• No – It is likely that the controller module needs to be replaced.
12. Move the cable and SFP back to their original port.
Is the host link status/link activity LED on?
• No – The controller module port has failed. Replace the controller module.
• Yes – Monitor the connection for a period of time. It may be an intermittent problem, which can occur with damaged SFPs, cables, and HBAs.

Host-side connection troubleshooting featuring SAS host ports

The procedure below applies to 6544/6554 controller enclosures employing 12 Gb SFF-8644 connectors in the HD mini-SAS host ports used for I/O. These models do not support AssuredRemote replication.
1. Halt all I/O to the storage system (see Stopping I/O on page 77).
2. Check the host activity LED.
If there is activity, halt all applications that access the storage system.
3. Check the Cache Status LED to verify that the controller cached data is flushed to the disk drives.
• Solid – Cache contains data yet to be written to the disk.
• Blinking – Cache data is being written to CompactFlash.
• Flashing at 1/10 second on and 9/10 second off – Cache is being refreshed by the supercapacitor.
• Off – Cache is clean (no unwritten data).
4. Reseat the host cable and inspect for damage.
Is the host link status LED on?
• Yes – Monitor the status to ensure that there is no intermittent error present. If the fault occurs again, clean the connections to ensure that a dirty connector is not interfering with the data path.
• No – Proceed to the next step.
5. Move the host cable to a port with a known good link status.
This step isolates the problem to the external data path (host cable and host-side devices) or to the controller module port.
Is the host link status LED on?
• Yes – You now know that the host cable and host-side devices are functioning properly. Return the cable to the original port. If the link status LED remains off, you have isolated the fault to the controller module port. Replace the controller module.
• No – Proceed to the next step.
6. Verify that the switch, if any, is operating properly. If possible, test with another port.
7. Verify that the HBA is fully seated, and that the PCI slot is powered on and operational.
8. Replace the HBA with a known good HBA, or move the host side cable to a known good HBA.
Is the host link status LED on?
• Yes – You have isolated the fault to the HBA. Replace the HBA.
• No – It is likely that the controller module needs to be replaced.
9. Move the host cable back to its original port.
Is the host link status LED on?
• No – The controller module port has failed. Replace the controller module.
• Yes – Monitor the connection for a period of time. It may be an intermittent problem, which can occur with damaged cables and HBAs.

Isolating a controller module expansion port connection fault

During normal operation, when a controller module’s expansion port is connected to a drive enclosure, the expansion port status LED is green. If the connected port’s expansion port LED is off, the link is down. Use the following procedure to isolate the fault.
This procedure requires scheduled downtime.
NOTE: Do not perform more than one step at a time. Changing more than one variable at a time can complicate the troubleshooting process.
1. Halt all I/O to the storage system (see Stopping I/O on page 77).
2. Check the host activity LED.
If there is activity, halt all applications that access the storage system.
3. Check the Cache Status LED to verify that the controller cached data is flushed to the disk drives.
• Solid – Cache contains data yet to be written to the disk.
• Blinking – Cache data is being written to CompactFlash.
• Flashing at 1/10 second on and 9/10 second off – Cache is being refreshed by the supercapacitor.
• Off – Cache is clean (no unwritten data).
4. Reseat the expansion cable, and inspect it for damage.
Is the expansion port status LED on?
• Yes – Monitor the status to ensure there is no intermittent error present. If the fault occurs again, clean the connections to ensure that a dirty connector is not interfering with the data path.
• No – Proceed to the next step.
5. Move the expansion cable to a port on the controller enclosure with a known good link status.
This step isolates the problem to the expansion cable or to the controller module’s expansion port.
Is the expansion port status LED on?
• Yes – You now know that the expansion cable is good. Return the cable to the original port. If the expansion port status LED remains off, you have isolated the fault to the controller module’s expansion port. Replace the controller module.
• No – Proceed to the next step.
6. Move the expansion cable back to the original port on the controller enclosure.
7. Move the expansion cable on the drive enclosure to a known good expansion port on the drive enclosure.
Is the expansion port status LED on?
• Yes – You have isolated the problem to the drive enclosure’s port. Replace the expansion module.
• No – Proceed to the next step.
8. Replace the cable with a known good cable, ensuring the cable is attached to the original ports used by the previous cable.
Is the expansion port status LED on?
• Yes – Replace the original cable. The fault has been isolated.
• No – It is likely that the controller module must be replaced.

Isolating replication faults

Cabling for replication

The replication feature is a licensed option for disaster recovery, available through either of the following software versions:
• SMC (v3) supports replication for virtual storage environments.
• RAIDar (v2) supports replication for linear storage environments.
IMPORTANT: These two replication models are mutually exclusive. Choose the method that applies to your storage system. For more information, see the replication topics in the Storage Management Guide.

Replication setup and verification

After storage systems are cabled for replication, you can use the SMC (v3) or RAIDar (v2) to prepare to use the replication feature. Alternatively, you can use telnet to access the IP address of the controller module and access the replication feature using the CLI.
NOTE: You can use the CLI to perform replication in linear or virtual storage environments.
Set Management mode to v2 for linear replication (use Manage role).
Set Management mode to v3 for virtual replication (use Manage role).
Basic information for enabling the 6004 Series controller enclosures for replication supplements the troubleshooting procedures that follow.
Familiarize yourself with replication by reviewing the “Getting started”, “Working in the Replications
topic”, and “Using AssuredRemote to replicate volumes” chapters in the Storage Management Guide.
For virtual replication, in order to replicate an existing volume to a pool on the peer in the primary
system or secondary system, follow these steps:
• Find the port address. Using the CLI, run the query peer-connection command.
• Create a peer connection. Use the CLI command create peer-connection or, in the SMC Replications topic, select Action > Create Peer Connection.
• Create a virtual replication set. Use the CLI command create replication-set or, in the SMC Replications topic, select Action > Create Replication Set.
• Replicate. To initiate replication, use the CLI command replicate or, in the SMC Replications topic, select Action > Initiate Replication.
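The virtual-replication steps above can be sketched as a CLI session. The command names (query peer-connection, create peer-connection, create replication-set, replicate) come from this guide; the IP address, object names, and parameter keywords shown are illustrative assumptions only. See the CLI Reference Guide for exact syntax.

```text
# Illustrative sketch; parameter names and values are assumptions.
# Verify the remote system's port address is reachable:
query peer-connection 192.168.0.22

# Create a peer connection to the remote system:
create peer-connection ip 192.168.0.22 name peer1

# Create a replication set for an existing volume:
create replication-set peer-connection peer1 name rs1 Vol1

# Initiate replication:
replicate rs1
```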
For linear replication, in order to replicate an existing volume to another disk group in the primary or
secondary system, follow these steps:
• Use RAIDar’s Wizards > Replication Setup Wizard to prepare to replicate an existing volume to another disk group in the primary system or secondary system.
Follow the wizard to select the primary volume, replication mode, and secondary volume, and to confirm your replication settings. The wizard verifies the communication links between the primary and secondary systems. Once setup is successfully completed, you can initiate replication from RAIDar or the CLI.
For descriptions and replication-related events, see the Event Descriptions Reference Guide.
NOTE: These steps are a general outline of the replication setup. Refer to the following manuals for more
information about replication setup:
See the Storage Management Guide for procedures to set up and manage replications.
See the CLI Reference Guide for replication commands and syntax.
See the Event Descriptions Reference Guide for replication event reporting.
IMPORTANT: Controller module firmware must be compatible on all systems used for replication. For
license information, see the Storage Management Guide.

Diagnostic steps for replication setup

The tables in the following subsections show menu navigation using the SMC (v3), and using RAIDar (v2). The shorthand v3 and v2 prefixes are used to distinguish between the SMC and RAIDar, respectively.
Virtual replication using the SMC
Can you successfully use the replication feature?
Answer Possible reasons Actions
Yes System functioning properly. No action required.
No The replication feature is not licensed on each controller enclosure used for replication.
Verify licensing of the optional feature per system:
In the Home topic in the SMC, select Action > Install License.
The License Settings panel opens and displays information about each licensed feature.
If the Replication feature is not enabled, obtain and install a valid license for this feature.
See the Storage Management Guide for license information.
NOTE: Virtual replication is only supported by 6004 Series iSCSI controller enclosures.
No Compatible firmware revision supporting the replication feature is not running on each system used for replication.
Perform the following actions on each system used for virtual replication:
In the System topic, select Action > Update Firmware. The Update Firmware panel opens. The Update Controller Modules tab shows firmware versions installed in each controller.
If necessary, update the controller module firmware to ensure compatibility with the other systems.
For more information on compatible firmware, see the topic about updating firmware in the Storage Management Guide.
No Invalid cabling connection. (If multiple controller enclosures are used, check the cabling for each system.)
Verify controller enclosure cabling:
Verify use of proper cables.
Verify proper cabling paths for host connections.
Verify that cabling paths between replication ports and switches are visible to one another.
Verify that cable connections are securely fastened.
Inspect cables for damage and replace if necessary.
Table 22 Diagnostics for replication setup: Using the replication feature (v3)
Can you view information about remote links?
Answer Possible reasons Actions
Yes System functioning properly. No action required.
No Communication link is down Verify controller enclosure cabling (see Table 22).
Review event logs for indicators of a specific fault in a host or replication data path component. In the footer, click the events panel and select Show Event List. This will open the Event Log Viewer panel.
Verify valid IP address of the network port on the remote
system.
Click in the Volumes topic, then click on a volume name in the volumes list. Click the Replication Sets tab to display replications and associated metadata.
Alternatively, click in the Replications topic to display replications and associated metadata.
Table 23 Diagnostics for replication setup: Viewing information about remote links (v3)
Can you create a replication set?
Answer Possible reasons Actions
Yes System functioning properly. No action required.
No On controller enclosures with iSCSI host interface ports, replication set creation fails due to use of CHAP.1
If using CHAP (Challenge-Handshake Authentication Protocol), see the topics about configuring CHAP and working in replications within the Storage Management Guide.
No Unable to create the secondary volume (the destination volume in the virtual disk group to which you will replicate data from the primary volume).1
Review event logs (in the footer, click the events panel and select Show Event List) for indicators of a specific fault in a replication data path component. Follow any Recommended Actions.
Verify valid specification of the secondary volume according to either of the following criteria:
• A conflicting volume does not already exist
• Creation of the new volume on the disk group
No Communication link is down. See actions described in Can you view information about remote links? on page 88.
1 After ensuring valid licensing, valid cabling connections, and network availability, create the replication set: in the Replications topic, select Action > Create Replication Set.
Table 24 Diagnostics for replication setup: Creating a replication set (v3)
Can you replicate a volume?
Answer Possible reasons Actions
Yes System functioning properly. No action required.
No The replication feature is not licensed on each controller enclosure used for replication.
See actions described in Can you successfully use the replication feature? on page 87.
No Nonexistent replication set.
Determine existence of primary or secondary volumes.
If a replication set has not been successfully created, in the Replications topic, select Action > Create Replication Set to create one.
Review event logs (in the footer, click the events panel and select Show Event List) for indicators of a specific fault in a replication data path component. Follow any Recommended Actions.
No Network error occurred during in-progress replication.
Review event logs for indicators of a specific fault in a replication data path component. Follow any Recommended Actions.
Click in the Volumes topic, then click on a volume name in the volumes list. Click the Replication Sets tab to display replications and associated metadata.
Replications that enter the suspended state can be resumed manually.
No Communication link is down. See actions described in Can you view information about remote links? on page 88.
Table 25 Diagnostics for replication setup: Replicating a volume (v3)
Has a replication run successfully?
Answer Possible reasons Actions
Yes System functioning properly. No action required.
No Last Successful Run shows N/A.
In the Volumes topic, click on the volume that is a member of the replication set.
• Select the Replication Sets table.
• Check the Last Successful Run information.
If a replication has not run successfully, use the SMC to replicate as described in the “Working in the Replications topic” in the Storage Management Guide.
No Communication link is down. See actions described in Can you view information about remote links? on page 88.
Table 26 Diagnostics for replication setup: Checking for a successful replication (v3)
Linear replication using RAIDar
Can you successfully use the replication feature?
Answer Possible reasons Actions
Yes System functioning properly. No action required.
No The replication feature is not licensed on each controller enclosure used for replication.
Verify licensing of the optional feature per system:
In the Configuration View panel in RAIDar, right-click the system, and select View > Overview. Within the System Overview table, select the Licensed Features component to display the status of licensed features.
If the Replication feature is not enabled, obtain and install a valid license for AssuredRemote.
NOTE: Linear replication is not supported by 6004 Series SAS controller enclosures.
No Compatible firmware revision supporting replication is not running on each system used for replication.
Perform the following actions on each system used for replication:
In the Configuration View panel in RAIDar, right-click the system, and select Tools > Update Firmware.
The Update Firmware panel displays currently installed firmware versions.
If necessary, update the controller module firmware to ensure compatibility with the other systems.
No Invalid cabling connection. (If multiple controller enclosures are used, check the cabling for each system.)
Verify controller enclosure cabling:
Verify use of proper cables.
Verify proper cabling paths for host connections.
Verify cabling paths between replication ports and switches on the same fabric or network.
Verify that cable connections are securely fastened.
Inspect cables for damage and replace if necessary.
Table 27 Diagnostics for replication setup: Using the replication feature (v2)
Can you view information about remote links?
Answer Possible reasons Actions
Yes System functioning properly. No action required.
No Invalid login credentials Verify user name with Manage role on remote system.
Verify user’s password on remote system.
No Communication link is down Verify controller enclosure cabling (see Table 27).
Review event logs (in the Configuration View panel, right-click the system, and select View > Event Log) for indicators of a specific fault in a host or replication data path component.
Verify valid IP address of the network port on the remote system.
In the Configuration View panel, right-click the remote system, and select Tools > Check Remote System Link. Click Check Links.
Table 28 Diagnostics for replication setup: Viewing information about remote links (v2)
Can you create a replication set?
Answer Possible reasons Actions
Yes System functioning properly. No action required.
No Selected link type or port-to-link connections are incorrect.
Remote Replication mode: In the Configuration View panel, right-click the remote system, and select Tools > Check Remote System Link. Click Check Links to verify correct link type and remote host port-to-link connections.
Local Replication mode: In the Configuration View panel, right-click the local system, and select Tools > Check Local System Link. Click Check Links to verify correct link type and local host port-to-link connections.
No On controller enclosures with iSCSI host interface ports, replication set creation fails due to use of CHAP.
If using CHAP (Challenge-Handshake Authentication Protocol), configure it as described in the Storage Management Guide topics “Using the Replication Setup Wizard” or “Replicating a volume.”
No Unable to select the replication mode (Local or Remote)?1
Review event logs (in the Configuration View panel, right-click the system, and select View > Event Log) for indicators of a specific fault in a host or replication data path component. Follow any Recommended Actions.
Local Replication mode replicates to a secondary volume residing in the local storage system:
• Verify valid links. On dual-controller systems, verify that A ports can access B ports on the partner controller, and vice versa.
• Verify existence of either a replication-prepared volume of the same size as the master volume, or a disk group with sufficient unused capacity.
Remote Replication mode replicates to a secondary volume residing in an independent storage system:
• Verify selection of a valid remote disk group.
• Verify selection of a valid remote volume on the disk group.
• Verify valid IP address of the remote system network port.
• Verify user name with Manage role on the remote system.
• Verify user password on the remote system.
NOTE: If the remote system has not been added, it cannot be selected.
No Unable to select the secondary volume (the destination volume on the disk group to which you will replicate data from the primary volume)?1
Review event logs for indicators of a specific fault in a replication data path component. Follow any Recommended Actions.
Verify valid specification of the secondary volume according to either of the following criteria:
• Creation of the new volume on the disk group
• Selection of a replication-prepared volume
No Communication link is down. See actions described in Can you view information about remote links? on page 90.
1 After ensuring valid licensing, valid cabling connections, and network availability, create the replication set using the Wizards > Replication Setup Wizard.
Table 29 Diagnostics for replication setup: Creating a replication set (v2)
Can you replicate a volume?
Answer Possible reasons Actions
Yes System functioning properly. No action required.
No AssuredRemote is not licensed on each controller enclosure used for replication.
See actions described in Can you successfully use the replication feature? on page 90.
No Nonexistent replication set.
Determine existence of primary or secondary volumes.
If a replication set has not been successfully created, use the Replication Setup Wizard to create one.
Review event logs (in the Configuration View panel, right-click the system, and select View > Event Log) for indicators of a specific fault in a replication data path component. Follow any Recommended Actions.
No Network error occurred during in-progress replication.
Review event logs for indicators of a specific fault in a replication data path component. Follow any Recommended Actions.
In the Configuration View panel, right-click the secondary volume, and select View > Overview to display the Replication Volume Overview table:
• Check for replication interruption (suspended) status.
• Check for inconsistent status.
• Check for offline status.
Replications that enter the suspended state must be resumed manually.
No Communication link is down. See actions described in Can you view information about remote links? on page 90.
Table 30 Diagnostics for replication setup: Replicating a volume (v2)
Can you view a replication image?
Answer Possible reasons Actions
Yes System functioning properly. No action required.
No Nonexistent replication image.
• In the Configuration View panel, expand disk groups and subordinate volumes to reveal the existence of a replication image or images.
• If a replication image has not been successfully created, use RAIDar to create one as described in the “Using AssuredRemote to replicate volumes” topic within the Storage Management Guide.
No Communication link is down. See actions described in Can you view information about remote links? on page 90.
Table 31 Diagnostics for replication setup: Viewing a replication image (v2)
Can you view remote systems?
Answer Possible reasons Actions
Yes System functioning properly. No action required.
No Communication link is down. See actions described in Can you view information about remote links? on page 90.
Table 32 Diagnostics for replication setup: Viewing a remote system (v2)

Resolving voltage and temperature warnings

1. Check that all of the fans are working by making sure the Voltage/Fan Fault/Service Required LED on each power supply module is off, or by using the SMC or RAIDar to check enclosure health status.
• v3: In the lower corner of the footer, overall health status of the enclosure is indicated by a health
status icon. For more information, point to the System tab and select View System to see the System panel. You can select Front, Rear, and Table views on the System panel. If you hover over a component, its associated metadata and health status displays onscreen.
• v2: In the Configuration View panel, right-click the enclosure and select View > Overview to view the health status of the enclosure and its components. The Enclosure Overview page shows information about each enclosure and its physical components in front, rear, and tabular views, allowing you to assess the health of each component.
See Options available for performing basic steps on page 74 for a description of health status icons and alternatives for monitoring enclosure health.
2. Make sure that all modules are fully seated in their slots and that their latches are locked.
3. Make sure that no slots are left open for more than two minutes.
If you need to replace a module, leave the old module in place until you have the replacement or use a blank module to fill the slot. Leaving a slot open negatively affects the airflow and can cause the enclosure to overheat.
4. Try replacing each power supply one at a time.
5. Replace the controller modules one at a time.
6. Replace SFPs one at a time.

Sensor locations

The storage system monitors conditions at different points within each enclosure to alert you to problems. Power, cooling fan, temperature, and voltage sensors are located at key points in the enclosure. In each controller module and expansion module, the enclosure management processor (EMP) monitors the status of these sensors to perform SCSI enclosure services (SES) functions.
The following sections describe each element and its sensors.

Power supply sensors

Each enclosure has two fully redundant power supplies with load-sharing capabilities. The power supply sensors described in the following table monitor the voltage, current, temperature, and fans in each power supply. If the power supply sensors report a voltage that is under or over the threshold, check the input voltage.
Table 33 Power supply sensor descriptions
Description Event/Fault ID LED condition
Power supply 1 Voltage, current, temperature, or fan fault
Power supply 2 Voltage, current, temperature, or fan fault

Cooling fan sensors

Each power supply uses two fans. The normal range for fan speed is 4,000 to 6,000 RPM. When a fan speed drops below 4,000 RPM, the EMP considers it a failure and posts an alarm in the storage system event log. The following table lists the description, location, and alarm condition for each fan. If the fan speed remains under the 4,000 RPM threshold, the internal enclosure temperature may continue to rise. Replace the power supply reporting the fault.
Table 34 Cooling fan sensor descriptions
Description Location Event/Fault ID LED condition
Fan 1 Power supply 1 < 4,000 RPM
Fan 2 Power supply 1 < 4,000 RPM
Fan 3 Power supply 2 < 4,000 RPM
Fan 4 Power supply 2 < 4,000 RPM
During a shutdown, the cooling fans do not shut off. This allows the enclosure to continue cooling.
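The fan-speed rule described above can be expressed as a simple threshold check. The 4,000–6,000 RPM range and the fan-to-power-supply mapping come from this guide; the function names and data structures below are an illustrative sketch, not part of the product firmware:

```python
# Illustrative sketch of the EMP fan-speed rule described above.
# Thresholds come from this guide; the code itself is hypothetical.

FAN_MIN_RPM = 4000   # below this, the EMP reports a fan failure
FAN_MAX_RPM = 6000   # upper end of the normal operating range

def fan_status(rpm: int) -> str:
    """Classify a fan-speed reading against the documented thresholds."""
    if rpm < FAN_MIN_RPM:
        return "fault"        # EMP posts an alarm to the event log
    if rpm <= FAN_MAX_RPM:
        return "normal"
    return "above-normal"

# Fans 1-2 belong to power supply 1; fans 3-4 to power supply 2 (Table 34).
FAN_TO_PSU = {1: 1, 2: 1, 3: 2, 4: 2}

def faulted_power_supply(fan_id: int, rpm: int):
    """Return the power supply to replace if this fan reading is a fault."""
    return FAN_TO_PSU[fan_id] if fan_status(rpm) == "fault" else None
```

Because a failing fan lets the internal temperature rise, the corrective action is to replace the power supply that contains the fan, not the fan alone.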

Temperature sensors

Extreme high and low temperatures can cause significant damage if they go unnoticed. Each controller module has six temperature sensors. If the CPU or FPGA (Field-Programmable Gate Array) temperature reaches a shutdown value, the controller module shuts down automatically. Each power supply has one temperature sensor.
When a temperature fault is reported, it must be remedied as quickly as possible to avoid system damage. This can be done by warming or cooling the installation location.
Table 35 Controller module temperature sensor descriptions
Description Normal operating range Warning operating range Critical operating range Shutdown values
CPU temperature 3°C–88°C 0°C–3°C, 88°C–90°C > 90°C 0°C, 100°C
FPGA temperature 3°C–97°C 0°C–3°C, 97°C–100°C None 0°C, 105°C
Onboard temperature 1 0°C–70°C None None None
Onboard temperature 2 0°C–70°C None None None
Onboard temperature 3 (Capacitor temperature) 0°C–70°C None None None
CM temperature 5°C–50°C ≤ 5°C, ≥ 50°C ≤ 0°C, ≥ 55°C None
When a power supply sensor goes out of range, the Fault/ID LED illuminates amber and an event is logged to the event log.
Table 36 Power supply temperature sensor descriptions
Description Normal operating range
Power supply 1 temperature –10°C–80°C
Power supply 2 temperature –10°C–80°C
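The range-based classification used by the temperature sensors can be sketched as follows. The sensor names and ranges are taken from the tables above; the code structure and function name are hypothetical, shown only to illustrate how a reading maps to a status:

```python
# Illustrative sketch of range-based temperature classification per the
# thresholds in this guide. Names and structure are hypothetical.

# sensor name -> (normal_low, normal_high, shutdown) in degrees Celsius.
# A shutdown value of None means the sensor has no shutdown threshold.
TEMP_RANGES = {
    "CPU temperature": (3, 88, 100),
    "FPGA temperature": (3, 97, 105),
    "Power supply temperature": (-10, 80, None),
}

def temperature_status(sensor: str, celsius: float) -> str:
    low, high, shutdown = TEMP_RANGES[sensor]
    if shutdown is not None and celsius >= shutdown:
        return "shutdown"   # controller module shuts down automatically
    if low <= celsius <= high:
        return "normal"
    return "out-of-range"   # remedy quickly by cooling/warming the site
```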

Power supply module voltage sensors

Power supply voltage sensors ensure that an enclosure’s power supply voltage is within normal ranges. There are three voltage sensors per power supply.
Table 37 Voltage sensor descriptions
Sensor Event/Fault LED condition
Power supply 1 voltage, 12V < 11.00V
> 13.00V
Power supply 1 voltage, 5V < 4.00V
> 6.00V
Power supply 1 voltage, 3.3V < 3.00V
> 3.80V
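The fault thresholds in Table 37 amount to a per-rail window check: a reading below the lower limit or above the upper limit asserts the Event/Fault LED condition. The sketch below encodes those limits; the function and dictionary names are illustrative, not part of the product:

```python
# Illustrative sketch of the voltage-sensor fault windows in Table 37.
# Thresholds come from this guide; names/structure are hypothetical.

# rail -> (fault_low, fault_high) in volts
VOLTAGE_LIMITS = {
    "12V": (11.00, 13.00),
    "5V": (4.00, 6.00),
    "3.3V": (3.00, 3.80),
}

def voltage_fault(rail: str, volts: float) -> bool:
    """True if the reading would assert the Event/Fault LED condition."""
    low, high = VOLTAGE_LIMITS[rail]
    return volts < low or volts > high
```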

A LED descriptions

Front panel LEDs

AssuredSAN 6004 Series supports storage enclosures in dual-purpose fashion: each of the supported chassis form factors can be configured as a controller enclosure or an expansion enclosure (see Front panel components on page 13 for descriptions of supported chassis).
Supported expansion enclosures are used for adding storage. The J6X48 48-drive enclosure is the SFF drive enclosure used for storage expansion, and the J6X56 56-drive enclosure is the LFF drive enclosure used for storage expansion. See Table 4 on page 37 for a matrix showing controller enclosures and compatible drive enclosures. Storage enclosures support dual I/O modules (IOMs) and are equipped with two redundant power supply modules. 4U56 enclosures are also equipped with two fan control modules. See Controller enclosure — rear panel layout on page 19.

Enclosure bezels

Each AssuredSAN 6004 Series enclosure is equipped with a removable bezel designed to cover the front panel during enclosure operation. The bezel provides two debossed pockets and is equipped with an EMI (Electromagnetic Interference) shield for the disks.
Figure 73 Front panel enclosure bezel: 48-drive enclosure (2U48)
Figure 74 Front panel enclosure bezel: 56-drive enclosure (4U56)

Enclosure bezel attachment and removal

When attaching or removing the front panel enclosure bezel for the first time, refer to the appropriate pictorials for your enclosure(s) from the list below, and follow the instructions provided.
Front view of 48-drive enclosure (2U48): Figure 73
Front view of 56-drive enclosure (4U56): Figure 74
Bezel alignment for 48-drive enclosure (2U48): Figure 75 on page 97
Bezel alignment for 56-drive enclosure (4U56): Figure 77 on page 98
NOTE: Step procedures for attaching and removing enclosure bezels are also provided in the FRU Installation and Replacement Guide.
[Figure callouts: ball stud on chassis ear (typical 4 places); enclosure bezel sub-assembly (includes EMI shield for disks); pocket opening (typical 2 places).]
Enclosure bezel attachment
2U—Orient the enclosure bezel to align its back side with the front face of the enclosure as shown in
Figure 75 on page 97. Face the front of the enclosure, and while supporting the base of the bezel, position
it such that the mounting sleeves within the integrated ear caps align with the ball studs, and then gently push-fit the bezel onto the ball studs to securely attach the bezel to the front of the enclosure.
4U—Orient the enclosure bezel to align its back side with the front face of the enclosure as shown in
Figure 77 on page 98. Tilt the bezel forward, and guide the two angle-bracket slots on the backside of the
bezel onto the two upturned flanges located on sidemount brackets near the front of the enclosure (on the exterior left and right chassis walls). Then, gently push the sleeves onto the ball studs as shown in the details in Figure 77.
Enclosure bezel removal
2U—While facing the front of the enclosure, insert the index finger of each hand into the top of the respective (left or right) pocket opening, and insert the middle finger of each hand into the bottom of the respective opening, with thumbs on the bottom of the bezel face. Gently pull the top of the bezel while applying slight inward pressure below, to release the bezel from the ball studs.
NOTE: Bezel alignment for the 2U48 enclosure is shown in Figure 75 below.
Once you have removed the bezel, you can access the drawers. To open a drawer, first revolve the pull-handle upwards by 90° to enable pulling the drawer outward for viewing disks. The handle can be in the stowed position when pushing the drawer back into the enclosure along the drawer slide (see Figure 76 for instructions about using 2U48 drawer handles).
Figure 75 Partial assembly showing bezel alignment with 2U48 chassis
[Figure callouts: revolve the handle 90° from its stowed position to its pull-position before pulling the drawer outward; drawer travels inward and outward along the drawer slide; drawers 0 or 1, drawer 2; slotted angle bracket and sleeve (bezel alignment); vented grille is simplified for clarity.]
Figure 76 Drawer detail showing handle rotation and drawer travel (2U48)
NOTE: Bezel alignment for the 4U56 enclosure is shown in Figure 77 below.
4U—While facing the front of the enclosure, insert the index finger of each hand into the top of the respective (left or right) pocket opening. Gently pull the top of the bezel while applying slight inward pressure below to release the top sleeves from the ball studs. Lift the bezel upwards to allow the angle-bracket slots to clear the upturned mounting flanges.
Figure 77 Partial assembly showing bezel alignment with 4U56 chassis
Once you have removed the bezel, you can access the drawers. To open a drawer, you must first revolve the pull-handle downwards by 90° to enable pulling the drawer outward for viewing disks (see Figure 78 for instructions about using the 4U56 drawer handles). The handle can be in the stowed position when pushing the drawer back into the enclosure along the drawer slide.
[Figure callouts: drawer 0; drawer 1; drawer handle; stow position; pull position; revolve handle 90° to suit open and close drawer actions.]
Figure 78 Drawer detail showing handle rotation and drawer travel (4U56)
IMPORTANT: The front panel enclosure illustrations that follow assume that you have removed the
enclosure bezel to reveal the underlying components.

48-drive enclosure front panel LEDs

Enclosure front panel LEDs are described in two interrelated figure/table ensembles within this subsection:
• Figure 79: describes enclosure front panel LEDs (visible with the bezel installed).
• Figure 80 on page 101: describes drawer panel LEDs (visible with the bezel removed).
The disk module LED is described in a related figure/table ensemble within Disk drive LED (2U48):
• Figure 81 on page 102: describes the LED located on disk modules (visible with the bezel removed and drawer(s) opened).
Table 38 and Table 39 on page 103 describe additional disk LED behavior.
LEDs visible with enclosure bezel installed
The LEDs located on the chassis ears are described in Figure 79 and are visible with the enclosure bezel installed.
[Figure callouts: LEDs 1–5 on the enclosure ear; note: remove this enclosure bezel to access drawers.]
LED Description Definition
1 Enclosure ID Green — On: Enables you to correlate the enclosure with logical views presented by management software. Sequential enclosure ID numbering of controller enclosures begins with the integer 0. The enclosure ID for an attached drive enclosure is nonzero.
2 Unit Locator White blink — Enclosure is identified. Off — Normal operation.
3 Fault/Service Required Amber — On: An enclosure-level fault condition exists; the event has been acknowledged but the problem needs attention. Off — No fault condition exists.
4 FRU OK Green — On: The enclosure is powered on with at least one power supply operating normally. Off — Both power supplies are off; the system is powered off.
5 Temperature Fault Green — On: The enclosure temperature is normal. Amber — On: The enclosure temperature is above threshold.
Figure 79 LEDs: 2U48 enclosure front panel
NOTE: The enclosure front panel illustrations that follow assume that you have removed the enclosure
bezel to reveal underlying components.