Lenovo S3200, S2200 User Manual

Lenovo Storage S3200/S2200 Setup Guide
For firmware release GL210 or later
Abstract
This document describes initial hardware setup for Lenovo Storage S3200/S2200 controller enclosures, and is intended for use by storage system administrators familiar with servers and computer networks, network administration, storage system installation and configuration, storage area network management, and relevant protocols.
Part Number: 00WE606
Copyright Lenovo 2015.
LIMITED AND RESTRICTED RIGHTS NOTICE: If data or software is delivered pursuant to a General Services Administration “GSA” contract, use, reproduction, or disclosure is subject to restrictions set forth in Contract No. GS-35F-05925.
Lenovo, the Lenovo logo, BladeCenter, Flex System, NeXtScale System, and System x are trademarks of Lenovo in the United States, other countries, or both.
The material in this document is for information only and is subject to change without notice. While reasonable efforts have been made in the preparation of this document to assure its accuracy, changes in the product design can be made without reservation and without notification to its users.

Contents

About this guide. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Storage S3200/S2200 enclosure user interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
CNC ports used for host connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
HD mini-SAS ports used for host connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Intended audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Prerequisites. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Related documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Document conventions and symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Front panel components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
24-drive enclosure front panel components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
12-drive enclosure front panel components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Disk drives used in S3200/S2200 enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Controller enclosure — rear panel layout. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
S3200 CNC controller module — rear panel components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
S3200 SAS controller module — rear panel components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
S2200 CNC controller module — rear panel components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
S2200 SAS controller module — rear panel components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
E1024/E1012 drive enclosure rear panel components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Component installation and replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
CompactFlash . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Supercapacitor pack. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2 Installing the enclosures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Installation checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
FDE considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Connecting the controller enclosure and drive enclosures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Connecting the S3200/S2200 controller to the 2U12 drive enclosure . . . . . . . . . . . . . . . . . . . . . . . . 23
Connecting the S3200/S2200 controller to the 2U24 drive enclosure . . . . . . . . . . . . . . . . . . . . . . . . 23
Connecting the S3200/S2200 controller to mixed model drive enclosures . . . . . . . . . . . . . . . . . . . . . 23
Cable requirements for storage enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Summary of drive enclosure cabling illustrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Testing enclosure connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Powering on/powering off. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
AC PSU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3 Connecting hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Host system requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Cabling considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Connecting the enclosure to hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
CNC technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Fibre Channel protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
10GbE iSCSI protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
1 Gb iSCSI protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
HD mini-SAS technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
12 Gb mini-SAS ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Connecting direct attach configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Fibre Channel host connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
10GbE iSCSI host connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
1 Gb iSCSI host connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
HD mini-SAS host connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Single-controller configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Dual-controller configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Connecting switch attach configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Dual-controller configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Connecting a management host on the network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Updating firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Obtaining IP values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Setting network port IP addresses using DHCP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Setting network port IP addresses using the CLI port and cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Change the CNC port mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Set CNC port mode to iSCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Set CNC port mode to FC and iSCSI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Configure the system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4 Basic operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Accessing the SMC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Configuring and provisioning the storage system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
USB CLI port connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Fault isolation methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Basic steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Options available for performing basic steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Use the SMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Use the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Monitor event notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
View the enclosure LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Performing basic steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Gather fault information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Determine where the fault is occurring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Review the event logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Isolate the fault . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
If the enclosure does not initialize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Correcting enclosure IDs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Stopping I/O. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Diagnostic steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Is the enclosure front panel Fault/Service Required LED amber?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Is the controller back panel CRU OK LED off? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Is the controller back panel Fault/Service Required LED amber? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Are both disk drive module LEDs off? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Is the disk drive module Fault LED amber?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Is a connected host port Host Link Status LED on?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Is a connected port Expansion Port Status LED on? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Is a connected port’s Network Port link status LED on? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Is the power supply Input Power Source LED off? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Is the Voltage/Fan Fault/Service Required LED amber? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Controller failure in a single-controller configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
If the controller has failed or does not start, is the Cache Status LED on/blinking? . . . . . . . . . . . . . . . . 52
Isolating a host-side connection fault . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Host-side connection troubleshooting featuring CNC ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Host-side connection troubleshooting featuring SAS host ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Isolating a controller module expansion port connection fault. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Resolving voltage and temperature warnings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Sensor locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Power supply sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Cooling fan sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Temperature sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Power supply module voltage sensors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
A LED descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Front panel LEDs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Enclosure bezels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Enclosure bezel attachment and removal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Enclosure bezel attachment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Enclosure bezel removal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
24-drive enclosure front panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
12-drive enclosure front panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Disk drive LEDs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
Controller enclosure — rear panel layout. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
S3200 CNC controller module — rear panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
S3200 SAS controller module—rear panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
S2200 CNC controller module — rear panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
S2200 SAS controller module—rear panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Cache Status LED details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Power supply LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
E1024/E1012 drive enclosure rear panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
B Specifications and requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Safety requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Site requirements and guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Site wiring and AC power requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Weight and placement guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Electrical guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Ventilation requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Cabling requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Management host requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Physical requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Environmental requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Electrical requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Site wiring and power requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Power cable requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
C Electrostatic discharge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Preventing electrostatic discharge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Grounding methods to prevent electrostatic discharge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
D USB device connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Rear panel USB ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
USB CLI port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Emulated serial port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Supported host applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Command-line Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Device driver/special operation mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Microsoft Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Obtaining the software download. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Setting parameters for the device driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Using the CLI port and cable—known issues on Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Workaround . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
E SFP option for CNC ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Locate the SFP transceivers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Install an SFP transceiver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Verify component operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
F SAS fan-out cable option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Locate the SAS fan-out cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Install the SAS fan-out cable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

Figures

1 2U24 enclosure: front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2 2U12 enclosure: front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3 S3200/S2200 controller enclosure: rear panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4 S3200 CNC controller module face plate (FC or 10GbE iSCSI) . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
5 S3200 CNC controller module face plate (1 Gb RJ-45) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
6 S3200 SAS controller module face plate (HD mini-SAS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
7 S2200 CNC controller module face plate (FC or 10GbE iSCSI) . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
8 S2200 CNC controller module face plate (1 Gb RJ-45) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
9 S2200 SAS controller module face plate (HD mini-SAS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
10 Drive enclosure rear panel view (2U form factor) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
11 CompactFlash memory card . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
12 Cabling connections between a controller enclosure and one 2U drive enclosure . . . . . . . . . . . . . . . 25
13 Fault-tolerant cabling between a dual-controller enclosure and three 2U drive enclosures . . . . . . . . . . 26
14 Fault-tolerant cabling between a dual-controller enclosure and seven 2U drive enclosures . . . . . . . . . . 27
15 AC PSU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
16 AC power cord . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
17 Connecting hosts: S3200 direct attach—one server/one HBA/single path . . . . . . . . . . . . . . . . . . . . 34
18 Connecting hosts: S2200 direct attach—one server/one HBA/single path . . . . . . . . . . . . . . . . . . . . 34
19 Connecting hosts: S2200 direct attach—two servers/two HBAs/dual path (fan-out) . . . . . . . . . . . . 34
20 Connecting hosts: S3200 direct attach—one server/one HBA/dual path . . . . . . . . . . . . . . . . . . . . . 35
21 Connecting hosts: S2200 direct attach—one server/one HBA/dual path . . . . . . . . . . . . . . . . . . . . . 35
22 Connecting hosts: S3200 direct attach—two servers/one HBA per server/dual path . . . . . . . . . . . . . 36
23 Connecting hosts: S2200 direct attach—two servers/one HBA per server/dual path . . . . . . . . . . . . . 36
24 Connecting hosts: S2200 direct attach—four servers/one HBA per server/dual path (fan-out) . . . . . . . 37
25 Connecting hosts: S3200 direct attach—four servers/one HBA per server/dual path . . . . . . . . . . . . . 37
26 Connecting hosts: S3200 direct attach—four servers/one HBA per server/dual path . . . . . . . . . . . . . 37
27 Connecting hosts: S3200 switch attach—two servers/two switches . . . . . . . . . . . . . . . . . . . . . . . . . 38
28 Connecting hosts: S2200 switch attach—two servers/two switches . . . . . . . . . . . . . . . . . . . . . . . . . 38
29 Connecting hosts: S3200 switch attach—four servers/multiple switches/SAN fabric . . . . . . . . . . . . . 39
30 Connecting a USB cable to the CLI port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
31 Front panel enclosure bezel: 24-drive enclosure (2U24) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
32 Front panel enclosure bezel: 12-drive enclosure (2U12) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
33 Partial assembly showing bezel alignment with 2U24 chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
34 Partial assembly showing bezel alignment with 2U12 chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
35 LEDs: 2U24 enclosure front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
36 LEDs: 2U12 enclosure front panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
37 LEDs: Disk drive modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
38 S3200/S2200 controller enclosure: rear panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
39 LEDs: S3200 CNC controller module (FC and 10GbE SFPs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
40 LEDs: S3200 CNC controller module (1 Gb RJ-45 SFPs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
41 LEDs: S3200 SAS controller module (HD mini-SAS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
42 LEDs: S2200 CNC controller module (FC and 10GbE SFPs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
43 LEDs: S2200 CNC controller module (1 Gb RJ-45 SFPs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
44 LEDs: S2200 SAS controller module (HD mini-SAS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
45 LEDs: AC power supply unit — rear panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
46 LEDs: E1024/E1012 drive enclosure — rear panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
47 Rackmount enclosure dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
48 USB device connection — CLI port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
49 Install a qualified SFP option into an S3200 CNC controller module . . . . . . . . . . . . . . . . . . . . . . . . 80
50 Install a qualified SFP option into an S2200 CNC controller module . . . . . . . . . . . . . . . . . . . . . . . . 80
51 HD mini-SAS to mini-SAS fan-out cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
52 HD mini-SAS to HD mini-SAS fan-out cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

Tables

1 Related documents. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2 Document conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3 Installation checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4 Summary of cabling connections for S3200/S2200 enclosures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5 Terminal emulator display settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
6 Terminal emulator connection settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
7 Diagnostics LED status: Front panel “Fault/Service Required” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
8 Diagnostics LED status: Rear panel “CRU OK” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
9 Diagnostics LED status: Rear panel “Fault/Service Required” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
10 Diagnostics LED status: Disk drives (LFF and SFF modules) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
11 Diagnostics LED status: Disk drive fault status (LFF and SFF modules). . . . . . . . . . . . . . . . . . . . . . . . . . 50
12 Diagnostics LED status: Rear panel “Host Link Status” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
13 Diagnostics LED status: Rear panel “Expansion Port Status” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
14 Diagnostics LED status: Rear panel “Network Port Link Status” . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
15 Diagnostics LED status: Rear panel power supply “Input Power Source” . . . . . . . . . . . . . . . . . . . . . . . 52
16 Diagnostics LED status: Rear panel power supply “Voltage/Fan Fault/Service Required” . . . . . . . . . . . 52
17 Diagnostics LED status: Rear panel “Cache Status”. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
18 Power supply sensor descriptions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
19 Cooling fan sensor descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
20 Controller module temperature sensor descriptions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
21 Power supply temperature sensor descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .55
22 Voltage sensor descriptions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
23 LEDs: Disks in SFF and LFF enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
24 LEDs: Disk groups in SFF and LFF enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .61
25 Power requirements - AC Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
26 Rackmount controller enclosure weights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
27 Rackmount compatible drive enclosure weights (ordered separately) . . . . . . . . . . . . . . . . . . . . . . . . . 74
28 Operating environmental specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
29 Non-operating environmental specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .74
30 Supported terminal emulator applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
31 USB vendor and product identification codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .78

About this guide

Overview

This guide provides information about initial hardware setup for the Lenovo Storage™ S3200/S2200 enclosure products listed below:
• CNC (Converged Network Controller) controller enclosure:
  • Qualified Fibre Channel SFP option supporting (4/8/16 Gb)
  • Qualified Internet SCSI (10GbE) SFP option
  • Qualified Internet SCSI (1 Gb) Copper RJ-45 SFP option
• HD mini-SAS (12 Gb) controller enclosure
IMPORTANT: Product configuration characteristics
S3200/S2200 products use the same 2U24 and 2U12 chassis form factor.
S3200 enclosures provide four host ports per controller module.
S2200 enclosures provide two host ports per controller module.
S3200/S2200 models can be configured with one or two RAID canisters per enclosure.
Supported features vary between S3200 and S2200 as noted herein and within related documents.
The S3200/S2200 enclosures are designed to meet MIL-STD-810G (storage requirements) and European Telco requirements. The S3200/S2200 storage enclosures support a large form factor (LFF 12-disk) 2U chassis and a small form factor (SFF 24-disk) 2U chassis. These chassis form factors support controller enclosures and expansion enclosures.
The S3200/S2200 controller enclosures can optionally be cabled to supported drive enclosures for adding storage. S3200/S2200 storage enclosures can be equipped with single or dual RAID canisters; and they are equipped with two AC power supply modules.
Storage S3200/S2200 enclosures support virtual storage, which uses paged-storage technology. For virtual storage, a group of disks with an assigned RAID level is called a disk group.
IMPORTANT: These Lenovo Storage products are not intended to connect directly to Public Switched Telecommunications Networks.

Storage S3200/S2200 enclosure user interfaces

The S3200/S2200 enclosures support applications for configuring, monitoring, and managing the storage system. The web-based application GUI and the command-line interface are briefly described:
• Storage Management Console (SMC) is the web interface for the enclosures, providing access to all common management functions for virtual storage.
• The command-line interface (CLI) enables you to interact with the storage system using command syntax entered via the keyboard or scripting.
NOTE: For more information about enclosure user interfaces, see the following:
• Lenovo Storage Manager Guide or online help. The guide describes the Storage Management Console GUI.
• Lenovo Storage CLI Reference Guide

CNC ports used for host connection

Certain models use Converged Network Controller (CNC) technology, allowing you to select the desired host interface protocol from the available Fibre Channel (FC) or Internet SCSI (iSCSI) host interface protocols supported by the system. You can use the Command-line Interface (CLI) to set all controller module CNC ports to use one of these host interface protocols:
• 16 Gb FC
• 8 Gb FC
• 4 Gb FC
• 10GbE iSCSI
• 1 GbE iSCSI
Alternatively, for 4-port S3200 models, you can use the CLI to set CNC ports to support a combination of host interface protocols. When configuring a combination of host interface protocols, host ports 0 and 1 are set to FC (either both 16 Gbit/s or both 8 Gbit/s), and host ports 2 and 3 must be set to iSCSI (either both 10GbE or both 1 Gbit/s), provided the CNC ports use the qualified SFP connectors and cables required for supporting the selected host interface protocol.
The 2-port S2200 models do not support SFPs for multiple host interface protocols in combination. You must select a common host interface protocol and SFP for use in all CNC ports within the controller enclosure.
See CNC technology on page 30, S3200 CNC controller module — rear panel LEDs on page 63, and
S2200 CNC controller module — rear panel LEDs on page 66 for more information.
TIP: See the Storage Manager Guide for information about configuring CNC ports with host interface protocols of the same type or a combination of types.
IMPORTANT: CNC controller modules ship with CNC ports initially configured for FC. When connecting CNC ports to iSCSI hosts, you must use the CLI (not the SMC) to specify which ports will use iSCSI. It is best to do this before inserting the iSCSI SFPs into the CNC ports (see Change the CNC port mode on page 44 for instructions).
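For reference, a minimal CLI sketch of this change follows. The exact parameter values are assumptions based on the set host-port-mode command named in the installation checklist; confirm the command form and valid values in the CLI Reference Guide for your firmware release.

  # set host-port-mode iSCSI
  (sets all CNC ports to use iSCSI)

  # set host-port-mode FC-and-iSCSI
  (4-port S3200 models only: ports 0 and 1 use FC; ports 2 and 3 use iSCSI)

After the change, confirm the new protocol assignment (for example, with the show ports command, if available in your CLI) before inserting the matching SFPs and cabling the hosts.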

HD mini-SAS ports used for host connection

S3200 SAS models provide four high-density mini-SAS (HD mini-SAS) ports per controller module, and S2200 SAS models provide two HD mini-SAS ports per controller module. The HD mini-SAS host interface protocol uses the SFF-8644 external connector interface defined for SAS3.0 to support a link rate of 12 Gbit/s using the qualified connectors and cable options. See S3200 SAS controller module—rear panel
LEDs on page 65 and S2200 SAS controller module—rear panel LEDs on page 68 for more information.

Intended audience

This guide is intended for storage system administrators.

Prerequisites

Prerequisites for installing and using this product include knowledge of:
Servers and computer networks
Network administration
Storage system installation and configuration
Storage area network (SAN) management and server-attached storage
Fibre Channel (FC), Internet SCSI (iSCSI), Serial Attached SCSI (SAS), and Ethernet protocols

Related documentation

Table 1 Related documents
• Overview of product shipkit contents and setup tasks: Lenovo Storage S3200/S2200 Getting Started*
• Environmental notices, basic troubleshooting, and safety; multilingual warranty, service and support information and safety notices; and information about the safety, electronic emission, and environmental notices for your Lenovo product: Lenovo Documentation CD*, which includes the Environmental and Notices Guide, Basic Troubleshooting Guide, Rack Safety Information, Safety Information, Safety Labels, Lenovo Warranty, and Lenovo Important Notices
• Regulatory compliance and safety and disposal information: Lenovo Storage Product Regulatory Compliance and Safety*
• Using a rackmount bracket kit to install an enclosure into a rack: Lenovo Storage Rackmount Bracket Kit Installation*
• Using the web interface to configure and manage the product: Lenovo Storage Manager Guide
• Using the command-line interface (CLI) to configure and manage the product: Lenovo Storage CLI Reference Guide
• Event codes and recommended actions: Lenovo Storage Event Descriptions Reference Guide
• Identifying and installing or replacing customer-replaceable units (CRUs): Lenovo Storage CRU Installation and Replacement Guide
• Installation and usage instructions for the VSS hardware provider that works with Microsoft Windows Server, and the CAPI Proxy required by the hardware provider: Lenovo Storage VSS Hardware Provider Installation Guide
• Enhancements, known issues, and late-breaking information not included in product documentation: Lenovo Storage S3200/S2200 Release Notes

* Printed document included in product shipkit.

For additional information, contact support.lenovo.com, select Product Support, and navigate to Storage Products.

Document conventions and symbols

Table 2 Document conventions
• Blue text: Cross-reference links and e-mail addresses
• Blue, underlined text: Web site addresses
• Bold text: Key names; text typed into a GUI element, such as into a box; GUI elements that are clicked or selected, such as menu and list items, buttons, and check boxes
• Italic text: Text emphasis
• Monospace text: File and directory names; system output; code; text typed at the command-line
• Monospace, italic text: Code variables; command-line variables
• Monospace, bold text: Emphasis of file and directory names, system output, code, and text typed at the command-line
CAUTION: Indicates that failure to follow directions could result in damage to equipment or data.
IMPORTANT: Provides clarifying information or specific instructions.
NOTE: Provides additional information.
TIP: Provides helpful hints and shortcuts.
1 Components

Front panel components

Lenovo Storage S3200/S2200 supports 2U24 and 2U12 enclosures in dual-purpose fashion. The 2U24 chassis—configured with 24 2.5" small form factor (SFF) disks—is used as either a controller enclosure or expansion enclosure. The 2U12 chassis—configured with 12 3.5" large form factor (LFF) disks—is also used as either a controller enclosure or expansion enclosure.
Supported expansion enclosures are used for adding storage. The E1012 12-drive enclosure is the LFF drive enclosure used for storage expansion. The E1024 24-drive enclosure is the SFF drive enclosure used for storage expansion (also see Table 4 on page 24).
Enclosures support single or dual I/O modules (IOMs), and can be equipped with either two redundant AC or two redundant DC power supply modules (see Controller enclosure — rear panel layout on page 15).
NOTE: The term IOM can refer to either a RAID canister or an expansion canister.

24-drive enclosure front panel components

The geometric representation of 2U24 front panel components is identical for S2200 and S3200.
1 Enclosure ID LED
2 Disk drive status LED: Power/Activity
3 Disk drive status LED: Fault
4 2.5" disk or drive blank (typical 24 slots)
5 Enclosure status LED: Unit Locator
6 Enclosure status LED: Fault/Service Required
7 Enclosure status LED: CRU OK
8 Enclosure status LED: Temperature Fault
Note: Integers on disks indicate drive slot numbering sequence.
Figure 1 2U24 enclosure: front panel
TIP: See Enclosure bezel attachment and removal on page 56 and Figure 33 on page 57 (2U24).
NOTE: Front and rear panel LEDs for controller enclosures are described in LED descriptions.

12-drive enclosure front panel components

The geometric representation of 2U12 front panel components is identical for S2200 and S3200.
1 Enclosure ID LED
2 Disk drive status LED: Fault
3 Disk drive status LED: Power/Activity
4 3.5" disk or drive blank (typical 12 slots)
5 Enclosure status LED: Unit Locator
6 Enclosure status LED: Fault/Service Required
7 Enclosure status LED: CRU OK
8 Enclosure status LED: Temperature Fault
Note: Integers on disks indicate drive slot numbering sequence.
Figure 2 2U12 enclosure: front panel
TIP: See Enclosure bezel attachment and removal on page 56 and Figure 34 on page 57 (2U12).
NOTE: Front and rear panel LEDs for controller enclosures are described in LED descriptions.

Disk drives used in S3200/S2200 enclosures

S3200/S2200 enclosures support LFF/SFF Midline SAS, LFF/SFF Enterprise SAS, and SFF SSD disks. They also support LFF/SFF Midline SAS and LFF/SFF Enterprise self-encrypting disks that work with the Full Disk Encryption (FDE) feature. For information about creating disk groups and adding spares using different disk drive types, see the Lenovo Storage Manager Guide or online help. For S3200 enclosures, also see
FDE considerations on page 21.
Controller enclosure — rear panel layout
The diagram and table below display and identify important component items that comprise the rear panel layout of a Lenovo Storage S3200/S2200 controller enclosure. Dual 4-port CNC controllers are shown in this representative example of controller enclosure models included in the product series. The rear panel layout applies to the 2U24 and 2U12 form factors.
1 AC power supplies
2 Controller module A (see face plate detail figures)
3 Controller module B (see face plate detail figures)
Figure 3 S3200/S2200 controller enclosure: rear panel
A controller enclosure accommodates two redundant AC power supply CRUs (see two instances of callout No.1 above). The controller enclosure accommodates up to two controller module CRUs of the same type within the IOM slots (see callouts No.2 and No.3 above).
IMPORTANT: If the S3200/S2200 controller enclosure is configured with a single controller module, the controller module must be installed in the upper slot (see callout No.2 above), and an I/O module blank must be installed in the lower slot (see callout No.3 above). This configuration is required to allow sufficient air flow through the enclosure during operation.
The diagrams with tables that immediately follow provide descriptions for the different controller modules and power supply modules that can be installed into the rear panel of an S3200/S2200 controller enclosure. Showing controller modules and power supply modules separately from the enclosure enables improved clarity in identifying the component items called out in the diagrams and described in the tables.
Descriptions are also provided for optional drive enclosures supported by S3200/S2200 controller enclosures for expanding storage capacity.
NOTE: S3200/S2200 enclosures support hot-plug replacement of redundant controller modules, fans, power supplies, and expansion modules. Hot-add replacement of drive enclosures is also supported.
S3200 CNC controller module — rear panel components
Figure 4 shows CNC ports configured with SFPs supporting either 4/8/16 Gb FC or 10GbE iSCSI. The
SFPs look identical. Refer to the CNC LEDs that apply to the specific configuration of your CNC ports.
1 CNC ports used for host connection (see Install an SFP transceiver on page 80)
2 CLI port (USB - Type B) [see Appendix D]
3 Service port 2 (used by service personnel only)
4 Reserved for future use
5 Network port
6 Service port 1 (used by service personnel only)
7 Disabled button (used by engineering only; sticker shown covering the opening)
8 mini-SAS expansion port
Figure 4 S3200 CNC controller module face plate (FC or 10GbE iSCSI)
Figure 5 shows CNC ports configured with 1 Gb RJ-45 SFPs.
1 CNC ports used for host connection (see Install an SFP transceiver on page 80)
2 CLI port (USB - Type B) [see Appendix D]
3 Service port 2 (used by service personnel only)
4 Reserved for future use
5 Network port
6 Service port 1 (used by service personnel only)
7 Disabled button (used by engineering only; sticker shown covering the opening)
8 mini-SAS expansion port
Figure 5 S3200 CNC controller module face plate (1 Gb RJ-45)
NOTE: See CNC ports used for host connection on page 10 for more information about CNC technology. For
CNC port configuration, see the “Configuring host ports” topic within the Storage Manager Guide or online help.
S3200 SAS controller module — rear panel components
Figure 6 shows host interface ports configured with 12 Gbit/s HD mini-SAS (SFF-8644) connectors.
1 HD mini-SAS ports used for host connection
2 CLI port (USB - Type B) [see Appendix D]
3 Service port 2 (used by service personnel only)
4 Reserved for future use
5 Network port
6 Service port 1 (used by service personnel only)
7 Disabled button (used by engineering only; sticker shown covering the opening)
8 mini-SAS expansion port
Figure 6 S3200 SAS controller module face plate (HD mini-SAS)
S2200 CNC controller module — rear panel components
Figure 7 shows CNC ports configured with SFPs supporting either 4/8/16 Gb FC or 10GbE iSCSI. The
SFPs look identical. Refer to the CNC LEDs that apply to the specific configuration of your CNC ports.
1 CNC ports used for host connection (see Install an SFP transceiver on page 80)
2 CLI port (USB - Type B) [see Appendix D]
3 Service port 2 (used by service personnel only)
4 Reserved for future use
5 Network port
6 Service port 1 (used by service personnel only)
7 Disabled button (used by engineering only; sticker shown covering the opening)
8 mini-SAS expansion port
Figure 7 S2200 CNC controller module face plate (FC or 10GbE iSCSI)
Figure 8 shows CNC ports configured with 1 Gb RJ-45 SFPs.
1 CNC ports used for host connection (see Install an SFP transceiver on page 80)
2 CLI port (USB - Type B) [see Appendix D]
3 Service port 2 (used by service personnel only)
4 Reserved for future use
5 Network port
6 Service port 1 (used by service personnel only)
7 Disabled button (used by engineering only; sticker shown covering the opening)
8 mini-SAS expansion port
Figure 8 S2200 CNC controller module face plate (1 Gb RJ-45)
NOTE: See CNC ports used for host connection on page 10 for more information about CNC technology. For
CNC port configuration, see the “Configuring host ports” topic within the Storage Manager Guide or online help.
S2200 SAS controller module — rear panel components
Figure 9 shows host interface ports configured with 12 Gbit/s HD mini-SAS (SFF-8644) connectors.
1 HD mini-SAS ports used for host connection
2 CLI port (USB - Type B) [see Appendix D]
3 Service port 2 (used by service personnel only)
4 Reserved for future use
5 Network port
6 Service port 1 (used by service personnel only)
7 Disabled button (used by engineering only; sticker shown covering the opening)
8 mini-SAS expansion port
Figure 9 S2200 SAS controller module face plate (HD mini-SAS)

E1024/E1012 drive enclosure rear panel components

S3200/S2200 controller enclosures support SFF E1024 24-disk and LFF E1012 12-disk drive enclosures in the 2U form factor for expansion of storage capacity. These drive enclosures use mini-SAS (SFF-8088) connectors to facilitate backend SAS expansion. The rear panel view is common to both drive enclosures. See Cable requirements for storage enclosures on page 23 for cabling information.
1 Power supplies (AC shown)
2 Expansion module A
3 Expansion module B
4 Disabled button (used by engineering/test only)
5 Service port (used by service personnel only)
6 SAS In port
7 SAS Out port
Figure 10 Drive enclosure rear panel view (2U form factor)
NOTE: See Connecting the controller enclosure and drive enclosures on page 22 for more information.

Component installation and replacement

Installation and replacement of S3200/S2200 CRUs (customer-replaceable units) is addressed in the Lenovo Storage CRU Installation and Replacement Guide within the “Procedures” chapter.
CRU procedures facilitate replacement of a damaged chassis or chassis component:
Replacing a controller or expansion module
Replacing a disk drive module
Replacing a Fibre Channel transceiver
Replacing a 10GbE SFP+ transceiver
Replacing a 1 Gb SFP transceiver
Replacing a controller enclosure chassis

For additional information, contact support.lenovo.com, select Product Support, and navigate to Storage Products.

Cache

To enable faster data access from disk storage, the following types of caching are performed:
• Write-back or write-through caching. The controller writes user data into the cache memory in the controller module rather than directly to the disks. Later, when the storage system is either idle or aging—and continuing to receive new I/O data—the controller writes the data to the disks.
• Read-ahead caching. The controller detects sequential data access, reads ahead into the next sequence of data—based upon settings—and stores the data in the read-ahead cache. Then, if the next read access is for cached data, the controller immediately loads the data into the system memory, avoiding the latency of a disk access.
TIP: See the Storage Manager Guide for more information about cache options and settings.

CompactFlash

During a power loss or controller failure, data stored in cache is saved off to non-volatile memory (CompactFlash). The data is restored to cache, and then written to disk after the issue is corrected. To protect against writing incomplete data to disk, the image stored on the CompactFlash is verified before committing to disk.
The CompactFlash memory card is located at the midplane-facing end of the controller module as shown below. Do not remove the card; it is used for cache recovery only.
Figure 11 CompactFlash memory card

For additional information, contact support.lenovo.com, select Product Support, and navigate to Storage Products.
NOTE: In dual-controller configurations featuring one healthy partner controller, cache is duplicated
between the controllers (subject to volume write optimization setting).

Supercapacitor pack

To protect controller module cache in case of power failure, each controller enclosure model is equipped with supercapacitor technology, in conjunction with CompactFlash memory, built into each controller module to provide extended cache memory backup time. The supercapacitor pack provides energy for backing up unwritten data in the write cache to the CompactFlash, in the event of a power failure. Unwritten data in CompactFlash memory is automatically committed to disk media when power is restored. In the event of power failure, while cache is maintained by the supercapacitor pack, the Cache Status LED flashes at a rate of 1/10 second on and 9/10 second off.
2 Installing the enclosures

Installation checklist

The following table outlines the steps required to install the enclosures, and initially configure and provision the storage system. To ensure successful installation, perform the tasks in the order presented.
Table 3 Installation checklist
1. Install the controller enclosure and optional drive enclosures in the rack, and attach the enclosure bezel.¹
   See the rack-mount bracket kit installation instructions pertaining to your enclosure. Also refer to the bezel attachment instructions for your enclosure.
2. Connect controller enclosure and optional drive enclosures.
   See Connecting the controller enclosure and drive enclosures on page 22.
3. Connect power cords.
   See Powering on/powering off on page 28.
4. Test enclosure connectivity.
   See Testing enclosure connections on page 28.
5. Install required host software.
   See Host system requirements on page 30.
6. Connect hosts.²
   See Connecting the enclosure to hosts on page 30.
7. Connect remote management hosts.²
   See Connecting a management host on the network on page 40.
8. Obtain IP values and set network port IP properties on the controller enclosure.
   See Obtaining IP values on page 41. For USB CLI port and cable use, see Appendix D. (A CLI sketch follows this checklist.)
9. Use the CLI to set the host interface protocol.
   See CNC technology on page 30. The CNC models allow you to set the host interface protocol for your qualified SFP option. Use the set host-port-mode command as described in the CLI Reference Guide or online help.
10. Perform initial configuration tasks³:
   Sign in to the web-browser interface to access the application GUI. See "Getting Started" in the web-posted Lenovo Storage Manager Guide.
   Verify firmware revisions and update if necessary. See Updating firmware. Also see the same topic in the Storage Manager Guide.
   Initially configure and provision the system using the Storage Management Console. See "Configuring the System" and "Provisioning the System" topics in the Storage Manager Guide or online help.

¹ See the Lenovo Storage CRU Installation and Replacement Guide for illustrations and narrative describing attachment of enclosure bezels to 2U24 and 2U12 chassis. See also Enclosure bezel attachment and removal on page 56.
² For more about hosts, see the "About hosts" topic in the Storage Manager Guide.
³ The Storage Management Console is introduced in Accessing the SMC on page 45. See the Storage Manager Guide or online help for additional information.
NOTE: Additional installation notes:
• Controller modules within the same enclosure must be of the same type.
• For optimal performance, do not mix 6 Gb and 3 Gb disk drives within the same enclosure.

FDE considerations

The Full Disk Encryption feature available via the management interfaces requires use of self-encrypting drives (SED) which are also referred to as FDE-capable disk drive modules. When installing FDE-capable disk drive modules, follow the same procedures for installing disks that do not support FDE. The exception occurs when you move FDE-capable disk drive modules for one or more disk groups to a different system, which requires additional steps.
Lenovo Storage S3200/S2200 Setup Guide 21
Page 22
The procedures for using the FDE feature, such as securing the system, viewing disk FDE status, and clearing and importing keys are performed using the web-based SMC application or CLI commands (see the Storage Manager Guide or the CLI Reference Guide for more information).
NOTE: When moving FDE-capable disk drive modules for a disk group, stop I/O to any disk groups before removing the disk drive modules. Follow the “Removing a disk drive module” and “Installing a disk drive module” procedures within the CRU Installation and Replacement Guide. Import the keys for the disks so that the disk content becomes available.
While replacing or installing FDE-capable disk drive modules, consider the following:
• If you are installing FDE-capable disk drive modules that do not have keys into a secure system, the system will automatically secure the disks after installation. Your system will associate its existing key with the disks, and you can transparently use the newly-secured disks.
• If the FDE-capable disk drive modules originate from another secure system, and contain that system’s key, the new disks will have the Secure, Locked status. The data will be unavailable until you enter the passphrase for the other system to import its key. Your system will then recognize the metadata of the disk groups and incorporate it. The disks will have the status of Secure, Unlocked and their contents will be available (see the example command sequence after this list):
  • To view the FDE status of disks, use the SMC or the show fde-state CLI command.
  • To import a key and incorporate the foreign disks, use the SMC or the set fde-import-key CLI command.
NOTE: If the FDE-capable disks contain multiple keys, you will need to perform the key importing process for each key to make the content associated with each key become available.
• If you do not want to retain the disks’ data, you can repurpose the disks. Repurposing disks deletes all disk data, including lock keys, and associates the current system’s lock key with the disks. To repurpose disks, use the SMC or the set disk CLI command.
• You need not secure your system to use FDE-capable disks. If you install all FDE-capable disks into a system that is not secure, they will function exactly like disks that do not support FDE. As such, the data they contain will not be encrypted. If you decide later that you want to secure the system, all of the disks must be FDE-capable.
• If you install a disk module that does not support FDE into a secure system, the disk will have the Unusable status and will be unavailable for use.
• If you are re-installing your FDE-capable disk drive modules as part of the process to replace the chassis-and-midplane CRU, you must insert the original disks and re-enter their FDE passphrase (see the CRU Installation and Replacement Guide for more information).
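The import workflow referenced in the list above can be sketched as a minimal CLI sequence using only the commands named in this section. The passphrase parameter shown for set fde-import-key is an assumption; confirm the exact syntax in the CLI Reference Guide before use:

# show fde-state                                            (view the FDE status of the system and disks)
# set fde-import-key passphrase <other-system-passphrase>   (import the foreign lock key; parameter name assumed)
# show fde-state                                            (verify that the disks now report Secure, Unlocked)

Repeat the import step for each key if the disks contain multiple keys, as noted above.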

Connecting the controller enclosure and drive enclosures

Lenovo Storage S3200/S2200 controller enclosures support these maximum configurations:
• S3200 enclosures—available in 24-drive (2.5") or 12-drive (3.5") chassis—support up to eight enclosures (including the controller enclosure), or a maximum of 192 disk drives.
• S2200 enclosures—available in 24-drive (2.5") or 12-drive (3.5") chassis—support up to four enclosures (including the controller enclosure), or a maximum of 96 disk drives.
The S3200/S2200 enclosures support both straight-through and reverse SAS cabling. Reverse cabling allows any drive enclosure to fail—or be removed—while maintaining access to other enclosures. Fault tolerance and performance requirements determine whether to optimize the configuration for high availability or high performance when cabling. The S3200/S2200 controller modules support both 3-Gbps and 6-Gbps internal disk drive speeds together with 3-Gbps and 6-Gbps expander link speeds.
22 Installing the enclosures
Page 23
CAUTION: Some 6-Gbps disks might not consistently support a 6-Gbps transfer rate. If this happens, the system automatically adjusts transfers to those disks to 3 Gbps, increasing reliability and reducing error messages with little impact on system performance. This rate adjustment persists until the controller is restarted or power-cycled.
The S3200/S2200 controller enclosures support compatible Lenovo drive enclosures for adding storage. Supported enclosure form factors include traditional 2U models (2U12 and 2U24). A summary overview of drive enclosures supported by controller enclosures is provided herein.
Cabling diagrams in this section show fault-tolerant cabling patterns. Controller and expansion modules are identified by <enclosure-ID><controller-ID>. When connecting multiple drive enclosures, use reverse cabling to ensure the highest level of fault tolerance, enabling controllers to access remaining drive enclosures if a drive enclosure fails.
For example, the illustration on the left in Figure 13 on page 26 shows reverse cabling, wherein controller 0A (i.e., enclosure-ID = 0; controller-ID = Able) is connected to expansion module 1A, with a chain of connections cascading down (blue). Controller 0B is connected to the lower expansion module (B) of the last drive enclosure in the chain, with connections moving in the opposite direction (green). Cabling examples are provided on the following pages.

Connecting the S3200/S2200 controller to the 2U12 drive enclosure

The LFF E1012 12-drive enclosure, supporting 6 Gb internal disk drive and expander link speeds, can be attached to a S3200/S2200 controller enclosure using supported mini-SAS to mini-SAS cables of 0.5 m (1.64') to 2 m (6.56') length (see Figure 12 on page 25).

Connecting the S3200/S2200 controller to the 2U24 drive enclosure

The SFF E1024 24-drive enclosure, supporting 6 Gb internal disk drive and expander link speeds, can be attached to a S3200/S2200 controller enclosure using supported mini-SAS to mini-SAS cables of 0.5 m (1.64') to 2 m (6.56') length (see Figure 12 on page 25).

Connecting the S3200/S2200 controller to mixed model drive enclosures

The S3200/S2200 controllers support cabling of 6 Gb SAS link-rate SFF and LFF expansion modules—in mixed model fashion—as shown in Figure 13 on page 26. The simplified rear-panel views of the E1024 and E1012 are identical.

Cable requirements for storage enclosures

The S3200/S2200 enclosures support 6-Gbps or 3-Gbps expansion port data rates. Use only Lenovo Storage or OEM-qualified cables, and observe the following guidelines (see Table 4 on page 24):
• When installing SAS cables to expansion modules, use only supported mini-SAS x4 cables with SFF-8088 connectors supporting your 6 Gb application.
• Qualified mini-SAS to mini-SAS 0.5 m (1.64') cables are used to connect cascaded enclosures in the rack. The “mini-SAS to mini-SAS” cable designator connotes SFF-8088 to SFF-8088 connectors.
• The maximum expansion cable length allowed in any configuration is 2 m (6.56').
• Cables required, if not included, must be separately purchased.
• When adding more than two drive enclosures, you may need to purchase additional cables, depending upon the number of enclosures and the cabling method used.
• You may need to order additional or longer cables when reverse-cabling a fault-tolerant configuration (see Figure 14 on page 27).
• Use only Lenovo Storage or OEM-qualified cables for host connection:
  • Qualified Fibre Channel SFP and cable options
  • Qualified 10GbE iSCSI SFP and cable options
  • Qualified 1 Gb RJ-45 SFP and cable options
Lenovo Storage S3200/S2200 Setup Guide 23
Page 24
  • Qualified HD mini-SAS cable and fan-out cable options supporting SFF-8644 and SFF-8088 host connection (also see HD mini-SAS host connection on page 33):
    • A qualified SFF-8644 to SFF-8644 cable option is used for connecting to a 12 Gbit/s enabled host.
    • A qualified SFF-8644 to SFF-8088 cable option is used for connecting to a 6 Gbit/s enabled host.
    • A qualified bifurcated SFF-8644 to SFF-8644 fan-out cable option is used for connecting to a 12 Gbit/s enabled host.
    • A qualified bifurcated SFF-8644 to SFF-8088 fan-out cable option is used for connecting to a 6 Gbit/s enabled host (see the NOTE below).
NOTE: Using fan-out cables instead of standard cables will double the number of hosts that can be attached to a single system. Use of fan-out cables will halve the maximum bandwidth available to each host, but overall bandwidth available to all hosts is unchanged.
See HD mini-SAS host connection on page 33 and SAS fan-out cable option for more information about bifurcated SAS cables.
TIP: Requirements for cabling S3200/S2200 controller enclosures and supported drive enclosures are summarized in Table 4 on page 24.
Table 4 on page 24 summarizes key characteristics of controller enclosures and compatible drive
(expansion) enclosures relative to cabling, including: the cable type needed for attaching one specific enclosure model to another specific enclosure model; internal disk drive speeds; number of disks of given size (SFF or LFF) supported per enclosure model; and SAS expander data rates. Enclosure form factor and NEBS compliance information are also provided.
Table 4 Summary of cabling connections for S3200/S2200 enclosures
Model                Form  Host connect             NEBS    Cable to SFF 24-disk chassis  Cable to LFF 12-disk chassis
S3200/S2200 (1, 2)   2U24  FC (8/16 Gb) SFP option  Note 4  mini-SAS to mini-SAS          mini-SAS to mini-SAS
S3200/S2200 (1, 2)   2U12  FC (8/16 Gb) SFP option  Note 4  mini-SAS to mini-SAS          mini-SAS to mini-SAS
S3200/S2200 (1, 2)   2U24  10GbE iSCSI SFP option   Note 4  mini-SAS to mini-SAS          mini-SAS to mini-SAS
S3200/S2200 (1, 2)   2U12  10GbE iSCSI SFP option   Note 4  mini-SAS to mini-SAS          mini-SAS to mini-SAS
S3200/S2200 (1, 2)   2U24  1 Gb iSCSI SFP option    Note 4  mini-SAS to mini-SAS          mini-SAS to mini-SAS
S3200/S2200 (1, 2)   2U12  1 Gb iSCSI SFP option    Note 4  mini-SAS to mini-SAS          mini-SAS to mini-SAS
S3200/S2200 (1, 3)   2U24  HD mini-SAS connector    Note 4  mini-SAS to mini-SAS          mini-SAS to mini-SAS
S3200/S2200 (1, 3)   2U12  HD mini-SAS connector    Note 4  mini-SAS to mini-SAS          mini-SAS to mini-SAS
E1024                2U24  n/a                      Note 4  mini-SAS to mini-SAS          mini-SAS to mini-SAS
E1012                2U12  n/a                      Note 4  mini-SAS to mini-SAS          mini-SAS to mini-SAS
Enclosure chassis designators:
2U24: Enclosure measuring two rack units high, providing 24 SFF (2.5") sledded disk drive modules.
2U12: Enclosure measuring two rack units high, providing 12 LFF (3.5") sledded disk drive modules.
See Physical requirements on page 73 for more information about 2U24 and 2U12 enclosures.
Notes:
1 These compatible product models feature 6 Gbit/s internal disk and SAS expander link speeds.
2 See CNC technology on page 30 for information about locating and installing qualified SFP options into CNC ports.
3 See 12 Gb mini-SAS ports on page 32 for information about host connection using SFF-8644 high-density mini-SAS connectors.
4 NEBS compliance is a future consideration for S3200/S2200 enclosures.
5 The S3200 and S2200 enclosures support single or dual IOMs.
24 Installing the enclosures
Page 25
Summary of drive enclosure cabling illustrations
The following illustrations show both reverse and straight-through cabling examples featuring S3200/S2200 controller enclosures and compatible E1024 (2U24), and E1012 (2U12) drive enclosures. The rear-panel views of the E1024 and E1012 are identical. All storage enclosures use mini-SAS connectors for expansion.
NOTE: The S3200/S2200 controller enclosures and compatible drive enclosures support 6 Gb SFF-8088 mini-SAS connectors for adding storage. See Table 4 for SAS cable requirements.
NOTE: For clarity, the schematic diagrams show only relevant details such as face plate outlines and expansion ports. For detailed illustrations, see Controller enclosure — rear panel layout on page 15. Also see the controller module face plate illustrations that follow the rear panel layout.
Figure 12 Cabling connections between a controller enclosure and one 2U drive enclosure
The figures above show examples of an S3200 or S2200 controller enclosure cabled to a single drive enclosure. Supported drive enclosures are ordered separately.
NOTE: The E1024 and E1012 drive enclosures can be configured with single or dual expansion canisters.
Within Figure 12, the illustration on the left shows cabling of enclosures equipped with a single IOM. The empty IOM slot in each of the enclosures is covered with an IOM blank to ensure sufficient air flow during enclosure operation. The illustration on the right shows cabling of enclosures equipped with dual IOMs. The remaining illustrations in the section feature enclosures equipped with dual IOMs.
IMPORTANT: If the S3200/S2200 controller enclosure is configured with a single controller module, it must be installed in the upper slot, and an I/O module blank must be installed in the lower slot (shown above). This configuration is required to allow sufficient air flow through the enclosure during operation.
See the “Replacing a controller or expansion module” topic within the Lenovo CRU Installation and Replacement Guide for additional information.
Lenovo Storage S3200/S2200 Setup Guide 25
Page 26
NOTE: Controller enclosures and optional/cascaded drive enclosures:
Figure 13 on page 26 shows maximum supported enclosures for an S2200 array.
Figure 14 on page 27 shows maximum supported enclosures for an S3200 array.
Figure 13 Fault-tolerant cabling between a dual-controller enclosure and three 2U drive enclosures
The diagram at left (above) shows reverse cabling of a S3200/S2200 dual-controller enclosure and 2U drive enclosures configured with dual-expansion modules. Controller module 0A is connected to expansion module 1A, with a chain of connections cascading down (blue). Controller module 0B is connected to the lower expansion module (3B), of the last expansion enclosure, with connections moving in the opposite direction (green). Reverse cabling allows any expansion enclosure to fail—or be removed—while maintaining access to other enclosures.
The diagram at right (above) shows the same storage components connected using straight-through cabling. Using this method, if an expansion enclosure fails, the enclosures that follow the failed enclosure in the chain are no longer accessible until the failed enclosure is repaired or replaced.
The 2U drive enclosures shown in Figure 13 can either be of the same type (all E1024s or all E1012s) or they can be a mixture of these models. Given that supported drive enclosure models use 6 Gb SAS link-rate and SAS2.0 expanders, they can be ordered in desired sequence within the array, following the controller enclosure.
Refer to these diagrams when cabling multiple compatible drive enclosures together with the S3200 or S2200 controller enclosure.
IMPORTANT: Guidelines for stacking enclosures in the rack are provided in the rackmount bracket kit installation sheet provided with your product.
26 Installing the enclosures
Page 27
Figure 14 Fault-tolerant cabling between a dual-controller enclosure and seven 2U drive enclosures
The diagrams above show dual-controller enclosures cabled to drive enclosures featuring dual-expansion modules. Cabling logic is explained in the narrative supporting Figure 13 on page 26.
Lenovo Storage S3200/S2200 Setup Guide 27
Page 28

Testing enclosure connections

Power cord connect
Power cycling procedures vary according to the type of power supply unit (PSU) provided with the enclosure. Some enclosure models are equipped with PSUs possessing power switches; whereas S3200/S2200 controller enclosures use PSUs that have no power switch.
The following section, Powering on/powering off, describes power cycling procedures relative to PSUs installed within enclosures. Once the power-on sequence succeeds, the storage system is ready to be connected to hosts as described in Connecting the enclosure to hosts on page 30.

Powering on/powering off

Before powering on the enclosure for the first time:
• Install all disk drives in the enclosure so the controller can identify and configure them at power-up.
• Connect the cables and power cords to the enclosure as described herein.
NOTE: Newer 2U AC PSUs do not have power switches. Switchless PSUs power on when connected to a power source, and power off when disconnected.
Generally, when powering up, make sure to power up the enclosures and associated data host in the following order:
• Drive enclosures first. This ensures that the disks in the drive enclosure have enough time to completely spin up before being scanned by the controller modules within the controller enclosure. While enclosures power up, their LEDs blink. After the LEDs stop blinking—if no LEDs on the front and back of the enclosure are amber—the power-on sequence is complete, and no faults have been detected. See LED descriptions on page 56 for descriptions of LED behavior.
• Controller enclosure next. Depending upon the number and type of disks in the system, it may take several minutes for the system to become ready.
• Data host last (if powered down for maintenance purposes).

AC PSU

TIP: Generally, when powering off, you will reverse the order of steps used for powering on.
Controller and drive enclosures configured with switchless PSUs rely on the power cord for power cycling. Connecting the cord from the PSU power cord connector to the appropriate power source facilitates power on; whereas disconnecting the cord from the power source facilitates power off.
Figure 15 AC PSU
28 Installing the enclosures
Page 29
To power on the system:
1. Plug the power cord into the power cord connector on the back of the drive enclosure. Plug the other
end of the power cord into the rack power source (see Figure 15 and Figure 16). Wait several seconds to allow the disks to spin up.
Repeat this sequence for each switchless PSU within each drive enclosure.
2. Plug the power cord into the power cord connector on the back of the controller enclosure. Plug the
other end of the power cord into the rack power source (see Figure 15 and Figure 16). Repeat the sequence for the controller enclosure’s other switchless PSU.
Figure 16 AC power cord
To power off the system:
1. Stop all I/O from hosts to the system (see Stopping I/O on page 48).
2. Shut down both controllers using either method described below:
• Use the Storage Management Console to shut down both controllers as described in the online help and Lenovo Storage Manager Guide.
Proceed to step 3.
• Use the command-line interface (CLI) to shut down both controllers, as described in the Lenovo
Storage CLI Reference Guide.
3. Disconnect the power cord’s male plug from the power source.
4. Disconnect the power cord’s female plug from the power cord connector on the PSU.
Lenovo Storage S3200/S2200 Setup Guide 29
Page 30

3 Connecting hosts

Host system requirements

Hosts connected to a Lenovo Storage S3200/S2200 controller enclosure must meet the following requirements:
Depending on your system configuration, host operating systems may require that multipathing is
supported. If fault tolerance is required, then multipathing software may be required. Host-based multipath
software should be used in any configuration where two logical paths between the host and any storage volume may exist at the same time. This would include most configurations where there are multiple connections to the host or multiple connections between a switch and the storage.
• Use native Microsoft MPIO DSM support with Windows Server 2008 and Windows Server 2012. Use either the Server Manager or the command-line interface (mpclaim CLI tool) to perform the installation.
See the following web sites for information about using native Microsoft MPIO DSM:
http://support.microsoft.com
http://technet.microsoft.com (search the site for “multipath I/O overview”)
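The bullet above mentions the mpclaim CLI tool. The following is a minimal sketch of the command-line approach, assuming the MPIO feature is already installed on the Windows Server host and that you want to claim all eligible devices rather than a specific vendor/product string (device-specific claiming and DSM details are covered in the Microsoft documentation listed above):

mpclaim -s -d          (list the devices currently claimed by MPIO)
mpclaim -r -i -a ""    (add MPIO support for all eligible devices; the -r switch reboots the server)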

Cabling considerations

Common cabling configurations address hosts, controller enclosures, drive enclosures, and switches. Host interface ports on S3200/S2200 controller enclosures can connect to respective hosts via direct cable connection or switch attach.

Connecting the enclosure to hosts

A host identifies an external port to which the storage system is attached. The external port may be a port in an I/O adapter (such as an FC HBA) in a server. Cable connections vary depending on configuration. This section describes host interface protocols supported by S3200/S2200 controller enclosures, while showing a few common cabling configurations.
NOTE: S3200/S2200 controllers use Unified LUN Presentation (ULP): a controller feature enabling a host to access mapped volumes through any controller host port.
ULP can show all LUNs through all host ports on both controllers, and the interconnect information is managed by the controller firmware. ULP appears to the host as an active-active storage system, allowing the host to select any available path to access the LUN, regardless of disk group ownership.
TIP: See “Using the Configuration Wizard” in the Storage Manager Guide to initially configure the system or change system configuration settings (such as Configuring host ports).

CNC technology

Certain Lenovo Storage S3200/S2200 models use Converged Network Controller technology, allowing you to select the desired host interface protocol(s) from the available FC or iSCSI host interface protocols supported by the system. The small form-factor pluggable (SFP transceiver or SFP) connectors used in CNC ports are further described in the subsections below. Also see CNC ports used for host connection on page 10 for more information concerning use of CNC ports.
30 Connecting hosts
Page 31
NOTE: Controller modules are not shipped with pre-installed SFPs. Within your product kit, you will need to locate the qualified SFP options, and install them into the CNC ports. See Install an SFP transceiver on page 80.
IMPORTANT: Use the set host-port-mode CLI command to set the host interface protocol for CNC ports using qualified SFP options. S3200/S2200 models ship with CNC ports configured for FC. When connecting CNC ports to iSCSI hosts, you must use the CLI (not the SMC) to specify which ports will use iSCSI. It is best to do this before inserting the iSCSI SFPs into the CNC ports (see Change the CNC port
mode on page 44 for instructions).
Fibre Channel protocol
S3200/S2200 FC controller enclosures support one or two controller modules using the Fibre Channel interface protocol for host connection. The S3200 FC controller module provides four host ports, and the S2200 FC controller module provides two host ports. The CNC ports are designed for use with an FC SFP supporting data rates up to 16 Gbit/s.
The controllers support Fibre Channel Arbitrated Loop (public or private) or point-to-point topologies. Loop protocol can be used in a physical loop or for direct connection between two devices. Point-to-point protocol is used to connect to a fabric switch. Point-to-point protocol can also be used for direct connection, and it is the only option supporting direct connection at 16 Gbit/s. See the set host-parameters command within the CLI Reference Guide for command syntax and details about parameter settings relative to supported link speeds. Fibre Channel ports are used for attachment to FC hosts directly, or through a switch used for the FC traffic. The host computer must support FC and optionally, multipath I/O.
TIP: Use the SMC Configuration Wizard to set FC port speed. Within the Storage Manager Guide, see “Configuring host ports.” Use the set host-parameters CLI command to set FC port options, and use the show ports CLI command to view information about host ports.
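For example, a hypothetical CLI session for reviewing and adjusting FC host port settings might look like the following. The port identifiers and the speed value shown with set host-parameters are illustrative assumptions only; use the exact parameter syntax documented in the CLI Reference Guide:

# show ports                                  (view current host port types, speeds, and status)
# set host-parameters speed 16g ports A1,B1   (illustrative only: set two FC ports to 16 Gbit/s)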
10GbE iSCSI protocol
S3200/S2200 10GbE iSCSI controller enclosures support one or two controller modules using the Internet SCSI interface protocol for host connection. The S3200 10GbE iSCSI controller module provides four host ports, and the S2200 10GbE iSCSI controller module provides two host ports. The CNC ports are designed for use with a 10GbE iSCSI SFP supporting data rates up to 10 Gbit/s, using either one-way or mutual CHAP (Challenge-Handshake Authentication Protocol). The 10GbE iSCSI ports are used for attachment to 10GbE iSCSI hosts directly, or through a switch used for the 10GbE iSCSI traffic. The host computer must support Ethernet, iSCSI, and optionally, multipath I/O.
TIP: See the “Configuring CHAP” topic in the Storage Management Guide.
TIP: Use the SMC Configuration Wizard to set iSCSI port options. Within the Storage Manager Guide,
see “Configuring host ports.” Use the set host-parameters CLI command to set iSCSI port options, and use the show ports CLI command to view information about host ports.
1 Gb iSCSI protocol
S3200/S2200 1 Gb iSCSI controller enclosures support one or two controller modules using the Internet SCSI interface protocol for host port connection. The S3200 1 Gb iSCSI controller module provides four host ports, and the S2200 1 Gb iSCSI controller module provides two host ports. The CNC ports are designed for use with an RJ-45 SFP supporting data rates up to 1 Gbit/s, using either one-way or mutual CHAP (Challenge-Handshake Authentication Protocol). The 1 Gb iSCSI ports are used for attachment to 1
Lenovo Storage S3200/S2200 Setup Guide 31
Page 32
Gb iSCSI hosts directly, or through a switch used for the 1 Gb iSCSI traffic. The host computer must support Ethernet, iSCSI, and optionally, multipath I/O.
TIP: See the “Configuring CHAP” topic in the Storage Management Guide.
TIP: Use the SMC Configuration Wizard to set iSCSI port options. Within the Storage Manager Guide,
see “Configuring host ports.” Use the set host-parameters CLI command to set iSCSI port options, and use the show ports CLI command to view information about host ports.

HD mini-SAS technology

S3200/S2200 SAS models use mini-SAS SFF-8644 interface protocol for host connection.
12 Gb mini-SAS ports
S3200/S2200 12 Gb SAS controller enclosures support one or two controller modules. The S3200 SAS controller module provides four SFF-8644 HD mini-SAS host ports, and the S2200 SAS controller module provides two SFF-8644 HD mini-SAS host ports. These host ports support data rates up to 12 Gbit/s. HD mini-SAS host ports are used for attachment to SAS hosts directly, or via a switch. The host computer must support SAS, and optionally, multipath I/O. Use a qualified SFF-8644 to SFF-8644 cable option when connecting to a 12 Gbit/s host. Use a qualified SFF-8644 to SFF-8088 option when connecting to a supported 6 Gbit/s host.
S3200 host ports support standard cables; whereas S2200 host ports can be configured via management interfaces to use standard cables (see SAS cables with single connector at each end on page 33) or fan-out cables (see SAS cables with fan-out connectors on page 33).

Connecting direct attach configurations

S3200 controller enclosures support up to eight direct-connect server connections, four per controller module. S2200 controller enclosures support up to four direct-connect server connections, two per controller module. Server connections are non-redundant. Connect appropriate cables from the server’s HBAs to the controller module’s host ports as described below, and shown in the following illustrations.
Fibre Channel host connection
To connect S3200/S2200 FC controller modules supporting (4/8/16 Gb) FC host interface ports to a server HBA or switch—using the controller’s CNC ports—select a qualified FC SFP option.
Qualified options support cable lengths of 1 m (3.28'), 2 m (6.56'), 5 m (16.40'), 15 m (49.21'), 30 m (98.43'), and 50 m (164.04') for OM4 multimode optical cables and OM3 multimode FC cables, respectively. A 0.5 m (1.64') cable length is also supported for OM3.
10GbE iSCSI host connection
To connect S3200/S2200 10GbE iSCSI controller modules supporting 10GbE iSCSI host interface ports to a server HBA or switch—using the controller’s CNC ports—select a qualified 10GbE SFP option.
Qualified options support cable lengths of 1 m (3.28'), 2 m (6.56'), 5 m (16.40'), 15 m (49.21'), 30 m (98.43'), and 50 m (164.04') for OM4 multimode optical cables and OM3 multimode cables, respectively. A 0.5 m (1.64') cable length is also supported for OM3.
1 Gb iSCSI host connection
To connect S3200/S2200 1 Gb iSCSI controller modules supporting 1 Gb iSCSI host interface ports to a server HBA or switch—using the controller’s CNC ports—select a qualified 1 Gb RJ-45 copper SFP option supporting (CAT5-E minimum) Ethernet cables of the same lengths specified for 10GbE iSCSI above.
32 Connecting hosts
Page 33
HD mini-SAS host connection
To connect S3200/S2200 SAS controller modules supporting SAS host interface ports to a server HBA or switch —using the controller’s SFF-8644 dual HD mini-SAS host ports—select a qualified HD mini-SAS cable option.
S3200 host ports support standard cables. For S2200 host ports, management interfaces distinguish between standard (dual cable with single connector at each end), and fan-out SAS cables. The fan-out SAS cable is comprised of a single SFF-8644 connector that branches into two cable segments, each of which is terminated by a connector. The terminating connectors attach to the host or switch, and are either both type SFF-8644 or SFF-8088. The storage system must be cabled using either standard cables or fan-out cables: a mixture of cable types is not supported. Qualified cable options for each of these SAS cable categories are described herein.
SAS cables with single connector at each end
A qualified SFF-8644 to SFF-8644 cable option is used for connecting to a 12 Gbit/s enabled host; whereas a qualified SFF-8644 to SFF-8088 cable option is used for connecting to a 6 Gbit/s host. Qualified SFF-8644 to SFF-8644 options support cable lengths of 0.5 m (1.64'), 1 m (3.28'), 2 m (6.56'), and 4 m (13.12'). Qualified SFF-8644 to SFF-8088 options support cable lengths of 1 m (3.28'), 2 m (6.56'), 3 m (9.84'), and 4 m (13.12').
SAS cables with fan-out connectors
Use of a bifurcated fan-out cable doubles the number of server host ports that can be connected to an HD mini-SAS controller module. A qualified SFF-8644 to SFF-8644 fan-out cable option is used for attaching to a 12 Gbit/s enabled host; whereas a qualified SFF-8644 to SFF-8088 fan-out cable option is used for attaching to a 6 Gbit/s host. Qualified fan-out cable options support lengths of 1 m (3.28'), 2 m (6.56'), and 4 m (13.12').
IMPORTANT: Before attaching a fan-out cable, make sure to update firmware for the SAS HBA—and switch if applicable—for devices that will be attached to the fan-out cable.
See the Storage Manager Guide or CLI Reference Guide for more information about the fan-out setting and changing of host-interface settings for controller modules.
NOTE: Supported qualified cable options for host connection are subject to change.
NOTE: The diagrams that follow use a single representation for each CNC cabling example. This is due
to the fact that the CNC port locations and labeling are identical for each of the three possible interchangeable SFPs supported by the system.
Within each cabling connection category, the HD mini-SAS model is shown beneath the CNC model.
Single-controller configurations
A single-controller configuration provides no redundancy in the event of controller failure. If the controller fails, the host loses access to the storage data. This configuration is suitable only in environments where high availability is not required, and loss of access to data can be tolerated until failure recovery actions are completed.
Lenovo Storage S3200/S2200 Setup Guide 33
Page 34
One server/one HBA/single path
Figure 17 Connecting hosts: S3200 direct attach—one server/one HBA/single path
34 Connecting hosts
Figure 18 Connecting hosts: S2200 direct attach—one server/one HBA/single path
Figure 18 shows host connection of S2200 SAS models using standard cables (bottom diagram); whereas Figure 19 shows host connection using fan-out cables.
Figure 19 Connecting hosts: S2200 direct attach—two servers/two HBAs/dual path (fan-out)
The five illustrations above show an IOM blank covering the bottom IOM slot (0B) on the controller enclosure. The remaining illustrations in the section feature enclosures equipped with dual IOMs.
IMPORTANT: If the S3200/S2200 controller enclosure is configured with a single controller module, the controller module must be installed in the upper slot, and an I/O module blank must be installed in the lower slot (shown above). This configuration is required to allow sufficient air flow through the enclosure during operation.
Page 35
See the “Replacing a controller or expansion module” topic within the CRU Installation and Replacement Guide for additional information about installing IOMs.
Dual-controller configurations
A dual-controller configuration improves application availability because in the event of a controller failure, the affected controller fails over to the partner controller with little interruption to data flow. A failed controller can be replaced without the need to shut down the storage system.
In a dual-controller system, hosts use LUN-identifying information from both controllers to determine that up to four paths are available to a given storage volume. Assuming MPIO software is installed, a host can use any available data path to access a volume owned by either controller. The path providing the best performance is through host ports on the volume’s owning controller. Both controllers share one set of 1,024 LUNs (0-1,023) for use in mapping volumes to hosts (see “ULP” in the Storage Manager Guide).
The illustrations below show dual-controller configurations for S3200/S2200 controller enclosures equipped with either CNC ports or 12 Gb HD mini-SAS host ports.
One server/one HBA/dual path
Figure 20 Connecting hosts: S3200 direct attach—one server/one HBA/dual path
Figure 21 Connecting hosts: S2200 direct attach—one server/one HBA/dual path
Lenovo Storage S3200/S2200 Setup Guide 35
Page 36
Two servers/one HBA per server/dual path
Figure 22 Connecting hosts: S3200 direct attach—two servers/one HBA per server/dual path
36 Connecting hosts
Figure 23 Connecting hosts: S2200 direct attach—two servers/one HBA per server/dual path
Figure 23 shows host connection of S2200 SAS models using standard cables (bottom diagram); whereas Figure 24 on page 37 shows host connection using fan-out cables.
Page 37
Figure 24 Connecting hosts: S2200 direct attach—four servers/one HBA per server/dual path (fan-out)
Four servers/one HBA per server/dual path
Figure 25 Connecting hosts: S3200 direct attach—four servers/one HBA per server/dual path
Figure 26 Connecting hosts: S3200 direct attach—four servers/one HBA per server/dual path
Lenovo Storage S3200/S2200 Setup Guide 37
Page 38

Connecting switch attach configurations

A switch attach solution—or SAN—places a switch between the servers and the controller enclosures. Using switches, a SAN shares a storage system among multiple servers, reducing the number of storage systems required for a particular environment. Using switches increases the number of servers that can be connected to the storage system. A S3200/S2200 controller enclosure supports 64 hosts.
Dual-controller configuration
Two servers/two switches
Figure 27 Connecting hosts: S3200 switch attach—two servers/two switches
Figure 28 Connecting hosts: S2200 switch attach—two servers/two switches
38 Connecting hosts
Page 39
Four servers/multiple switches/SAN fabric
Figure 29 Connecting hosts: S3200 switch attach—four servers/multiple switches/SAN fabric
S3200/S2200 controller enclosure iSCSI considerations
When installing an S3200/S2200 iSCSI controller enclosure, use at least three ports per server—two for the storage LAN, and one or more for the public LAN(s)—to ensure that the storage network is isolated from the other networks. The storage LAN is the network connecting the servers—via switch attach—to the controller enclosure (see Figure 27 on page 38 and Figure 29).
NOTE: These considerations apply to iSCSI and combination FC-and-iSCSI controller solutions only.
IP address scheme for the controller pair — two iSCSI ports per controller
The S3200 can use port 2 of each controller as one failover pair, and port 3 of each controller as a second failover pair for iSCSI traffic. Port 2 of each controller must be in the same subnet, and port 3 of each controller must be in a second subnet. See Controller enclosure — rear panel layout on page 15 for iSCSI port numbering.
For example (with a netmask of 255.255.255.0):
Controller A port 2: 10.10.10.100
Controller A port 3: 10.11.10.120
Controller B port 2: 10.10.10.110
Controller B port 3: 10.11.10.130
The S3200/S2200 can use port 0 of each controller as one failover pair, and port 1 of each controller as a second failover pair. Port 0 of each controller must be in the same subnet, and port 1 of each controller must be in a second subnet. See Controller enclosure — rear panel layout on page 15 for iSCSI port numbering.
For example (with a netmask of 255.255.255.0):
Controller A port 0: 10.10.10.100
Controller A port 1: 10.11.10.120
Controller B port 0: 10.10.10.110
Controller B port 1: 10.11.10.130
Lenovo Storage S3200/S2200 Setup Guide 39
Page 40
IP address scheme for the controller pair — four iSCSI ports per controller
When all CNC ports are configured for iSCSI, the scheme is similar to the one described for two-ports above. See Controller enclosure — rear panel layout on page 15 for iSCSI port numbering.
For example (with a netmask of 255.255.255.0):
Controller A port 0: 10.10.10.100
Controller A port 1: 10.11.10.120
Controller A port 2: 10.10.10.110
Controller A port 3: 10.11.10.130
Controller B port 0: 10.10.10.140
Controller B port 1: 10.11.10.150
Controller B port 2: 10.10.10.160
Controller B port 3: 10.11.10.170
In addition to setting the port-specific options described above, you can view settings using the GUI.
If using the SMC, in the System topic, select Action > Set Up Host Ports.
The Host Ports Settings panel opens, allowing you to access host connection settings.

Connecting a management host on the network

The management host directly manages storage systems out-of-band over an Ethernet network.
1. Connect an RJ-45 Ethernet cable to the network port on each controller.
2. Connect the other end of each Ethernet cable to a network that your management host can access
(preferably on the same subnet).
NOTE: Connections to this device must be made with shielded cables—grounded at both ends—with metallic RFI/EMI connector hoods, in order to maintain compliance with FCC Rules and Regulations. See the Product Regulatory Compliance and Safety document.

Updating firmware

After installing the hardware and powering on the storage system components for the first time, verify that the controller modules, expansion modules, and disk drives are using the current firmware release.
Using the Storage Management Console, in the System topic, select Action > Update Firmware.
The Update Firmware panel opens. The Update Controller Module tab shows versions of firmware components currently installed in each controller.
NOTE: The SMC does not provide a check-box for enabling or disabling Partner Firmware Update for the partner controller. To enable or disable the setting, use the set advanced-settings command, and set the partner-firmware-upgrade parameter. See the CLI Reference Guide for more information about command parameter syntax.
Optionally, you can update firmware using FTP (File Transfer Protocol) as described in the Storage Manager Guide.
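As a sketch of the CLI approach described in this note, the sequence might resemble the following; the disabled value keyword and the show advanced-settings verification command are assumptions based on the command family named above, so check the set advanced-settings topic in the CLI Reference Guide for the exact syntax:

# set advanced-settings partner-firmware-upgrade disabled   (turn PFU off; value keyword assumed)
# show advanced-settings                                    (verify the current setting)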
IMPORTANT: See the “Updating firmware” topic in the Storage Manager Guide before performing a firmware update. Partner Firmware Update (PFU) is enabled by default on S3200/S2200 systems.
40 Connecting hosts
Page 41

Obtaining IP values

You can configure addressing parameters for each controller module’s network port. You can set static IP values or use DHCP. DHCP is enabled by default on S3200/S2200 systems.
TIP: See the “Configuring network ports” topic in the Storage Manager Guide.

Setting network port IP addresses using DHCP

In DHCP mode, network port IP address, subnet mask, and gateway values are obtained from a DHCP server if one is available. If a DHCP server is unavailable, current addressing is unchanged. You must have some means of determining what addresses have been assigned, such as the list of bindings on the DHCP server.
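If you already have serial CLI access (see the following subsection), you can also confirm the values a DHCP server has assigned by running the show network-parameters command described later in this chapter:

# show network-parameters    (displays the IP address, subnet mask, and gateway for each controller)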

Setting network port IP addresses using the CLI port and cable

If you did not use DHCP to set network port IP values, set them manually as described below. If you are using the USB CLI port and cable, you will need to enable the port for communication (also see Using the
CLI port and cable—known issues on Windows on page 79).
Network ports on controller module A and controller module B are configured with the following default values:
• Network port IP address: 10.0.0.2 (controller A), 10.0.0.3 (controller B)
•IP subnet mask: 255.255.255.0
• Gateway IP address: 10.0.0.1
If the default IP addresses are not compatible with your network, you must set an IP address for each network port using the CLI embedded in each controller module. The CLI enables you to access the system using the USB (Universal Serial Bus) communication interface and terminal emulation software.
NOTE: If you are using the mini USB CLI port and cable, see Appendix D - USB device connection:
Windows customers should download and install the device driver as described in Obtaining the
software download on page 78.
Linux customers should prepare the USB port as described in Setting parameters for the device driver on
page 79.
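On a Linux host, once the USB port has been prepared as described above, the CLI session is typically opened with a terminal program such as screen. The device name below is an assumption and may differ on your system; the connection settings match those listed in Table 6 later in this procedure:

screen /dev/ttyACM0 115200    (connect at 115,200 baud; 8 data bits, no parity, 1 stop bit are the defaults)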
Use the CLI commands described in the steps below to set the IP address for the network port on each controller module.
Once new IP addresses are set, you can change them as needed using the SMC. Be sure to change the IP address before changing the network configuration. See Accessing the SMC on page 45 for more information concerning the web-based storage management application.
1. From your network administrator, obtain an IP address, subnet mask, and gateway address for
controller A and another for controller B. Record these IP addresses so you can specify them whenever you manage the controllers using the
SMC or the CLI.
2. Use the provided USB cable to connect controller A to a USB port on a host computer. The USB mini 5
male connector plugs into the CLI port as shown in Figure 30 on page 42 (generic S3200 controller module shown).
Lenovo Storage S3200/S2200 Setup Guide 41
Page 42
Figure 30 Connecting a USB cable to the CLI port
3. Enable the CLI port for subsequent communication:
• Linux customers should enter the command syntax provided in Setting parameters for the device
driver on page 79.
• Windows customers should locate the downloaded device driver described in Obtaining the
software download on page 78, and follow the instructions provided for proper installation.
4. Start and configure a terminal emulator, such as HyperTerminal or VT-100, using the display settings in
Table 5 and the connection settings in Table 6 (also, see the note following this procedure).
Table 5 Terminal emulator display settings
Parameter Value
Terminal emulation mode VT-100 or ANSI (for color support)
Font Terminal
Translations None
Columns 80
Table 6 Terminal emulator connection settings
Parameter Value
Connector COM3 (for example) (see notes 1 and 2)
Baud rate 115,200
Data bits 8
Parity None
Stop bits 1
Flow control None
1 Your server or laptop configuration determines which COM port is used for Disk Array USB Port.
2 Verify the appropriate COM port for use with the CLI.
5. In the terminal emulator, connect to controller A.
6. Press Enter to display the CLI prompt (#).
The CLI displays the system version, MC version, and login prompt:
a. At the login prompt, enter the default user manage.
b. Enter the default password !manage.
If the default user or password—or both—have been changed for security reasons, enter the secure login credentials instead of the defaults shown above.
42 Connecting hosts
Page 43
7. At the prompt, enter the following command to set the values you obtained in step 1 for each Network
port, first for controller A, and then for controller B:
set network-parameters ip address netmask netmask gateway gateway controller a|b
where:
address is the IP address of the controller
netmask is the subnet mask
gateway is the IP address of the subnet router
a|b specifies the controller whose network parameters you are setting. For example:
# set network-parameters ip 192.168.0.10 netmask 255.255.255.0 gateway
192.168.0.1 controller a
# set network-parameters ip 192.168.0.11 netmask 255.255.255.0 gateway
192.168.0.1 controller b
8. Enter the following command to verify the new IP addresses:
show network-parameters
Network parameters, including the IP address, subnet mask, and gateway address are displayed for each controller.
9. Use the ping command to verify connectivity to the gateway address.
For example:
# ping 192.168.0.1
Info: Pinging 192.168.0.1 with 4 packets.
Success: Command completed successfully. - The remote computer responded with 4 packets. (2011-12-19 10:20:37)
10. In the host computer's command window, type the following command to verify connectivity, first for
controller A and then for controller B:
ping controller-IP-address
If you cannot access your system for at least three minutes after changing the IP address, you might need to restart the Management Controller(s) using the serial CLI.
When you restart a Management Controller, communication with it is temporarily lost until it successfully restarts.
Enter the following command to restart the Management Controller in both controllers:
restart mc both
IMPORTANT: When configuring an iSCSI system or a system using a combination of FC and iSCSI SFPs, do not restart the Management Controller or exit the terminal emulator session until configuring the CNC ports as described in Change the CNC port mode on page 44.
11. When you are done using the CLI, exit the emulator.
12. Retain the IP addresses (recorded in step 1) for accessing and managing the controllers using the SMC or the CLI.
NOTE: Using HyperTerminal with the CLI on a Microsoft Windows host:
On a host computer connected to a controller module’s mini-USB CLI port, incorrect command syntax in a HyperTerminal session can cause the CLI to hang. To avoid this problem, use correct syntax, use a different terminal emulator, or connect to the CLI using telnet rather than the mini-USB cable.
Be sure to close the HyperTerminal session before shutting down the controller or restarting its Management Controller. Otherwise, the host’s CPU cycles may rise unacceptably.
Lenovo Storage S3200/S2200 Setup Guide 43
Page 44
If communication with the CLI is disrupted when using an out-of-band cable connection, communication can sometimes be restored by disconnecting and reattaching the mini-USB CLI cable as described in step 2 and Figure 30 on page 42.

Change the CNC port mode

This subsection applies to S3200/S2200 CNC models only. While the USB cable is still connected and the terminal emulator session remains active, perform the following steps to change the CNC port mode from the default setting (FC), to either iSCSI or FC-and-iSCSI used in combination.
When using FC SFPs and iSCSI SFPs in combination (applies to S3200 only), host ports 0 and 1 are set to FC (either both 16 Gbit/s or both 8 Gbit/s), and host ports 2 and 3 must be set to iSCSI (either both 10GbE or both 1 Gbit/s).
Set CNC port mode to iSCSI
To set the CNC port mode for use with iSCSI SFPs, run the following command at the command prompt:
set host-port-mode iSCSI
The command notifies you that it will change host port configuration, stop I/O, and restart both controllers. When asked if you want to continue, enter y to change the host port mode to use iSCSI SFPs.
Once the set host-port-mode command completes, it will notify you that the specified system host port mode was set, and that the command completed successfully.
Continue with step 11 of Setting network port IP addresses using the CLI port and cable on page 41.
Set CNC port mode to FC and iSCSI
To set the CNC port mode for use with FC SFPs and iSCSI SFPs in combination (applies to S3200 only), run the following command at the command prompt:
set host-port-mode FC-and-iSCSI
The command notifies you that it will change host port configuration, stop I/O, and restart both controllers. When asked if you want to continue, enter y to change the host port mode to use FC and iSCSI SFPs.
Once the set host-port-mode command completes, it will notify you that the specified system host port mode was set, and that the command completed successfully.
Continue with step 11 of Setting network port IP addresses using the CLI port and cable on page 41.
Configure the system
NOTE:
• After using either of the CLI command sequences shown above, you may see events stating that the SFPs installed are not compatible with the protocol set for the host ports. The new host port mode setting will be synchronized with the qualified SFP option once the controller modules restart.
• See Appendix E—SFP option for CNC ports for instructions about locating and installing your qualified SFP transceivers within the CNC ports.
After changing the CNC port mode, you can invoke the SMC and use the Configuration Wizard to initially configure the system, or change system configuration settings as described in the Storage Manager Guide and Basic operation.
44 Connecting hosts
Page 45

4 Basic operation

Verify that you have successfully completed the sequential “Installation Checklist” instructions in Table 3 on page 21. Once you have successfully completed steps 1 through 8 therein, you can access the management interfaces using your web-browser, to complete the system setup.

Accessing the SMC

Upon completing the hardware installation, you can access the controller module’s web-based management interface (the SMC) to configure, monitor, and manage the storage system. Invoke your web browser, and enter the IP address of the controller module’s network port in the address field (obtained during completion of “Installation Checklist” step 8), then press Enter. To sign-in to the SMC, use the default user name manage and password !manage. If the default user or password—or both—have been changed for security reasons, enter the secure login credentials instead of the defaults shown above. This brief Sign In discussion assumes proper web browser setup.
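For example, if controller A still uses the factory-default network address described in Setting network port IP addresses using the CLI port and cable on page 41, the address entered in the browser might be similar to the following; substitute the address you recorded during Installation Checklist step 8, and note that whether the interface answers on HTTPS, HTTP, or both depends on your web-interface security settings:

https://10.0.0.2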
IMPORTANT: For detailed information on accessing and using the SMC, see the “Getting Started” section in the web-posted Storage Manager Guide.
In addition to summarizing the processes to configure and provision a new system for the first time—using the wizards—the Getting Started section provides instructions for signing in to the SMC, introduces key system concepts, addresses browser setup, and provides tips for using the main window and the help window.
TIP: After signing-in to the SMC, you can use online help as an alternative to consulting the Storage Management Guide.

Configuring and provisioning the storage system

Once you have familiarized yourself with the SMC GUI, use the interface to configure and provision the storage system. Refer to the following topics within the Storage Management Guide or online help:
Getting started
Configuring the system
Provisioning the system
IMPORTANT: If the system is used in a VMware environment, set the system’s Missing LUN Response
option to use its Illegal Request setting. To do so, see either the configuration topic “Changing the missing LUN response” in the Storage Manager Guide or the command topic “set-advanced-settings” in the CLI Reference Guide.
Lenovo Storage S3200/S2200 Setup Guide 45
Page 46
5 Troubleshooting

USB CLI port connection

Lenovo Storage S3200/S2200 controllers feature a CLI port employing a mini-USB Type B form factor. If you encounter problems communicating with the port after cabling your computer to the USB device, you may need to either download a device driver (Windows), or set appropriate parameters via an operating system command (Linux). See Appendix D for more information.

Fault isolation methodology

S3200/S2200 storage systems provide many ways to isolate faults. This section presents the basic methodology used to locate faults within a storage system, and to identify the pertinent CRUs (Customer Replaceable Units) affected.
As noted in Basic operation on page 45, use the SMC to configure and provision the system upon completing the hardware installation. As part of this process, configure and enable event notification so the system will notify you when a problem occurs that is at or above the configured severity (see “Using the Configuration Wizard > Configuring event notification” within the Storage Manager Guide). With event notification configured and enabled, you can follow the recommended actions in the notification message to resolve the problem, as further discussed in the options presented below.

Basic steps

The basic fault isolation steps are listed below:
Gather fault information, including using system LEDs
(see Gather fault information on page 47)
Determine where in the system the fault is occurring
(see Determine where the fault is occurring on page 47)
Review event logs
(see Review the event logs on page 47)
If required, isolate the fault to a data path component or configuration
(see Isolate the fault on page 48)

Options available for performing basic steps

When performing fault isolation and troubleshooting steps, select the option or options that best suit your site environment. The four options described below are not mutually exclusive; you can use more than one of them. You can use the SMC to check the health icons/values for the system and its components to ensure that everything is okay, or to drill down to a problem component. If you discover a problem, the SMC and the CLI provide recommended-action text online. Options for performing basic steps are listed according to frequency of use:
Use the SMC
Use the CLI
Monitor event notification
View the enclosure LEDs
Use the SMC
The SMC uses health icons to show OK, Degraded, Fault, or Unknown status for the system and its components. The SMC enables you to monitor the health of the system and its components. If any component has a problem, the system health will be Degraded, Fault, or Unknown. Use the web application’s GUI to drill down to find each component that has a problem, and follow actions in the component Health Recommendations field to resolve the problem.


Page 47
Use the CLI
As an alternative to using the SMC, you can run the show system command in the CLI to view the health of the system and its components. If any component has a problem, the system health will be Degraded, Fault, or Unknown, and those components will be listed as Unhealthy Components. Follow the recommended actions in the component Health Recommendation field to resolve the problem.
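For example, you might enter the following command at the CLI and review the reported health and any components listed as Unhealthy Components; the exact output fields vary by firmware release, so see the CLI Reference Guide for details:
show system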
Monitor event notification
With event notification configured and enabled, you can view event logs to monitor the health of the system and its components. If a message tells you to check whether an event has been logged, or to view information about an event in the log, you can do so using the SMC or the CLI. Using the SMC, you would view the event log and then click on the event message to see detail about that event. Using the CLI, you would run the show events detail command (with additional parameters to filter the output) to see the detail for an event.
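For example, the first command below shows the event log with full detail, and the second limits the output to recent events; the last parameter is shown only as an illustration of a filtering parameter, so confirm the exact syntax in the CLI Reference Guide:
show events detail
show events last 20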
View the enclosure LEDs
You can view the LEDs on the hardware (while referring to LED descriptions for your enclosure model) to identify component status. If a problem prevents access to the SMC or the CLI, this is the only option available. However, monitoring/management is often done at a management console using storage management interfaces, rather than relying on line-of-sight to LEDs of racked hardware components.

Performing basic steps

You can use any of the available options described above in performing the basic steps comprising the fault isolation methodology.
Gather fault information
When a fault occurs, it is important to gather as much information as possible. Doing so will help you determine the correct action needed to remedy the fault.
Begin by reviewing the reported fault:
Is the fault related to an internal data path or an external data path?
Is the fault related to a hardware component such as a disk drive module, controller module, or power
supply unit?
By isolating the fault to one of the components within the storage system, you will be able to determine the necessary corrective action more quickly.
Determine where the fault is occurring
Once you have an understanding of the reported fault, review the enclosure LEDs. The enclosure LEDs are designed to immediately alert users of any system faults, and might be what alerted the user to a fault in the first place.
When a fault occurs, the Fault ID status LED on an enclosure’s right ear illuminates (see the diagram pertaining to your product’s front panel components on page 14). Check the LEDs on the back of the enclosure to narrow the fault to a CRU, connection, or both. The LEDs also help you identify the location of a CRU reporting a fault.
Use the SMC to verify any faults found while viewing the LEDs. The SMC is also a good tool to use in determining where the fault is occurring if the LEDs cannot be viewed due to the location of the system. This web-application provides you with a visual representation of the system and where the fault is occurring. The SMC can also provide more detailed information about CRUs, data, and faults.
Review the event logs
The event logs record all system events. Each event has a numeric code that identifies the type of event that occurred, and has one of the following severities:
Critical. A failure occurred that may cause a controller to shut down. Correct the problem immediately.
Error. A failure occurred that may affect data integrity or system stability. Correct the problem as soon
as possible.
Page 48
Warning. A problem occurred that may affect system stability, but not data integrity. Evaluate the
problem and correct it if necessary.
Informational. A configuration or state change occurred, or a problem occurred that the system
corrected. No immediate action is required.
It is very important to review the event logs, not only to identify the fault, but also to search for events that might have caused the fault to occur. For example, a host could lose connectivity to a disk group if a user changes channel settings without taking into consideration the storage resources assigned to it. In addition, the type of fault can help you isolate the problem to either hardware or software.
Isolate the fault
Occasionally, it might become necessary to isolate a fault. This is particularly true with data paths, due to the number of components comprising the data path. For example, if a host-side data error occurs, it could be caused by any of the components in the data path: controller module, cable, switch, or data host.

If the enclosure does not initialize

It may take up to two minutes for all enclosures to initialize. If an enclosure does not initialize:
Perform a rescan
Power cycle the system
Make sure the power cord is properly connected, and check the power source to which it is connected
Check the event log for errors

Correcting enclosure IDs

When installing a system with drive enclosures attached, the enclosure IDs might not agree with the physical cabling order. This is because the controller might have been previously attached to enclosures in a different configuration, and it attempts to preserve the previous enclosure IDs, if possible. To correct this condition, make sure that both controllers are up, and perform a rescan using the SMC or the CLI. This will reorder the enclosures, but can take up to two minutes for the enclosure IDs to be corrected.
To perform a rescan using the CLI, type the following command:
rescan
To rescan using the SMC:
1. Verify that both controllers are operating normally.
2. Do one of the following:
• Point to the System tab and select Rescan Disk Channels.
• In the System topic, select Action > Rescan Disk Channels.
3. Click Rescan.
NOTE: The enclosure ID reordering action applies only to Dual Controller mode. If only one controller is
available, due to either a Single Controller configuration or a controller failure, a manual rescan will not reorder the drive enclosure IDs.

Stopping I/O

When troubleshooting disk drive and connectivity faults, stop I/O to the affected disk groups from all hosts as a data protection precaution. As an additional data protection precaution, it is helpful to conduct regularly scheduled backups of your data.
IMPORTANT: Stopping I/O to a disk group is a host-side task, and falls outside the scope of this document.
Page 49
When on-site, you can verify that there is no I/O activity by briefly monitoring the system LEDs; however, when accessing the storage system remotely, this is not possible. Remotely, you can use the show disk-group-statistics command to determine if input and output has stopped. Perform these steps:
1. Using the CLI, run the show disk-group-statistics command.
The Number of Reads and Number of Writes outputs show the number of these operations that have occurred since the statistic was last reset, or since the controller was restarted. Record the numbers displayed.
2. Run the show disk-group-statistics command a second time.
This provides you with a specific window of time (the interval between requesting the statistics) to determine whether data is being written to or read from the disk group. Record the numbers displayed.
3. To determine if any reads or writes occur during this interval, subtract the set of numbers you recorded
in step 1 from the numbers you recorded in step 2.
• If the resulting difference is zero, then I/O has stopped.
• If the resulting difference is not zero, a host is still reading from or writing to this disk group. Continue to stop I/O from hosts, and repeat step 1 and step 2 until the difference in step 3 is zero.
NOTE: See the CLI Reference Guide for additional information. Optionally, you can use the SMC to monitor IOPS and MB/s.
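A worked example of steps 1 through 3 follows. The disk group name and counter values are hypothetical, and the output is abbreviated to the two counters of interest:
show disk-group-statistics        (first sample)
dg01   Number of Reads: 125000   Number of Writes: 86000
show disk-group-statistics        (second sample, a short time later)
dg01   Number of Reads: 125000   Number of Writes: 86000
Reads: 125000 - 125000 = 0; writes: 86000 - 86000 = 0. Both differences are zero, so I/O to disk group dg01 has stopped.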

Diagnostic steps

This section describes possible reasons and actions to take when an LED indicates a fault condition during initial system setup. See Appendix A – LED descriptions for descriptions of all LED statuses.
NOTE: Once event notification is configured and enabled using the SMC, you can view event logs to monitor the health of the system and its components using the GUI.
In addition to monitoring LEDs via line-of-sight observation of the racked hardware components when performing diagnostic steps, you can also monitor the health of the system and its components using the management interfaces previously discussed. Bear this in mind when reviewing the Actions column in the following diagnostics tables, and when reviewing the step procedures provided later in this chapter.

Is the enclosure front panel Fault/Service Required LED amber?

Answer Possible reasons Actions
No System functioning properly. No action required.
Yes A fault condition exists/occurred.
If installing an I/O module CRU, the module has gone online and likely failed its self-test.
Check the LEDs on the back of the controller to
narrow the fault to a CRU, connection, or both.
Check the event log for specific information
regarding the fault; follow any Recommended Actions.
If installing an IOM CRU, try removing and
reinstalling the new IOM, and check the event log for errors.
If the above actions do not resolve the fault,
isolate the fault, and contact Lenovo for assistance. Replacement may be necessary.
Table 7 Diagnostics LED status: Front panel “Fault/Service Required”
Page 50

Is the controller back panel CRU OK LED off?

Answer: No (blinking)
Possible reasons: System functioning properly; the system is booting.
Actions: No action required. Wait for the system to boot.

Answer: Yes
Possible reasons: The controller module is not powered on, or the controller module has failed.
Actions: Check that the controller module is fully inserted and latched in place, and that the enclosure is powered on. Check the event log for specific information regarding the failure.
Table 8 Diagnostics LED status: Rear panel “CRU OK”

Is the controller back panel Fault/Service Required LED amber?

Answer Possible reasons Actions
No System functioning properly. No action required.
Yes (blinking) One of the following errors
occurred:
Hardware-controlled power-up error
Cache flush error
Cache self-refresh error
Restart this controller from the other controller
using the SMC or the CLI.
If the above action does not resolve the fault,
remove the controller module and reinsert it.
If the above action does not resolve the fault,
contact Lenovo for assistance. It may be necessary to replace the controller module.
Table 9 Diagnostics LED status: Rear panel “Fault/Service Required”

Are both disk drive module LEDs off?

Answer: Yes
Possible reasons: There is no power; the drive is offline; or the drive is not configured.
Actions: Check that the drive is fully inserted and latched in place, and that the enclosure is powered on.
Table 10 Diagnostics LED status: Disk drives (LFF and SFF modules)
NOTE: See Disk drives used in S3200/S2200 enclosures on page 15.

Is the disk drive module Fault LED amber?

Answer: Yes, and the online/activity LED is off.
Possible reasons: The disk drive is offline. An event message may have been received for this device.
Actions: Check the event log for specific information regarding the fault. Isolate the fault. Contact Lenovo for assistance.

Answer: Yes, and the online/activity LED is blinking.
Possible reasons: The disk drive is active, but an event message may have been received for this device.
Actions: Check the event log for specific information regarding the fault. Isolate the fault. Contact Lenovo for assistance.
Table 11 Diagnostics LED status: Disk drive fault status (LFF and SFF modules)
NOTE: See FDE considerations on page 21 for S3200 enclosures.
Page 51

Is a connected host port Host Link Status LED on?

Answer Possible reasons Actions
Yes System functioning properly. No action required (see Link LED note: page 68).
No The link is down. Check cable connections and reseat if necessary.
Inspect cable for damage.
Swap cables to determine if fault is caused by a
defective cable. Replace cable if necessary.
Verify that the switch, if any, is operating properly. If
possible, test with another port.
Verify that the HBA is fully seated, and that the PCI slot
is powered on and operational.
In the SMC, review event logs for indicators of a
specific fault in a host data path component.
Contact Lenovo for assistance.
See Isolating a host-side connection fault on page 53.
Table 12 Diagnostics LED status: Rear panel “Host Link Status”

Is a connected port Expansion Port Status LED on?

Answer Possible reasons Actions
Yes System functioning properly. No action required.
No The link is down. Check cable connections and reseat if necessary.
Inspect cable for damage. Replace cable if necessary.
Swap cables to determine if fault is caused by a
defective cable. Replace cable if necessary.
In the SMC, review the event logs for indicators of a
specific fault in a host data path component.
Contact Lenovo for assistance.
See Isolating a controller module expansion port
connection fault on page 53.
Table 13 Diagnostics LED status: Rear panel “Expansion Port Status”

Is a connected port’s Network Port link status LED on?

Answer: Yes
Possible reasons: System functioning properly.
Actions: No action required.

Answer: No
Possible reasons: The link is down.
Actions: Use standard networking troubleshooting procedures to isolate faults on the network. Contact Lenovo for assistance.
Table 14 Diagnostics LED status: Rear panel “Network Port Link Status”
Page 52

Is the power supply Input Power Source LED off?

Answer Possible reasons Actions
No System functioning properly. No action required.
Yes The power supply is not receiving
adequate power.
Verify that the power cord is properly connected, and check
the power source to which it connects.
Check that the power supply CRU is firmly locked into
position.
Check the event log for specific information regarding the
fault.
If the above action does not resolve the fault, isolate the
fault, and contact Lenovo for assistance.
Table 15 Diagnostics LED status: Rear panel power supply “Input Power Source”

Is the Voltage/Fan Fault/Service Required LED amber?

Answer Possible reasons Actions
No System functioning properly. No action required.
Yes The power supply unit or a fan is
operating at an unacceptable voltage/RPM level, or has failed.
When isolating faults in the power supply, remember that the fans in both modules receive power through a common bus on the midplane, so if a power supply unit fails, the fans continue to operate normally.
Verify that the power supply CRU is firmly locked into
position.
Verify that the power cable is connected to a power source.
Verify that the power cable is connected to the enclosure’s
power supply unit.
Table 16 Diagnostics LED status: Rear panel power supply “Voltage/Fan Fault/Service Required”
For additional information, contact support.lenovo.com, select Product Support, and navigate to Storage Products.

Controller failure in a single-controller configuration

Cache memory is flushed to CompactFlash in the case of a controller failure or power loss. During the write to CompactFlash process, only the components needed to write the cache to the CompactFlash are powered by the supercapacitor. This process typically takes 60 seconds per 1 Gbyte of cache. After the cache is copied to CompactFlash, the remaining power left in the supercapacitor is used to refresh the cache memory. While the cache is being maintained by the supercapacitor, the Cache Status LED flashes at a rate of 1/10 second on and 9/10 second off.

If the controller has failed or does not start, is the Cache Status LED on/blinking?

Answer: No, the Cache LED status is off, and the controller does not boot.
Actions: If the problem persists, replace the controller module.

Answer: No, the Cache Status LED is off, and the controller boots.
Actions: The system has flushed data to disks. If the problem persists, replace the controller module.

Answer: Yes, at a strobe 1:10 rate - 1 Hz, and the controller does not boot.
Actions: You may need to replace the controller module.

Answer: Yes, at a strobe 1:10 rate - 1 Hz, and the controller boots.
Actions: The system is flushing data to CompactFlash. If the problem persists, replace the controller module.

Answer: Yes, at a blink 1:1 rate - 1 Hz, and the controller does not boot.
Actions: You may need to replace the controller module.

Answer: Yes, at a blink 1:1 rate - 1 Hz, and the controller boots.
Actions: The system is in self-refresh mode. If the problem persists, replace the controller module.
Table 17 Diagnostics LED status: Rear panel “Cache Status”
NOTE: See also Cache Status LED details on page 69.

Isolating a host-side connection fault

For additional information, contact support.lenovo.com, select Product Support, and navigate to Storage Products.

Host-side connection troubleshooting featuring CNC ports

For additional information, contact support.lenovo.com, select Product Support, and navigate to Storage Products.

Host-side connection troubleshooting featuring SAS host ports

For additional information, contact support.lenovo.com, select Product Support, and navigate to Storage Products.

Isolating a controller module expansion port connection fault

For additional information, contact support.lenovo.com, select Product Support, and navigate to Storage Products.

Resolving voltage and temperature warnings

1. Check that all of the fans are working by making sure the Voltage/Fan Fault/Service Required LED on
each power supply module is off, or by using the SMC to check enclosure health status.
In the lower corner of the footer, the overall health status of the enclosure is indicated by a health status icon. For more information, point to the System tab and select View System to see the System panel. You can select Front, Rear, and Table views on the System panel. If you hover over a component, its associated metadata and health status are displayed onscreen.
See Options available for performing basic steps on page 46 for a description of health status icons and alternatives for monitoring enclosure health.
2. Make sure that all modules are fully seated in their slots and that their latches are locked.
3. Make sure that no slots are left open for more than two minutes.
If you need to replace a module, leave the old module in place until you have the replacement or use a blank module to fill the slot. Leaving a slot open negatively affects the airflow and can cause the enclosure to overheat.
4. Try replacing each power supply one at a time.
5. Replace the controller modules one at a time.
6. Replace SFPs one at a time.
Page 54

Sensor locations

The storage system monitors conditions at different points within each enclosure to alert you to problems. Power, cooling fan, temperature, and voltage sensors are located at key points in the enclosure. In each controller module and expansion module, the enclosure management processor (EMP) monitors the status of these sensors to perform SCSI enclosure services (SES) functions.
The following sections describe each element and its sensors.
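In addition to the LEDs and the event log, current sensor readings can be viewed from the CLI. The command below is one way to do so on this CLI family; confirm its availability and output format in the CLI Reference Guide for your firmware release:
show sensor-status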

Power supply sensors

Each enclosure has two fully redundant power supplies with load-sharing capabilities. The power supply sensors described in the following table monitor the voltage, current, temperature, and fans in each power supply. If the power supply sensors report a voltage that is under or over the threshold, check the input voltage.
Table 18 Power supply sensor descriptions
Description Event/Fault ID LED condition
Power supply 1 Voltage, current, temperature, or fan fault
Power supply 2 Voltage, current, temperature, or fan fault

Cooling fan sensors

Each power supply includes two fans. The normal range for fan speed is 4,000 to 6,000 RPM. When a fan speed drops below 4,000 RPM, the EMP considers it a failure and posts an alarm in the storage system event log. The following table lists the description, location, and alarm condition for each fan. If the fan speed remains under the 4,000 RPM threshold, the internal enclosure temperature may continue to rise. Replace the power supply reporting the fault.
Table 19 Cooling fan sensor descriptions
Description Location Event/Fault ID LED condition
Fan 1 Power supply 1 < 4,000 RPM
Fan 2 Power supply 1 < 4,000 RPM
Fan 3 Power supply 2 < 4,000 RPM
Fan 4 Power supply 2 < 4,000 RPM
During a shutdown, the cooling fans do not shut off. This allows the enclosure to continue cooling.

Temperature sensors

Extreme high and low temperatures can cause significant damage if they go unnoticed. Each controller module has six temperature sensors. Of these, if the CPU or FPGA (Field-programmable Gate Array) temperature reaches a shutdown value, the controller module is automatically shut down. Each power supply has one temperature sensor.
Page 55
When a temperature fault is reported, it must be remedied as quickly as possible to avoid system damage. This can be done by warming or cooling the installation location.
Table 20 Controller module temperature sensor descriptions
Description (normal operating range / warning operating range / critical operating range / shutdown values):
CPU temperature: 3°C–88°C / 0°C–3°C, 88°C–90°C / > 90°C / 0°C, 100°C
FPGA temperature: 3°C–97°C / 0°C–3°C, 97°C–100°C / None / 0°C, 105°C
Onboard temperature 1: 0°C–70°C / None / None / None
Onboard temperature 2: 0°C–70°C / None / None / None
Onboard temperature 3 (Capacitor temperature): 0°C–70°C / None / None / None
CM temperature: 5°C–50°C / 5°C, 50°C / 0°C, 55°C / None
When a power supply sensor goes out of range, the Fault/ID LED illuminates amber and an event is logged to the event log.
Table 21 Power supply temperature sensor descriptions
Description Normal operating range
Power supply 1 temperature –10°C to 80°C
Power supply 2 temperature –10°C to 80°C

Power supply module voltage sensors

Power supply voltage sensors ensure that an enclosure’s power supply voltage is within normal ranges. There are three voltage sensors per power supply.
Table 22 Voltage sensor descriptions
Sensor Event/Fault LED condition
Power supply 1 voltage, 12V: < 11.00V or > 13.00V
Power supply 1 voltage, 5V: < 4.00V or > 6.00V
Power supply 1 voltage, 3.3V: < 3.00V or > 3.80V
Page 56

A LED descriptions

Front panel LEDs

The S3200/S2200 family supports 2U24 and 2U12 enclosures in dual-purpose fashion. The 2U24 chassis—configured with 24 2.5" small form factor (SFF) disks—is used as either a controller enclosure or expansion enclosure. The 2U12 chassis—configured with 12 3.5" large form factor (LFF) disks—is also used as either a controller enclosure or expansion enclosure.
Supported expansion enclosures are used for adding storage. The E1012 12-drive enclosure is the LFF drive enclosure used for storage expansion. The E1024 24-drive enclosure is the SFF drive enclosure used for storage expansion.

Enclosure bezels

Each S3200/S2200 enclosure is equipped with a removable bezel designed to cover the front panel during enclosure operation. The bezels look very similar, but there are differences between the two models. The bezel fitting the 2U24 chassis provides two embossed pockets used during bezel removal (Figure 31), whereas the bezel fitting the 2U12 chassis provides two debossed pockets, is equipped with an EMI (electromagnetic interference) shield, and may or may not be equipped with the serviceable dust filtration air filter option (Figure 32).
Figure 31 Front panel enclosure bezel: 24-drive enclosure (2U24)
Figure 32 Front panel enclosure bezel: 12-drive enclosure (2U12)

Enclosure bezel attachment and removal

When you attach or remove the front panel enclosure bezel for the first time, refer to the appropriate pictorials for your enclosure(s) from the list below, and follow the instructions provided.
Front view of 24-drive enclosure (2U24): Figure 31
Front view of 12-drive enclosure (2U12): Figure 32
Bezel alignment for 24-drive enclosure (2U24): Figure 33 on page 57
Bezel alignment for 12-drive enclosure (2U12): Figure 34 on page 57
Enclosure bezel attachment
Orient the enclosure bezel to align its back side with the front face of the enclosure as shown in Figure 33 on page 57 and Figure 34 on page 57. Face the front of the enclosure, and while supporting the base of the bezel, position it such that the mounting sleeves within the integrated ear caps align with the ball studs, and then gently push-fit the bezel onto the ball studs to securely attach the bezel to the front of the enclosure.
Page 57
Enclosure bezel removal
[Figure callouts for Figures 33 and 34: ball studs on the chassis ears (typical 4 places), pocket openings (typical 2 places), the enclosure bezel, and the 2U12 enclosure bezel sub-assembly (EMI shield and removable air filter).]
While facing the front of the enclosure, insert the index finger of each hand into the top of the respective (left or right) pocket opening, and insert the middle finger of each hand into the bottom of the respective opening, with thumbs on the bottom of the bezel face. Gently pull the top of the bezel while applying slight inward pressure below, to release the bezel from the ball studs.
Figure 33 Partial assembly showing bezel alignment with 2U24 chassis
NOTE: The dual-purpose 2U24 and 2U12 enclosure front panel illustrations that follow assume that you
have removed the enclosure bezel to reveal underlying components.
Figure 34 Partial assembly showing bezel alignment with 2U12 chassis
Page 58

24-drive enclosure front panel LEDs

[Figure: 2U24 enclosure front panel with the enclosure bezel removed, showing left and right ear LED callouts 1 through 7 (silk screens on bezel); integers on the disks indicate the drive slot numbering sequence, 0 through 23.]
The enclosure bezel is removed to reveal the underlying 2U24 enclosure front panel LEDs. The front panel LEDs—including SFF disk LEDs—are described in the table below the illustration.
LED Description Definition
1 Enclosure ID Green — On
Enables you to correlate the enclosure with logical views presented by management software. Sequential enclosure ID numbering of controller enclosures begins with the integer 0. The enclosure ID for an attached drive enclosure is nonzero.
2 Disk drive — Left LED See Disk drive LEDs on page 60.
3 Disk drive — Right LED See Disk drive LEDs on page 60.
4 Unit Locator White blink — Enclosure is identified
Off — Normal operation
5 Fault/Service Required Amber — On
Enclosure-level fault condition exists. The event has been acknowledged but the problem needs attention. Off — No fault condition exists.
6 CRU OK Green — On
The enclosure is powered on with at least one power supply operating normally. Off — Both power supplies are off; the system is powered off.
7 Temperature Fault Green — On
The enclosure temperature is normal. Amber — On The enclosure temperature is above threshold.
Figure 35 LEDs: 2U24 enclosure front panel
Page 59

12-drive enclosure front panel LEDs

[Figure: 2U12 enclosure front panel with the enclosure bezel removed, showing left and right ear LED callouts 1 through 7 (silk screens on bezel); integers on the disks indicate the drive slot numbering sequence, 0 through 11.]
The enclosure bezel is removed to reveal the underlying 2U12 enclosure front panel LEDs. The front panel LEDs—including LFF disk LEDs—are described in the table below the illustration.
LED Description Definition
1 Enclosure ID Green — On
Enables you to correlate the enclosure with logical views presented by management software. Sequential enclosure ID numbering of controller enclosures begins with the integer 0. The enclosure ID for an attached drive enclosure is nonzero.
2 Disk drive — Upper LED See Disk drive LEDs on page 60.
3 Disk drive — Lower LED See Disk drive LEDs on page 60.
4 Unit Locator White blink — Enclosure is identified
Off — Normal operation
5 Fault/Service Required Amber — On
Enclosure-level fault condition exists. The event has been acknowledged but the problem needs attention. Off — No fault condition exists.
6 CRU OK Green — On
The enclosure is powered on with at least one power supply operating normally. Off — Both power supplies are off; the system is powered off.
7 Temperature Fault Green — On
The enclosure temperature is normal. Amber — On The enclosure temperature is above threshold.
Figure 36 LEDs: 2U12 enclosure front panel
The enclosure bezel for this model provides the EMI protection for the LFF disk drive modules. The bezel should be securely attached to the enclosure during operation (see Enclosure bezel attachment on page 56 and Figure 34 on page 57).
CAUTION: Whether configured with or without an air filter, to ensure adequate EMI protection, the enclosure bezel should be properly installed while the enclosure is in operation.
Page 60

Disk drive LEDs

[Figure: 2.5" SFF and 3.5" LFF disk drive modules, with LED callouts 1 and 2 identified in the table below.]
You must remove the enclosure bezel to facilitate visual observation of disk LEDs. Alternatively, you can use management interfaces to monitor disk LED behavior.
LED No./Description Color State Definition
1 — Power/Activity (Green)
On: The disk drive module is operating normally.
Blink: The disk drive module is initializing; active and processing I/O; performing a media scan; or the disk group is initializing or reconstructing.
Off: If not illuminated and Fault is not illuminated, the disk is not powered on.
2 — Fault (Amber)
On: The disk has failed; experienced a fault; is a leftover; or the disk group that it is associated with is down or critical.
Blink: Physically identifies the disk; or locates a leftover (also see Blue).
Off: If not illuminated and Power/Activity is not illuminated, the disk is not powered on.
2 — Fault (Blue)
Blink: Leftover disk from disk group is located (alternates blinking amber).
Figure 37 LEDs: Disk drive modules
For information about disk drive types supported in S3200/S2200 LFF and SFF disk drive modules, see
Disk drives used in S3200/S2200 enclosures on page 15.
For information about replacing a disk drive module in an S3200/S2200 controller enclosure or an E1024/E1012 drive enclosure, see the “Replacing a disk drive module” topic in the CRU Installation and Replacement Guide.
For information about creating disk groups, see the “Provisioning the system” topic within the Storage Manager Guide.
IMPORTANT: For information about self-encrypting disk (SED) drives, see FDE considerations on page 21 and the Storage Management Guide or online help.
NOTE: Additional information pertaining to disk drive LED behavior is provided in the supplementary tables on the following page.
Page 61
Table 23 LEDs: Disks in SFF and LFF enclosures
Disk drive module LED behavior, LFF/SFF disks (each state lists the LED color and action):

Disk drive OK, FTOL
• Off: None, None
• On (operating normally): Green On
• OK to remove: Green Blink; Blue On (1)
• Identifying self — offline/online: Green On; Amber Blink

Disk drive I/O
• Initializing: Green Blink
• Active and processing I/O: Green Blink
• Performing a media scan: Green Blink

Disk drive leftover
• Disk drive is a leftover: Amber On
• Identifying a leftover: Amber Blink; Blue On (1)

Disk drive failed
• Fault or failure: Green On (1); Amber On
• Fault and remove disk drive: Green On; Amber On
• Fault and identify disk drive: Green On; Amber On
• Fault, identify, and remove disk drive: Green On; Amber Blink; Blue On (1)

(1) This color may or may not illuminate.
Table 24 LEDs: Disk groups in SFF and LFF enclosures
Disk group LED behavior, LFF/SFF disks (each state lists the LED color and action):

FTOL
• On (operating normally): Green On

Disk group activity
• Disk group is reconstructing: Green Blink
• Disk group is initializing: Green Blink

Disk group degraded: see note 1 below.
Disk group is critical/down: see note 1 below.

1 Individual disks will display fault LEDs.
NOTE: Disk LED descriptions within this section relate to 2U24 and 2U12 enclosures.
Page 62
Controller enclosure — rear panel layout
[Figure: controller enclosure rear panel (S3200 FC or 10GbE iSCSI model shown as a locator example), with callouts 1 through 11 identifying the components listed below.]
The diagram and table below display and identify important component items that comprise the rear panel layout of an S3200/S2200 controller enclosure. In Figure 38 below, an S3200 CNC model is shown as a representative example. Diagrams and tables on the following pages describe rear panel LED behavior. The rear panel layout applies to 2U24 and 2U12 chassis form factors.
1 AC power supplies
2 Controller module A
3 Controller module B
4 CNC ports: used for host connection
5 CLI port (USB - Type B) [stickers removed]
6 Service port 2 (used by service personnel only)
7 Reserved for future use
8 Network port
9 Service port 1 (used by service personnel only)
10 Disabled button (used by engineering/test only; stickers shown covering the openings)
11 SAS expansion ports
Figure 38 S3200/S2200 controller enclosure: rear panel
A controller enclosure accommodates two AC power supply CRUs within the two power supply slots (see two instances of callout No.1 above). The controller enclosure accommodates two controller module CRUs of the same type within the I/O module (IOM) slots (see callouts No.2 and No.3 above).
IMPORTANT: If the S3200/S2200 controller enclosure is configured with a single controller module, the controller module must be installed in the upper slot (see callout No.2 above) and the I/O module blank must be installed in the lower slot (see callout No.3 above). This configuration is required to allow sufficient air flow through the enclosure during operation (also see Figure 12 on page 25).
The diagrams with tables that immediately follow provide descriptions for the different controller modules and power supply modules that can be installed into the rear panel of an S3200/S2200 controller enclosure. Showing controller modules and power supply modules separately from the enclosure enables improved clarity in identifying the component items called out in the diagrams and described in the tables.
LED descriptions are also provided for optional drive enclosures supported by the S3200/S2200 controller enclosures.
For information about replacing S3200/S2200 controller enclosure CRUs, refer to the appropriate CRU replacement procedure in the CRU Installation and Replacement Guide.
Page 63
S3200 CNC controller module — rear panel LEDs
[Figure: S3200 CNC controller module faceplate with host ports PORT 0 through PORT 3, CLI and SERVICE-1/SERVICE-2 ports, network port, and 6Gb/s expansion port; callouts 1 through 10 identify the LEDs described below, with separate markers distinguishing FC LEDs from iSCSI LEDs.]
LED Description Definition
1 Host 4/8/16 Gb FC Link Status/Link Activity (1) Off — No link detected. Green — The port is connected and the link is up. Blinking green — The link has I/O activity.
2 Host 10GbE iSCSI Link Status/Link Activity (2,3) Off — No link detected. Green — The port is connected and the link is up. Blinking green — The link has I/O activity.
3 Network Port Link Active Status (4) Off — The Ethernet link is not established, or the link is down. Green — The Ethernet link is up (applies to all negotiated link speeds).
4 Network Port Link Speed (4) Off — Link is up at 10/100base-T negotiated speeds. Amber — Link is up and negotiated at 1000base-T.
5 OK to Remove Off — The controller module is not prepared for removal.
Blue — The controller module is prepared for removal.
6 Unit Locator Off — Normal operation.
Blinking white — Physically identifies the controller module.
7 CRU OK Off — Controller module is not OK.
Blinking green — System is booting. Green — Controller module is operating normally.
8 Fault/Service Required Amber — A fault has been detected or a service action is required.
Blinking amber — Hardware-controlled power-up or a cache flush or restore error.
9 Cache Status Green — Cache is dirty (contains unwritten data) and operation is normal.
The unwritten information can be log or debug data that remains in the cache, so a Green cache status LED does not, by itself, indicate that any user data is at risk or that any action is necessary. Off — In a working controller, cache is clean (contains no unwritten data). This is an occasional condition that occurs while the system is booting. Blinking green — A CompactFlash flush or cache self-refresh is in progress, indicating cache activity. See also Cache Status LED details on page 69.
10 Expansion Port Status Off — The port is empty or the link is down.
On — The port is connected and the link is up.
1 When in FC mode, the SFPs must be a qualified 8 Gb or 16 Gb fibre optic option. A 16 Gbit/s SFP can run at 16 Gbit/s, 8 Gbit/s, 4 Gbit/s, or auto-negotiate its link speed. An 8 Gbit/s SFP can run at 8 Gbit/s, 4 Gbit/s, or auto-negotiate its link speed.
2 When in 10GbE iSCSI mode, the SFPs must be a qualified 10GbE iSCSI optic option.
3 When powering up and booting, iSCSI LEDs will be on/blinking momentarily, then they will switch to the mode of operation.
4 When port is down, both LEDs are off.
Figure 39 LEDs: S3200 CNC controller module (FC and 10GbE SFPs)
Page 64
NOTE: For information about supported combinations of host interface protocols using CNC ports, see CNC ports used for host connection on page 10 and the “Configuring host ports” topic in the Storage Management Guide.
[Figure: S3200 CNC controller module faceplate with host ports PORT 0 through PORT 3 fitted with 1 Gb RJ-45 SFPs, CLI and SERVICE-1/SERVICE-2 ports, network port, and 6Gb/s expansion port; callouts 1 through 10 identify the LEDs described below, with separate markers distinguishing FC LEDs from iSCSI LEDs.]
LED Description Definition
1 Not used in example The FC SFP is not shown in this example (see Figure 39 on page 63).
2 Host 1 Gb iSCSI Link Status/Link Activity (2,3) Off — No link detected. Green — The port is connected and the link is up. Blinking green — The link has I/O activity.
3 Network Port Link Active Status (4) Off — The Ethernet link is not established, or the link is down. Green — The Ethernet link is up (applies to all negotiated link speeds).
4 Network Port Link Speed (4) Off — Link is up at 10/100base-T negotiated speeds. Amber — Link is up and negotiated at 1000base-T.
5 OK to Remove Off — The controller module is not prepared for removal.
Blue — The controller module is prepared for removal.
6 Unit Locator Off — Normal operation.
Blinking white — Physically identifies the controller module.
7 CRU OK Off — Controller module is not OK.
Blinking green — System is booting. Green — Controller module is operating normally.
8 Fault/Service Required Amber — A fault has been detected or a service action is required.
Blinking amber — Hardware-controlled power-up or a cache flush or restore error.
9 Cache Status Green — Cache is dirty (contains unwritten data) and operation is normal.
The unwritten information can be log or debug data that remains in the cache, so a Green cache status LED does not, by itself, indicate that any user data is at risk or that any action is necessary. Off — In a working controller, cache is clean (contains no unwritten data). This is an occasional condition that occurs while the system is booting. Blinking green — A CompactFlash flush or cache self-refresh is in progress, indicating cache activity. See also Cache Status LED details on page 69.
10 Expansion Port Status Off — The port is empty or the link is down.
On — The port is connected and the link is up.
1 When in FC mode, the SFPs must be a qualified 8 Gb or 16 Gb fibre optic option. A 16 Gbit/s SFP can run at 16 Gbit/s, 8 Gbit/s, 4 Gbit/s, or auto-negotiate its link speed. An 8 Gbit/s SFP can run at 8 Gbit/s, 4 Gbit/s, or auto-negotiate its link speed.
2 When in 1 GbE iSCSI mode, the SFPs must be a qualified 1 GbE iSCSI optic option.
3 When powering up and booting, iSCSI LEDs will be on/blinking momentarily, then they will switch to the mode of operation.
4 When port is down, both LEDs are off.
Figure 40 LEDs: S3200 CNC controller module (1 Gb RJ-45 SFPs)
Page 65
S3200 SAS controller module — rear panel LEDs
[Figure: S3200 SAS controller module faceplate with 12Gb/s HD mini-SAS host ports SAS 0 through SAS 3, CLI and SERVICE-1/SERVICE-2 ports, network port, and 6Gb/s expansion port; callouts 1 through 10 identify the LEDs described below.]
LED Description Definition
1 Host 12 Gb SAS Link Status (1-3) Off — No link detected. Green — The port is connected and the link is up.
2 Host 12 Gb SAS Link Activity (1-3) Off — The link is idle. Blinking green — The link has I/O activity.
3 Network Port Link Active Status (4) Off — The Ethernet link is not established, or the link is down. Green — The Ethernet link is up (applies to all negotiated link speeds).
4 Network Port Link Speed (4) Off — Link is up at 10/100base-T negotiated speeds. Amber — Link is up and negotiated at 1000base-T.
5 OK to Remove Off — The controller module is not prepared for removal.
Blue — The controller module is prepared for removal.
6 Unit Locator Off — Normal operation.
Blinking white — Physically identifies the controller module.
7 CRU OK Off — Controller module is not OK.
Blinking green — System is booting. Green — Controller module is operating normally.
8 Fault/Service Required Amber — A fault has been detected or a service action is required.
Blinking amber — Hardware-controlled power-up or a cache flush or restore error.
9 Cache Status Green — Cache is dirty (contains unwritten data) and operation is normal.
The unwritten information can be log or debug data that remains in the cache, so a Green cache status LED does not, by itself, indicate that any user data is at risk or that any action is necessary. Off — In a working controller, cache is clean (contains no unwritten data). This is an occasional condition that occurs while the system is booting. Blinking green — A CompactFlash flush or cache self-refresh is in progress, indicating cache activity. See also Cache Status LED details on page 69.
10 Expansion Port Status Off — The port is empty or the link is down.
On — The port is connected and the link is up.
1 Cables must be qualified HD mini-SAS host cable options.
2 Use a qualified SFF-8644 to SFF-8644 cable option when connecting the S3200 SAS controller to a 12 Gb SAS HBA.
3 Use a qualified SFF-8644 to SFF-8088 cable option when connecting the S3200 SAS controller to a 6 Gb SAS HBA.
4 When port is down, both LEDs are off.
Figure 41 LEDs: S3200 SAS controller module (HD mini-SAS)
Page 66
S2200 CNC controller module — rear panel LEDs
[Figure: S2200 CNC controller module faceplate with host ports PORT 0 and PORT 1, CLI and SERVICE-1/SERVICE-2 ports, network port, and 6Gb/s expansion port; callouts 1 through 10 identify the LEDs described below, with separate markers distinguishing FC LEDs from iSCSI LEDs.]
LED Description Definition
1 Host 4/8/16 Gb FC Link Status/Link Activity (1) Off — No link detected. Green — The port is connected and the link is up. Blinking green — The link has I/O activity.
2 Host 10GbE iSCSI Link Status/Link Activity (2,3) Off — No link detected. Green — The port is connected and the link is up. Blinking green — The link has I/O activity.
3 Network Port Link Active Status (4) Off — The Ethernet link is not established, or the link is down. Green — The Ethernet link is up (applies to all negotiated link speeds).
4 Network Port Link Speed (4) Off — Link is up at 10/100base-T negotiated speeds. Amber — Link is up and negotiated at 1000base-T.
5 OK to Remove Off — The controller module is not prepared for removal.
Blue — The controller module is prepared for removal.
6 Unit Locator Off — Normal operation.
Blinking white — Physically identifies the controller module.
7 CRU OK Off — Controller module is not OK.
Blinking green — System is booting. Green — Controller module is operating normally.
8 Fault/Service Required Amber — A fault has been detected or a service action is required.
Blinking amber — Hardware-controlled power-up or a cache flush or restore error.
9 Cache Status Green — Cache is dirty (contains unwritten data) and operation is normal.
The unwritten information can be log or debug data that remains in the cache, so a Green cache status LED does not, by itself, indicate that any user data is at risk or that any action is necessary. Off — In a working controller, cache is clean (contains no unwritten data). This is an occasional condition that occurs while the system is booting. Blinking green — A CompactFlash flush or cache self-refresh is in progress, indicating cache activity. See also Cache Status LED details on page 69.
10 Expansion Port Status Off — The port is empty or the link is down.
On — The port is connected and the link is up.
1 When in FC mode, the SFPs must be a qualified 8 Gb or 16 Gb fibre optic option. A 16 Gbit/s SFP can run at 16 Gbit/s, 8 Gbit/s, 4 Gbit/s, or auto-negotiate its link speed. An 8 Gbit/s SFP can run at 8 Gbit/s, 4 Gbit/s, or auto-negotiate its link speed.
2 When in 10GbE iSCSI mode, the SFPs must be a qualified 10GbE iSCSI optic option.
3 When powering up and booting, iSCSI LEDs will be on/blinking momentarily, then they will switch to the mode of operation.
4 When port is down, both LEDs are off.
Figure 42 LEDs: S2200 CNC controller module (FC and 10GbE SFPs)
Page 67
[Figure: S2200 CNC controller module faceplate with host ports PORT 0 and PORT 1 fitted with 1 Gb RJ-45 SFPs, CLI and SERVICE-1/SERVICE-2 ports, network port, and 6Gb/s expansion port; callouts 1 through 10 identify the LEDs described below, with separate markers distinguishing FC LEDs from iSCSI LEDs.]
LED Description Definition
1 Not used in example The FC SFP is not shown in this example (see Figure 39 on page 63).
2 Host 1 Gb iSCSI Link Status/Link Activity (2,3) Off — No link detected. Green — The port is connected and the link is up. Blinking green — The link has I/O activity.
3 Network Port Link Active Status (4) Off — The Ethernet link is not established, or the link is down. Green — The Ethernet link is up (applies to all negotiated link speeds).
4 Network Port Link Speed (4) Off — Link is up at 10/100base-T negotiated speeds. Amber — Link is up and negotiated at 1000base-T.
5 OK to Remove Off — The controller module is not prepared for removal.
Blue — The controller module is prepared for removal.
6 Unit Locator Off — Normal operation.
Blinking white — Physically identifies the controller module.
7 CRU OK Off — Controller module is not OK.
Blinking green — System is booting. Green — Controller module is operating normally.
8 Fault/Service Required Amber — A fault has been detected or a service action is required.
Blinking amber — Hardware-controlled power-up or a cache flush or restore error.
9 Cache Status Green — Cache is dirty (contains unwritten data) and operation is normal.
The unwritten information can be log or debug data that remains in the cache, so a Green cache status LED does not, by itself, indicate that any user data is at risk or that any action is necessary. Off — In a working controller, cache is clean (contains no unwritten data). This is an occasional condition that occurs while the system is booting. Blinking green — A CompactFlash flush or cache self-refresh is in progress, indicating cache activity. See also Cache Status LED details on page 69.
10 Expansion Port Status Off — The port is empty or the link is down.
On — The port is connected and the link is up.
1 When in FC mode, the SFPs must be a qualified 8 Gb or 16 Gb fibre optic option. A 16 Gbit/s SFP can run at 16 Gbit/s, 8 Gbit/s, 4 Gbit/s, or auto-negotiate its link speed. An 8 Gbit/s SFP can run at 8 Gbit/s, 4 Gbit/s, or auto-negotiate its link speed.
2 When in 1 GbE iSCSI mode, the SFPs must be a qualified 1 GbE iSCSI optic option.
3 When powering up and booting, iSCSI LEDs will be on/blinking momentarily, then they will switch to the mode of operation.
4 When port is down, both LEDs are off.
Figure 43 LEDs: S2200 CNC controller module (1 Gb RJ-45 SFPs)
Page 68
S2200 SAS controller module — rear panel LEDs
[Figure: S2200 SAS controller module faceplate with 12Gb/s HD mini-SAS host ports SAS 0 and SAS 1, CLI and SERVICE-1/SERVICE-2 ports, network port, and 6Gb/s expansion port; callouts 1 through 10 identify the LEDs described below.]
LED Description Definition
1 Host 12 Gb SAS Link Status (1-3) Off — No link detected. Green — The port is connected and the link is up.
2 Host 12 Gb SAS Link Activity (1-3) Off — The link is idle. Blinking green — The link has I/O activity.
3 Network Port Link Active Status (4) Off — The Ethernet link is not established, or the link is down. Green — The Ethernet link is up (applies to all negotiated link speeds).
4 Network Port Link Speed (4) Off — Link is up at 10/100base-T negotiated speeds. Amber — Link is up and negotiated at 1000base-T.
5 OK to Remove Off — The controller module is not prepared for removal.
Blue — The controller module is prepared for removal.
6 Unit Locator Off — Normal operation.
Blinking white — Physically identifies the controller module.
7 CRU OK Off — Controller module is not OK.
Blinking green — System is booting. Green — Controller module is operating normally.
8 Fault/Service Required Amber — A fault has been detected or a service action is required.
Blinking amber — Hardware-controlled power-up or a cache flush or restore error.
9 Cache Status Green — Cache is dirty (contains unwritten data) and operation is normal.
The unwritten information can be log or debug data that remains in the cache, so a Green cache status LED does not, by itself, indicate that any user data is at risk or that any action is necessary. Off — In a working controller, cache is clean (contains no unwritten data). This is an occasional condition that occurs while the system is booting. Blinking green — A CompactFlash flush or cache self-refresh is in progress, indicating cache activity. See also Cache Status LED details on page 69.
10 Expansion Port Status Off — The port is empty or the link is down.
On — The port is connected and the link is up.
1 Cables must be qualified HD mini-SAS host cable options.
2 Use a qualified SFF-8644 to SFF-8644 cable option when connecting the S2200 SAS controller to a 12 Gb SAS HBA.
3 Use a qualified SFF-8644 to SFF-8088 cable option when connecting the S2200 SAS controller to a 6 Gb SAS HBA.
4 When port is down, both LEDs are off.
NOTE: Once a Link Status LED is lit, it remains so, even if the controller is shut down via the SMC or the CLI.
Figure 44 LEDs: S2200 SAS controller module (HD mini-SAS)
Page 69
When a controller is shut down or otherwise rendered inactive, its Link Status LED remains illuminated, falsely indicating that the controller can communicate with the host. Though a link exists between the host and the chip on the controller, the controller is not communicating with the chip. To reset the LED, the controller must be power-cycled (see Powering on/powering off on page 28).
Cache Status LED details
If the LED is blinking evenly, a cache flush is in progress. When a controller module loses power and write cache is dirty (contains data that has not been written to disk), the supercapacitor pack provides backup power to flush (copy) data from write cache to CompactFlash memory. When cache flush is complete, the cache transitions into self-refresh mode.
If the LED is blinking momentarily slowly, the cache is in a self-refresh mode. In self-refresh mode, if primary power is restored before the backup power is depleted (3–30 minutes, depending on various factors), the system boots, finds data preserved in cache, and writes it to disk. This means the system can be operational within 30 seconds, and before the typical host I/O time-out of 60 seconds, at which point system failure would cause host-application failure. If primary power is restored after the backup power is depleted, the system boots and restores data to cache from CompactFlash, which can take about 90 seconds. The cache flush and self-refresh mechanism is an important data protection feature; essentially four copies of user data are preserved: one in controller cache and one in CompactFlash of each controller. The Cache Status LED illuminates solid green during the boot-up process. This behavior indicates the cache is logging all POSTs, which will be flushed to the CompactFlash the next time the controller shuts down.
CAUTION: If the Cache Status LED illuminates solid green—and you wish to shut-down the controller—do so from the user interface, so unwritten data can be flushed to CompactFlash.

Power supply LEDs

Power redundancy is achieved through two independent load-sharing power supplies. In the event of a power supply failure, or the failure of the power source, the storage system can operate continuously on a single power supply. Greater redundancy can be achieved by connecting the power supplies to separate circuits. Power supplies are used by controller and drive enclosures.
LED No./Description Color State Definition
1 — Input Source Power Good (Green)
On: Power is on and input voltage is normal.
Off: Power is off, or input voltage is below the minimum threshold.
2 — Voltage/Fan Fault/Service Required (Amber)
On: Output voltage is out of range, or a fan is operating below the minimum required RPM.
Off: Output voltage is normal.
Figure 45 LEDs: AC power supply unit — rear panel
Page 70
NOTE: For more information about power-cycling enclosures, see Powering on/powering off on page 28.
[Figure: E1024/E1012 drive enclosure rear panel with SAS IN and OUT ports; callouts 1 through 7 identify the items described in the table below.]

E1024/E1012 drive enclosure rear panel LEDs

The rear panel layout of a 2U (E1024/E1012) drive enclosure is shown below. Using mini-SAS (SFF-8088) external connectors, these drive enclosures support a 6-Gbps data rate for backend SAS expansion.
See Powering on/powering off on page 28 for more information.
LED No./Description Color State Definition
1 — Power Supply See Power supply LEDs on page 69.
2 — Unit Locator (White)
Off: Normal operation.
Blink: Physically identifies the expansion module.
3 — OK to Remove (Blue)
Off: Not implemented.
4 — Fault/Service Required (Amber)
On: A fault is detected or a service action is required.
Blink: Hardware-controlled power-up.
5 — CRU OK (Green)
On: Expansion module is operating normally.
Off: Expansion module is not OK.
Blink: System is booting.
6 — SAS In Port Status (Green)
On: Port is connected and the link is up.
Off: Port is empty or link is down.
7 — SAS Out Port Status (Green)
On: Port is connected and the link is up.
Off: Port is empty or link is down.
Figure 46 LEDs: E1024/E1012 drive enclosure — rear panel
Page 71

B Specifications and requirements

Safety requirements

Install the system in accordance with the local safety codes and regulations at the facility site. Follow all cautions and instructions marked on the equipment.

Site requirements and guidelines

The following sections provide requirements and guidelines that you must address when preparing your site for the installation.
When selecting an installation site for the system, choose a location not subject to excessive heat, direct sunlight, dust, or chemical exposure. These conditions greatly reduce the system’s longevity and might void your warranty.
NOTE: Specifications within this section relate to 2U24 and 2U12 enclosures.

Site wiring and AC power requirements

The following are required for all installations using AC power supplies:
Table 25 Power requirements - AC Input
Measurement                 Rating
Input power requirements    100-240 VAC, 50/60 Hz
Maximum input power         475 W maximum continuous
Heat dissipation            1,622 BTUs/hour
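As a rough cross-check of the ratings above, the heat dissipation figure follows from the maximum input power: 475 W × 3.412 BTU/(hr·W) ≈ 1,621 BTU/hour, which agrees with the 1,622 BTUs/hour rating to within rounding.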
• All AC mains and supply conductors to power distribution boxes for the rack-mounted system must be enclosed in a metal conduit or raceway when specified by local, national, or other applicable government codes and regulations.
• Ensure that the voltage and frequency of your power source match the voltage and frequency inscribed on the equipment’s electrical rating label.
• To ensure redundancy, provide two separate power sources for the enclosures. These power sources must be independent of each other, and each must be controlled by a separate circuit breaker at the power distribution point.
• The system requires voltages within minimum fluctuation. The customer-supplied facilities’ voltage must maintain a voltage with not more than 5 percent fluctuation. The customer facilities must also provide suitable surge protection.
• Site wiring must include an earth ground connection to the AC power source. The supply conductors and power distribution boxes (or equivalent metal enclosure) must be grounded at both ends.
• Power circuits and associated circuit breakers must provide sufficient power and overload protection. To prevent possible damage to the AC power distribution boxes and other components in the rack, use an external, independent power source that is isolated from large switching loads (such as air conditioning motors, elevator motors, and factory loads).

Weight and placement guidelines

• Refer to Physical requirements on page 73 for detailed size and weight specifications.
• Refer to the rackmount bracket kit installation sheet pertaining to your product for guidelines about installing enclosures into the rack.
• The weight of an enclosure depends on the number and type of modules installed.
Page 72
• Ideally, use two people to lift an enclosure. However, one person can safely lift an enclosure if its weight is reduced by removing the power supply modules and disk drive modules.
• Do not place enclosures in a vertical position. Always install and operate the enclosures in a horizontal (level) orientation.
• When installing enclosures in a rack, make sure that any surfaces over which you might move the rack can support the weight. To prevent accidents when moving equipment, especially on sloped loading docks and up ramps to raised floors, ensure you have a sufficient number of helpers. Remove obstacles such as cables and other objects from the floor.
• To prevent the rack from tipping, and to minimize personnel injury in the event of a seismic occurrence, securely anchor the rack to a wall or other rigid structure that is attached to both the floor and to the ceiling of the room.

Electrical guidelines

• These enclosures work with single-phase power systems having an earth ground connection. To reduce the risk of electric shock, do not plug an enclosure into any other type of power system. Contact your facilities manager or a qualified electrician if you are not sure what type of power is supplied to your building.
• Enclosures are shipped with a grounding-type (three-wire) power cord. To reduce the risk of electric shock, always plug the cord into a grounded power outlet.
• Do not use household extension cords with the enclosures. Not all power cords have the same current ratings. Household extension cords do not have overload protection and are not meant for use with computer systems.

Ventilation requirements

• Refer to Environmental requirements on page 74 for detailed environmental requirements.
• Do not block or cover ventilation openings at the front and rear of an enclosure. Never place an enclosure near a radiator or heating vent. Failure to follow these guidelines can cause overheating and affect the reliability and warranty of your enclosure.
• Leave a minimum of 15.2 cm (6 inches) at the front and back of each enclosure to ensure adequate airflow for cooling. No cooling clearance is required on the sides, top, or bottom of enclosures.
• Leave enough space in front and in back of an enclosure to allow access to enclosure components for servicing. Removing a component requires a clearance of at least 38.1 cm (15 inches) in front of and behind the enclosure.

Cabling requirements

• Keep power and interface cables clear of foot traffic. Route cables in locations that protect the cables from damage.
• Route interface cables away from motors and other sources of magnetic or radio frequency interference.
• Stay within the cable length limitations.
• Controller and drive enclosures are suitable for connection to intra-building or non-exposed wiring or cabling only.
• Controller and drive enclosures are suitable for installation in Network Telecommunication Facilities and locations where the NEC applies. Enclosures are not suitable for Outside Plant (OSP) installations.

Management host requirements

A local management host with at least one mini-USB connection is recommended for the initial installation and configuration of a controller enclosure. After you configure one or both of the controller modules with an IP address, you then use a remote management host on an Ethernet network to manage and monitor the storage system.
NOTE: Connections to this device must be made with shielded cables – grounded at both ends – with metallic RFI/EMI connector hoods, in order to maintain compliance with FCC Rules and Regulations.
Page 73

Physical requirements

Key for generic enclosure diagram (Figure 47):
1 Enclosure front view without bezel or disks
2 Enclosure top view (section removed)
3 Enclosure side view (section removed)
4 Enclosure bezel top view (reference only)
5 Enclosure bezel side view (reference only)
A–H Critical-fit dimensions (see Figure 47)
Note: For clarity, only key view projection lines are shown.
The floor space at the installation site must be strong enough to support the combined weight of the rack, controller enclosures, drive enclosures, and any additional equipment. The site also requires sufficient space for installation, operation, and servicing of the enclosures, together with sufficient ventilation to allow a free flow of air to all enclosures.
Figure 47 and Table 26 on page 74 show enclosure dimensions and weights. Enclosure designators are
described below. Enclosure weights assume the following configuration characteristics:
2U12 enclosure (LFF – also see Table 4 on page 24):
• “2U12” denotes the 3.5" 12-drive enclosure (with controller or expansion modules)
• The 2U12 chassis is equipped with a disk in each disk drive slot
2U24 enclosure (SFF – also see Table 4 on page 24):
• “2U24” denotes the 2.5" 24-drive enclosure (with controller or expansion modules)
• The 2U24 chassis is equipped with a disk in each disk drive slot
Two controller modules or two expansion modules per enclosure
Two power supply modules per enclosure
Figure 47 Rackmount enclosure dimensions
[Figure 47 gives the A–H critical-fit dimensions, in centimeters and inches, for the 2U24 (note 1) and 2U12 (note 2) enclosure form factors.]
1 The 2U24 enclosure uses 2.5" SFF disks. Remove the enclosure bezel to view disk drive module LEDs.
2 The 2U12 enclosure uses 3.5" LFF disks. Remove the enclosure bezel to view disk drive module LEDs.
Page 74
Table 26 Rackmount controller enclosure weights
Specifications                               Rackmount
SFF controller enclosure (2U24)
  Chassis only                               8.6 kg (19.0 lb)
  Chassis with FRUs (no disks) (1-3)         17.4 kg (38.4 lb)
  Chassis with FRUs (including disks) (1-4)  23.4 kg (51.6 lb)
LFF controller enclosure (2U12)
  Chassis only                               9.3 kg (20.6 lb)
  Chassis with FRUs (no disks) (1-3)         18.1 kg (40.0 lb)
  Chassis with FRUs (including disks) (1-4)  27.7 kg (61.0 lb)
1 Weights shown are nominal, and subject to variances.
2 Rail kits add between 2.8 kg (6.2 lb) and 3.4 kg (7.4 lb) to the aggregate enclosure weight.
3 Weights may vary due to different power supplies, IOMs, and differing calibrations between scales.
4 Weights may vary due to actual number and type of disk drives (SAS or SSD) and air management modules installed.
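For example, combining the nominal figures above with note 2, a fully populated LFF (2U12) controller enclosure fitted with a rail kit can reach roughly 27.7 kg + 3.4 kg ≈ 31.1 kg (about 68 lb), which is why the weight and placement guidelines recommend a two-person lift or removing modules before lifting.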
NOTE: The table below provides comparative information about optional drive enclosures used with S3200/S2200 controller enclosures.
Table 27 Rackmount compatible drive enclosure weights (ordered separately)
Specifications                               Rackmount
E1024 (SFF 2.5" 24-drive enclosure)
  Chassis only                               8.6 kg (19.0 lb)
  Chassis with FRUs (no disks) (1-3)         16.2 kg (35.8 lb)
  Chassis with FRUs (including disks) (1-4)  22.2 kg (49.0 lb)
E1012 (LFF 3.5" 12-drive enclosure)
  Chassis only                               8.5 kg (18.8 lb)
  Chassis with FRUs (no disks) (1-3)         16.1 kg (35.6 lb)
  Chassis with FRUs (including disks) (1-4)  25.6 kg (56.6 lb)
1 Weights shown are nominal, and subject to variances.
2 Rail kits add between 2.8 kg (6.2 lb) and 3.4 kg (7.4 lb) to the aggregate enclosure weight.
3 Weights may vary due to different power supplies and differing calibrations between scales.
4 Weights may vary due to actual number and type of disk drives (SAS or SSD) and air management modules installed.

Environmental requirements

Table 28 Operating environmental specifications
Specification                Range
Altitude                     To 3,000 meters (9,843 feet)
Temperature*                 5ºC to 40ºC (41ºF to 104ºF)
Humidity                     10% to 90% RH up to 40ºC (104ºF), non-condensing
Shock                        3.0 g, 11 ms, ½ sine pulses, X, Y, Z
Vibration (shaped spectrum)  5 Hz to 500 Hz, 0.14 Grms total, X, Y, Z
*Temperature is de-rated by 2ºC (3.6ºF) for every 1 km (3,281 feet) above sea level.

Table 29 Non-operating environmental specifications
Specification                Range
Altitude                     To 12,000 meters (39,370 feet)
Temperature                  -40ºC to 70ºC (-40ºF to 158ºF)
Page 75
Table 29 Non-operating environmental specifications (Continued)
Specification                Range
Humidity                     Up to 93% RH @ 40ºC (104ºF), non-condensing
Shock                        15.0 g, 11 ms, ½ sine pulses, X, Y, Z
Vibration (shaped spectrum)  2.8 Hz to 365.4 Hz, 0.852 Grms total (horizontal)
                             2.8 Hz to 365.4 Hz, 1.222 Grms total (vertical)
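As a worked example of the de-rating note under Table 28: at the 3,000-meter operating altitude limit, the 40ºC maximum operating temperature is reduced by about 3 × 2ºC = 6ºC, to approximately 34ºC (93ºF).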

Electrical requirements

Site wiring and power requirements

Each enclosure requires two power supply modules for redundancy. If full redundancy is required, use a separate power source for each module. The AC power supply unit in each power supply module is auto-ranging and is automatically configured to an input voltage range from 100-240 VAC with an input frequency of 50/60 Hz. The power supply modules meet standard voltage requirements for both U.S. and international operation. The power supply modules use standard industrial wiring with line-to-neutral or line-to-line power connections.

Power cable requirements

Each enclosure requires two power cables designed for use with the enclosure power supply module. Each power cable connects one of the power supply modules to an independent, external power source. To ensure power redundancy, connect the two power cables to two separate circuits; for example, to one commercial circuit and one uninterruptible power source (UPS).
Page 76

C Electrostatic discharge

Preventing electrostatic discharge

To prevent damaging the system, be aware of the precautions you need to follow when setting up the system or handling parts. A discharge of static electricity from a finger or other conductor may damage system boards or other static-sensitive devices. This type of damage may reduce the life expectancy of the device.
To prevent electrostatic damage:
• Avoid hand contact by transporting and storing products in static-safe containers.
• Keep electrostatic-sensitive parts in their containers until they arrive at static-protected workstations.
• Place parts in a static-protected area before removing them from their containers.
• Avoid touching pins, leads, or circuitry.
• Always be properly grounded when touching a static-sensitive component or assembly.

Grounding methods to prevent electrostatic discharge

Several methods are used for grounding. Use one or more of the following methods when handling or installing electrostatic-sensitive parts:
• Use a wrist strap connected by a ground cord to a grounded workstation or computer chassis. Wrist straps are flexible straps with a minimum of 1 megohm (± 10 percent) resistance in the ground cords. To provide proper ground, wear the strap snug against the skin.
• Use heel straps, toe straps, or boot straps at standing workstations. Wear the straps on both feet when standing on conductive floors or dissipating floor mats.
• Use conductive field service tools.
• Use a portable field service kit with a folding static-dissipating work mat.
If you do not have any of the suggested equipment for proper grounding, have an authorized technician install the part. For more information about static electricity or assistance with product installation, go to support.lenovo.com, select Product Support, and navigate to Storage Products.
Page 77

D USB device connection

[Figure 48 (below) shows the controller module face plate, with a callout indicating where to connect the USB cable to the CLI port.]

Rear panel USB ports

S3200/S2200 controllers contain two different USB (universal serial bus) management interfaces: a Host interface and a Device interface. Both interfaces pertain to the Management Controller (MC). The Device interface is accessed via a port on the controller module face plate. The Host interface (USB Type A)—reserved for future use—is accessible from the midplane-facing end of the controller module (see
Figure 11 on page 20), and its discussion is deferred.
This appendix describes the port labeled CLI (USB Type B), which enables direct connection between a management computer and the controller, using the command-line interface and appropriate cable (see
Figure 48).

USB CLI port

NOTE: In the illustration above, the sticker suggesting that you “install the USB driver before using the port” has been removed to show the port.
S3200/S2200 controllers feature a USB CLI port used to cable directly to the controller and initially set IP addresses, or perform other configuration tasks. The USB CLI port employs a mini-USB Type B form factor, and requires a specific cable and additional support, so that a server or other computer running a Linux or Windows operating system can recognize the controller enclosure as a connected device. Without this support, the computer might not recognize that a new device is connected, or might not be able to communicate with it.
For Linux computers, no new driver files are needed, but a Linux configuration file must be created or modified (see Linux on page 78). For Windows computers, a special device driver file, gserial.inf, must be downloaded from a CD or web site and installed on the computer that will be cabled directly to the controller's CLI port (see Microsoft Windows on page 78).

Emulated serial port

Once attached to the controller module, the management computer should detect a new USB device. Using the Emulated Serial Port interface, the S3200/S2200 controller presents a single serial port using a custom vendor ID and product ID. Effective presentation of the emulated serial port assumes the management computer previously had a terminal emulator installed (see Supported host applications). Serial port configuration is unnecessary.
Figure 48 USB device connection — CLI port
Page 78
IMPORTANT: Certain operating systems require a device driver or special mode of operation to enable proper functioning of the USB CLI port (see Device driver/special operation mode).

Supported host applications

S3200/S2200 controllers support the following applications to facilitate connection.
Table 30 Supported terminal emulator applications
Application                  Operating system
HyperTerminal and TeraTerm   Microsoft Windows (all versions)
Minicom                      Linux (all versions), Solaris, HP-UX

Command-line Interface

Once the management computer detects connection to the USB-capable device, the Management Controller awaits input of characters from the host computer via the command-line. To see the command-line prompt, you must press Enter. The MC provides direct access to the CLI.
NOTE: Directly cabling to the CLI port is an out-of-band connection, because it communicates outside of the data paths used to transfer information from a computer or network to the controller enclosure.

Device driver/special operation mode

Certain operating systems require a device driver or special mode of operation. Product and vendor identification information required for such setup is provided below.
Table 31 USB vendor and product identification codes
USB Identification code type Code
USB Vendor ID 0x210c
USB Product ID 0xa4a7

Microsoft Windows

Microsoft Windows operating systems provide a USB serial port driver. However, the USB driver requires details for connecting to S3200/S2200 controller enclosures. Lenovo provides a device driver for use in the Windows environment. The USB device driver and installation instructions are available via a download.
Obtaining the software download
1. Verify that the management computer has Internet access.
2. See Lenovo’s customer support website: support.lenovo.com.
a. Select Product Support, and navigate to Storage Products. Peruse the location for information about the “USB driver.”
b. Follow the instructions accompanying the device driver topic for Microsoft Windows.

Linux

Although Linux operating systems do not require installation of a device driver, certain parameters must be provided during driver loading to enable recognition of the S3200/S2200 controller enclosures.
Page 79
Setting parameters for the device driver
1. Enter the following command:
modprobe usbserial vendor=0x210c product=0xa4a7 use_acm=1
2. Press Enter to execute the command.
The Linux device driver is loaded with the parameters required to recognize the controllers.
NOTE: Optionally, this information can be incorporated into the /etc/modules.conf file.
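On distributions that have replaced /etc/modules.conf with the /etc/modprobe.d/ directory, the same parameters can typically be made persistent with a one-line drop-in file; the file name below is arbitrary, and the device node /dev/ttyUSB0 is only the usual default assigned by the usbserial driver, so treat the following as a sketch rather than part of the official procedure:
# /etc/modprobe.d/usbserial.conf (hypothetical file name; loads the parameters at boot)
options usbserial vendor=0x210c product=0xa4a7 use_acm=1
Once the module is loaded and the enclosure is detected, a terminal emulator such as Minicom (see Table 30) can usually be attached directly to the emulated serial port, for example:
minicom -D /dev/ttyUSB0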
Using the CLI port and cable—known issues on Windows
When using the CLI port and cable for setting network port IP addresses, be aware of the following known issues on Microsoft Windows platforms.

Problem

On Windows operating systems, the USB CLI port may encounter issues preventing the terminal emulator from reconnecting to storage after the Management Controller (MC) restarts or the USB cable is unplugged and reconnected.

Workaround

Follow these steps when using the mini-USB cable and USB Type B CLI port to communicate out-of-band between the host and controller module for setting network port IP addresses.
To create a new connection or open an existing connection (HyperTerminal):
1. From the Windows Control Panel, select Device Manager.
The Device Manager page should show “Ports (COM & LPT)” with an entry entitled “Disk Array USB Port (COMn)”—where n is your system’s COM port number.
2. Connect using the USB COM port and Detect Carrier Loss option.
a. Select Connect To > Connect using: > pick a COM port from the list.
b. Select the Detect Carrier Loss check box.
3. Set network port IP addresses using the CLI (see procedure on page 41).
To restore a hung connection when the MC is restarted (any supported terminal emulator):
1. If the connection hangs, disconnect and quit the terminal emulator program.
a. Using Device Manager, locate the COMn port assigned to the Disk Array Port.
b. Right-click on the hung Disk Array USB Port (COMn), and select Disable.
c. Wait for the port to disable.
2. Right-click on the previously hung—now disabled—Disk Array USB Port (COMn), and select Enable.
3. Start the terminal emulator and connect to the COM port.
4. Set network port IP addresses using the CLI (see procedure on page 41).
Page 80

E SFP option for CNC ports

[Figures 49 and 50 (below) show the S3200 and S2200 CNC controller module face plates, with callouts identifying the target CNC port, an SFP aligned for installation (plug removed/actuator open), an installed SFP (actuator closed), and the fibre-optic interface cable.]

Locate the SFP transceivers

Locate the qualified SFP options for your CNC controller module within your product ship kit. The SFP transceiver (SFP) should look similar to the generic SFP shown in the figures below.
Figure 49 Install a qualified SFP option into an S3200 CNC controller module
Figure 50 Install a qualified SFP option into an S2200 CNC controller module

Install an SFP transceiver

For each target CNC port, perform the following procedure to install an SFP. Refer to the appropriate figure above when performing the steps. Follow the guidelines provided in Electrostatic discharge when installing an SFP.
Page 81
1. Orient the SFP as shown above, and align it for insertion into the target CNC port. The SFP should be positioned such that the actuator pivot-hinge is on top.
2. If the SFP has a plug, remove it before installing the transceiver. Retain the plug.
3. Flip the actuator open as shown in the figure (near the left detail view).
The actuator on your SFP option may look slightly different than the one shown, and it may not open to a sweep greater than 90° (as shown in the figure).
4. Slide the SFP into the target CNC port until it locks into place.
5. Flip the actuator down, as indicated by the down-arrow next to the open actuator in the figure.
The installed SFP should look similar to the position shown in the right detail view.
6. When ready to attach to the host, obtain and connect a qualified fibre-optic interface cable into the duplex jack at the end of the SFP connector.
NOTE: To remove an SFP module, perform the above steps in reverse order.

Verify component operation

View the CNC port Link Status/Link Activity LED on the controller module face plate. A green LED indicates that the port is connected and the link is up (see LED descriptions for information about controller module LEDs).
Page 82

F SAS fan-out cable option

[Figure 51 (below) shows simplified plan and profile views of the HD mini-SAS to mini-SAS fan-out cable, with callouts identifying the connection to the controller host interface port, the upper cable to the 1st HBA port, the lower cable to the 2nd HBA port, and the bottom of the connector.]

Locate the SAS fan-out cable

Locate the appropriate qualified SAS fan-out cable option for your 2-port SAS controller module. Qualified fan-out cable options are described within Cable requirements for storage enclosures on page 23 and HD
mini-SAS host connection on page 33. Cabling examples showing use of SAS fan-out cables are provided:
See Figure 19 on page 34: direct attach featuring one server/two HBAs/dual path (single-IOM)
See Figure 24 on page 37: direct attach featuring four servers/variable HBAs/dual path (dual-IOMs)
NOTE: Hosts should be connected to the same ports on both controllers to align with the utilization shown
in the SMC.

Install the SAS fan-out cable

Orient the cable for connection to the controller module and host as shown in Figure 51 and Figure 52 on page 83. For each fan-out cable type, the pull-tab is facing upwards when aligned for insertion into the host interface port of the controller module. The port closer to the pull-tab is the first of the two ports, and the port located away from the pull tab is the second of the two ports.
Simplified plan and profile views of the bifurcated HD mini-SAS to mini-SAS cable show orientation for connection to the controller module (on left) and the host (on right).
Figure 51 HD mini-SAS to mini-SAS fan-out cable
Page 83
Figure 52 HD mini-SAS to HD mini-SAS fan-out cable
[Figure 52 callouts identify the connection to the controller host interface port, the upper cable to the 1st HBA port, and the lower cable to the 2nd HBA port.]
Simplified plan and profile views of the bifurcated HD mini-SAS to HD mini-SAS cable show orientation for connection to the controller module (on left) and the host (on right).
Page 84

Index

Numerics
2U12
3.5" 12-drive enclosure
2U24
2.5" 24-drive enclosure
73 73
A
accessing
CLI (Command-line Interface) SMC (web-based management GUI)
audience
10
41
B
bezel
2U12 enclosure 2U24 enclosure
56 56
C
cables
FCC compliance statement shielded
cabling
cable routing requirements connecting controller and drive enclosures considerations direct attach configurations switch attach configurations
cache
post-write read-ahead
clearance requirements
service ventilation
CNC ports
change port mode locate and install SFPs SFP transceivers
Command-line Interface
using to set controller IP addresses
CompactFlash
memory card location
components
12-drive enclosure front panel E1024/E1012 rear panel Power Supply Unit (PSU)
S2200 rear panel
40, 72
30
19
19
72
72
44
30
AC
15
12 Gb SAS ports CLI (reserved for future use) CLI port (USB) CNC ports (1 Gb iSCSI) CNC ports (FC/10GbE) expansion port
17, 18
17, 18
40, 72
72
32
38
80, 82
41
20
14
19
18
17, 18
18 17
45
22
network port service port 1 service port 2
S3200 rear panel
12 Gb SAS ports CLI (reserved for future use) CLI port (USB) CNC ports (1 Gb iSCSI) CNC ports (FC/10GbE) expansion port network port service port 1 service port 2
connecting
controller enclosures to hosts to remote management hosts
connections
test
28
verify
28
console requirement controller enclosures
connecting to hosts connecting to remote management hosts
controller modules
2-port 1 Gb iSCSI (CNC) 2-port 10GbE iSCSI (CNC) 2-port 12 Gb SAS 2-port 8/16 Gb FC (CNC) 4-port 1 Gb iSCSI (CNC) 4-port 10GbE iSCSI (CNC) 4-port 12 Gb SAS 4-port 8/16 Gb FC (CNC)
conventions
document
17, 18
17, 18 17, 18
17
16, 17
16, 17
16
16
16, 17
16, 17
16, 17 16, 17
30 40
72
30
9
9
9
9
9
9
9
9
12
D
DHCP
obtaining IP addresses server
41
direct attach configurations disk drive
LEDs
general specific states
document
conventions prerequisite knowledge related documentation
60
12
41
30
61
10
11
E
electrostatic discharge
grounding methods precautions
76
76
40
Page 85
enclosure
cabling IDs, correcting initial configuration input frequency requirement input voltage requirement installation checklist site requirements troubleshooting weight
Ethernet cables
requirements
23
48
21
75
21
73
48
74
40
F
faults
isolating
methodology
46
H
host interface ports
FC (8/16 Gb) FC host interface protocol
loop topology
point-to-point protocol iSCSI (10GbE) iSCSI (1Gb) iSCSI host interface protocol
mutual CHAP SAS (12Gb) SAS host interface protocol
hosts
defined optional software stopping I/O system requirements
humidity non-operating range humidity operating range
31
31
31
31
31
31
32
30
30
48
30
74
74
I
IDs, correcting for enclosure 48
IP addresses
setting using DHCP setting using the CLI
41
41
L
LEDs
2U12 front panel
CRU OK
Disk drive
Enclosure ID
Fault/Service Required
Temperature Fault
Unit Locator 2U24 front panel
CRU OK
Disk drive
Enclosure ID
Fault/Service Required
59
59
59
59
59
59
58
58
58
58
75
32
Temperature Fault Unit Locator
Disk
Fault
60
Power/Activity
E1024/E1012 face plate
CRU OK Fault/Service Required OK to Remove SAS In Port Status SAS Out Port Status Unit Locator
enclosure rear panel
S2200 CNC
10GbE iSCSI Host Link Status/Link Activity 1Gb iSCSI Host Link Status/Link Activity Cache Status CRU OK Expansion Port Status Fault/Service Required FC Host Link Status/Link Activity Network Port Link Active Network Port Link Speed OK to Remove Unit Locator
S2200 SAS
12 Gb Host Link Activity 12 Gb Host Link Status Cache Status CRU OK Expansion Port Status Fault/Service Required Network Port Link Active Network Port Link Speed OK to Remove Unit Locator
S3200 CNC
10GbE iSCSI Host Link Status/Link Activity 1Gb iSCSI Host Link Status/Link Activity Cache Status CRU OK Expansion Port Status Fault/Service Required FC Host Link Status/Link Activity Network Port Link Active Network Port Link Speed OK to Remove Unit Locator
S3200 SAS
12 Gb Host Link Activity 12 Gb Host Link Status Cache Status CRU OK Expansion Port Status Fault/Service Required Network Port Link Active Network Port Link Speed OK to Remove Unit Locator
70
66, 67
68
63, 64
65
58
58
60
70
70
70
70
70
66, 67
66, 67
66, 67
66
66, 67
66, 67
66, 67
66, 67
68
68
68
68
68
68
68
68
68
63, 64
63, 64
63, 64
63
63, 64
63, 64
63, 64
63, 64
65
65
65
65
65
65
65
65
65
66
67
63
64
Page 86
using to diagnose fault conditions 49
local management host requirement
72
M
MPIO DSM
native Microsoft installation see related documentation
30
30
N
non-operating ranges, environmental 74
O
operating ranges, environmental 74
optional software
30
P
physical requirements 73
power cord requirements power cycle
power on
power supply
AC power requirements
prerequisite knowledge
29
75
71
10
safety precautions
SMC
web-based storage management interface
storage system setup
configuring getting started
provisioning supercapacitor pack switch attach configurations
71
45
45
45
20
38
T
temperature non-operating range 74
temperature operating range troubleshooting
controller failure, single controller configuration
correcting enclosure IDs
enclosure does not initialize
expansion port connection fault
host-side connection fault
using event notification
using system LEDs
using the CLI
using the SMC
46
47, 49
47
46
74
48
48
53
53
47
45
52
R
regulatory compliance
notices
shielded cables related documentation remote management requirements
cabling clearance Ethernet cables host system physical
ventilation RFI/EMI connector hoods rugged chassis
European Telco compliant
72
72
73
72
40, 72
11
40
40
30
40, 72
9
S
safety precautions 71
sensors
locating
power supply
temperature
voltage SFP transceivers
installing
locating
supported options
verifying operation shock non-operating range shock operating range site planning
local management host requirement
physical requirements
54
54
54
55
80
80, 82
30
81
74
74
73
72
U
Unified LUN Presentation 30
USB device connection
Command-line Interface (CLI) device driver emulated serial port rear panel USB ports supported host applications vendor and product ID codes
78
77
77
V
ventilation requirements 72
vibration non-operating range vibration operating range
74
W
warnings
temperature voltage
53
53
78
78
78
74