HP EVA Array iSCSI Connectivity Option User Manual

HP StorageWorks EVA iSCSI connectivity user guide
Part number: 5697-7577
Tenth edition: July 2008
Legal and notice information
© Copyright 2006-2008 Hewlett-Packard Development Company, L.P.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Apple and the Apple logo are trademarks of Apple Computer, Inc., registered in the U.S. and other countries.
Microsoft, Windows, and Windows XP are U.S. registered trademarks of Microsoft Corporation.
Intel is a trademark of Intel Corporation in the U.S. and other countries.
Red Hat and Enterprise Linux are registered trademarks of Red Hat, Inc., in the United States and other countries.
Java is a US trademark of Sun Microsystems, Inc.
Contents

About this guide .......... 17
  Overview .......... 17
  Intended audience .......... 17
  Related documentation .......... 17
  Document conventions and symbols .......... 18
  Rack stability .......... 18
  HP technical support .......... 19
  Subscription service .......... 19
  Other HP websites .......... 19
1 Overview of the EVA and EVA4400 iSCSI connectivity option .......... 21
  EVA and EVA4400 iSCSI connectivity product description .......... 21
  EVA and EVA4400 iSCSI connectivity options .......... 22
  EVA and EVA4400 iSCSI connectivity hardware and software support .......... 25
    Hardware support .......... 25
      The mpx100/100b data transport .......... 25
      Fibre Channel switch hardware support .......... 25
      Storage systems .......... 26
    Software support requirements .......... 26
      Management software requirements .......... 26
      Multipath software requirements .......... 26
  Security .......... 28
  Configuring HP StorageWorks Continuous Access EVA and Business Copy .......... 28
2 Configuration rules and guidelines .......... 29
  EVA and EVA4400 iSCSI Connectivity option .......... 29
    EVA and EVA4400 iSCSI connectivity option architectural design limits .......... 29
    EVA and EVA4400 iSCSI connectivity option supported maximums .......... 29
    General EVA and EVA4400 iSCSI connectivity rules .......... 30
    Operating systems supported .......... 30
  Initiator rules and guidelines .......... 31
    iSCSI Initiator rules and guidelines .......... 31
    VMware iSCSI Initiator rules and guidelines .......... 31
      Network teaming .......... 31
      iSCSI Initiator software .......... 31
    Windows iSCSI Initiator rules and guidelines .......... 32
      Windows requirements .......... 32
      Windows iSCSI Initiator multipath requirements .......... 32
    Apple Mac OS X iSCSI Initiator rules and guidelines .......... 32
    Linux iSCSI Initiator rules and guidelines .......... 32
    Solaris iSCSI Initiator rules and guidelines .......... 33
      Solaris iSCSI Initiator multipath rules and guidelines .......... 33
    OpenVMS iSCSI Initiator rules and guidelines .......... 33
      OpenVMS hardware requirements .......... 33
      OpenVMS software requirements .......... 34
      OpenVMS iSCSI rules and guidelines .......... 34
      iSCSI Initiator software .......... 34
  EVA storage system rules and guidelines .......... 34
    HP StorageWorks EVA storage system software .......... 35
      Supported features for iSCSI hosts .......... 36
      Features not supported for iSCSI hosts .......... 36
  Fibre Channel switch/fabric rules and guidelines .......... 36
  HP Command View EVA management rules and guidelines .......... 37
  Supported IP network adapters .......... 37
  IP network requirements .......... 38
3 Installing and upgrading EVA iSCSI connectivity .......... 39
  Verifying your system requirements .......... 39
  Verify your installation type and components .......... 39
  Installing EVA and EVA4400 iSCSI connectivity .......... 40
  Reconfiguring the factory-installed EVA or EVA4400 iSCSI connectivity option to fabric attach mode .......... 41
  Field direct connect—HP StorageWorks EVA or EVA4400 iSCSI connectivity option with direct connect attachment mode .......... 42
  Field fabric attach—HP StorageWorks EVA or EVA4400 iSCSI connectivity option with fabric attach mode .......... 42
  Multipath direct connect—HP StorageWorks EVA or EVA4400 iSCSI upgrade option for multipathing capability and direct connect attachment mode .......... 43
  Multipath fabric attach—HP StorageWorks EVA or EVA4400 iSCSI upgrade option with multipathing capability and fabric attach mode .......... 43
  Rack mounting the mpx100/100b .......... 43
  Connecting the mpx100/100b to an IP switch .......... 44
  Starting the mpx100/100b .......... 45
  Setting the mpx100/100b management port to use HP StorageWorks Command View EVA .......... 45
4 Configuring the mpx100/100b .......... 49
  General description of the mpx100/100b .......... 49
    The mpx100/100b .......... 49
    Chassis LEDs .......... 49
      Power LED (green) .......... 50
      Heartbeat LED (green) .......... 50
      System Fault LED (amber) .......... 50
    Chassis controls .......... 50
    Maintenance button .......... 51
      Resetting the mpx100/100b .......... 51
      Resetting the IP address .......... 51
      Enabling DHCP .......... 51
      Resetting to factory default configuration .......... 52
    FC ports .......... 52
      Port LEDs .......... 52
      Activity LED (amber) .......... 52
      Status LED (green) .......... 52
      Alert LED (yellow) .......... 52
    Transceivers .......... 53
    iSCSI/Gigabit Ethernet ports .......... 53
      Port LEDs .......... 53
      Activity LED (green) .......... 54
      Link Status LED (green) .......... 54
    Management Ethernet port .......... 54
    Serial port .......... 54
  Installation and maintenance .......... 55
    Power requirements .......... 55
    Environmental conditions .......... 55
    Connecting the server to the mpx100/100b .......... 55
    Configuring the server .......... 55
      Setting the server IP address .......... 56
      Configuring the server serial port .......... 56
    Installing the mpx Manager as a standalone application .......... 56
      HP StorageWorks mpx Manager for Windows .......... 57
      HP StorageWorks mpx Manager for Linux .......... 58
    Connecting the mpx100/100b to AC power .......... 58
    Starting and configuring the mpx100/100b .......... 58
      Starting HP StorageWorks mpx Manager for Windows .......... 58
      Starting HP StorageWorks mpx Manager for Linux .......... 59
      Configuring the mpx100/100b .......... 60
      Configuring the mpx100/100b iSCSI ports for Internet Storage Name Service (iSNS) (optional) .......... 60
    Installing the mpx100/100b firmware .......... 62
      Using HP StorageWorks mpx Manager to install mpx100/100b firmware .......... 63
      Using the CLI to install mpx100/100b firmware .......... 63
5 Setting up the iSCSI Initiator and storage .......... 65
  iSCSI Initiator setup .......... 65
  iSCSI Initiator setup for Windows (single-path) .......... 65
  Storage setup for Windows (single-path) .......... 67
  About Microsoft Windows Server 2003 scalable networking pack .......... 67
    SNP setup with HP NC3xxx GbE multifunction adapter .......... 67
  iSCSI Initiator version 3.10 setup for Apple Mac OS X (single-path) .......... 68
    Set up the iSCSI Initiator for Apple Mac OS X .......... 68
    Storage setup for Apple Mac OS X .......... 74
  iSCSI Initiator setup for Linux .......... 75
    Installing and configuring the SUSE Linux Enterprise 10 iSCSI driver .......... 75
      Installing and configuring for Red Hat 5 .......... 77
      Installing and configuring for Red Hat 3, 4 and SUSE 8 and 9 .......... 78
      Installing the initiator for Red Hat 3 and SUSE 8 .......... 78
      Installing the iSCSI driver .......... 79
    Assigning device names .......... 79
    Target bindings .......... 80
    Mounting file systems .......... 80
    Unmounting file systems .......... 81
    Presenting EVA storage for Linux .......... 81
  iSCSI Initiator setup for Solaris (single-path) .......... 81
    EVA LUN 0 with Solaris iSCSI Initiators .......... 81
      Disabling Controller LUN Auto Map using the mpx CLI .......... 82
    Prepare for a Solaris iSCSI configuration .......... 82
    Configure for EVA iSCSI target discovery .......... 83
      Set target discovery using MPX iSCSI port address .......... 83
      Set target discovery using iSNS server address .......... 83
    Creating an iSCSI host and virtual disks for the Solaris iSCSI Initiator .......... 84
    Command View 6.0.2 and 7.0 only—Remove LUN 0 from the Solaris iSCSI Initiator using the CLI .......... 84
    Accessing iSCSI disks .......... 85
    Monitoring your iSCSI configuration .......... 86
  iSCSI Initiator setup for VMware .......... 86
  iSCSI Initiator setup for OpenVMS .......... 89
    Configuring TCP/IP services .......... 90
    Configuring VLANs .......... 90
    Enabling Ethernet jumbo frames .......... 90
    Configuring target discovery .......... 90
    Starting the iSCSI Initiator .......... 92
    Stopping the iSCSI Initiator .......... 92
    Setting up storage for OpenVMS .......... 92
6 Setting up the iSCSI Initiator for multipathing .......... 95
  Overview .......... 95
    Understanding Fibre Channel multipathing for the mpx100/100b .......... 95
      EVA storage array perspective .......... 95
      The mpx100/100b perspective .......... 96
    Understanding iSCSI multipathing with the mpx100/100b .......... 97
      iSCSI Initiator perspective .......... 98
  Configuring multipath with Windows iSCSI Initiator .......... 100
    Microsoft MPIO multipathing support for iSCSI .......... 100
      Installing the MPIO feature for Windows Server 2008 .......... 100
      Installing the MPIO feature for Windows Server 2003 .......... 102
      Load balancing features of Microsoft MPIO for iSCSI .......... 106
      Microsoft MPIO with QLogic iSCSI HBA .......... 107
  Configuring multipath with the VMware iSCSI Initiator .......... 112
    Native multipathing solution for iSCSI .......... 112
    Setting up multipath configurations .......... 112
    Managing multipathing .......... 113
    VMware LUN multipathing policies .......... 113
    Viewing and changing multipathing .......... 114
    Viewing raw mapped LUNs properties .......... 114
    Important information about multi-initiators and VMFS clusters .......... 115
  Configuring multipath with the Solaris 10 iSCSI Initiator .......... 116
    MPxIO overview .......... 116
      Preparing the host system .......... 116
      Setting configuration settings .......... 117
      Verifying configuration settings .......... 119
      Verifying that a single device is displayed for each LUN .......... 120
      Enabling MPxIO .......... 121
      Verifying the MPxIO configuration .......... 121
      Verify mpath-support parameter and Target Portal Group .......... 122
  Configuring multipath with the OpenVMS iSCSI Initiator .......... 123
    Path description string format .......... 123
    Displaying path information .......... 124
    Manual path switching .......... 125
  Device Mapper Multipath Enablement Kit for HP StorageWorks Disk Arrays .......... 126
    Supported operating systems .......... 126
    Installing the Device Mapper Multipath Enablement Kit .......... 126
    iSCSI Initiator timeout settings for Red Hat 4 and SUSE 9 systems .......... 127
    iSCSI Initiator timeout settings for Red Hat 5 and SUSE 10 systems .......... 127
    HP DM Multipath restrictions .......... 127
7 Using HP Command View EVA to configure LUNs to iSCSI Initiators .......... 129
  Initial discovery of the mpx100/100b via HP Command View EVA .......... 129
  Creating an iSCSI Initiator host via HP Command View EVA .......... 130
  Presenting a virtual disk to an iSCSI Initiator via HP Command View EVA .......... 131
  Unpresenting a virtual disk to an iSCSI Initiator using HP Command View EVA .......... 131
8 iSCSI Boot from SAN .......... 133
  HP Multifunction Gigabit server adapter requirements .......... 133
    Supported operating systems for the HP Multifunction Gigabit server adapter .......... 133
    Supported hardware .......... 133
    iSCSI option ROM .......... 134
  QLogic iSCSI Host Bus adapter .......... 134
    Supported operating systems for the QLogic iSCSI HBA .......... 134
    Supported hardware .......... 134
    Supported BIOS .......... 134
  Installing the OS on HP Multifunction Gigabit Server Adapters .......... 134
  Configuring the BIOS on QLogic iSCSI HBAs .......... 134
  Installing the OS on QLogic iSCSI HBAs .......... 135
9 EVA4400 iSCSI Connectivity 32 Initiator Upgrade License .......... 137
  Installing the EVA4400 iSCSI Connectivity 32 Initiator Upgrade License .......... 137
  Installing the EVA4400 iSCSI Connectivity 32 Initiator Upgrade License with mpx Manager GUI .......... 138
A Command line interface .......... 141
  Command line interface for the mpx100/100b .......... 141
  mpx100/100b log-on, user accounts, and backup and restore .......... 141
    Logging on to a SAN mpx100/100b .......... 141
    User accounts .......... 141
    Backup and restore .......... 141
    Commands .......... 141
      Admin command .......... 142
      Beacon command .......... 143
      Date command .......... 143
      Clear command .......... 144
      FRU command .......... 144
      Help command .......... 145
      History command .......... 147
      Image command .......... 147
      Initiator command .......... 147
      Logout command .......... 148
      LUNmask command .......... 148
      Password command .......... 150
      Ping command .......... 151
      Quit command .......... 151
      Reboot command .......... 151
      Reset command .......... 152
      Save command .......... 152
      Set command .......... 152
      Set CHAP command .......... 153
      Set FC command .......... 153
      Set Features command .......... 154
      Set iSCSI command .......... 154
      Set iSNS command .......... 155
      Set MGMT command .......... 156
      Set NTP command .......... 156
      Set Properties command .......... 157
      Set SNMP command .......... 157
      Set System command .......... 158
      Set VLAN command .......... 158
      Show command .......... 158
      Show CHAP command .......... 159
      Show Features command .......... 160
      Show Logs command .......... 160
      Show LUNinfo command .......... 160
      Show Perf command .......... 160
      Show Properties command .......... 161
      Show System command .......... 161
      Show MGMT command .......... 162
      Show iSCSI command .......... 162
      Show FC command .......... 162
      Show Initiators command .......... 163
      Show Initiators LUNMask command .......... 163
      Show iSNS command .......... 164
      Show LUNs command .......... 164
      Show LUNMask command .......... 165
      Show NTP command .......... 166
      Show PresentedTargets command .......... 166
      Show SNMP command .......... 167
      Show Stats command .......... 168
      Show Targets command .......... 171
      Show VLAN command .......... 172
      Target command .......... 172
B Diagnostics and troubleshooting .......... 173
  Chassis diagnostics .......... 173
    Input Power LED is extinguished .......... 173
    System Alert LED is illuminated .......... 173
    Power-on self-test diagnostics .......... 174
      Heartbeat LED blink patterns .......... 174
    mpx100/100b log data .......... 175
    The mpx100/100b statistics .......... 175
  Troubleshooting LUN presentation from the EVA to the iSCSI Initiator .......... 175
    Troubleshooting EVA communication with the mpx100/100b .......... 176
    Troubleshooting EVA or LUN FC port connections .......... 176
    Troubleshooting iSCSI Initiator connections to the EVA iSCSI targets .......... 176
    iSCSI Initiator, EVA iSCSI target, and EVA LUN connections .......... 177
  HP Command View EVA refresh .......... 177
C Log data .......... 179
  Informational log messages .......... 179
    Application modules .......... 179
    iSCSI driver .......... 180
    Fibre Channel driver .......... 180
  Error log messages .......... 181
    Application modules .......... 181
    iSCSI driver .......... 184
    Fibre Channel driver .......... 185
    User modules .......... 186
    System .......... 187
  Fatal log messages .......... 187
    iSCSI driver .......... 187
    FC driver .......... 189
    System .......... 190
D Simple Network Management Protocol .......... 191
  SNMP properties .......... 191
  SNMP trap configuration .......... 191
  Management Information Base .......... 192
    System information .......... 192
      qsrSerialNumber .......... 192
      qsrHwVersion .......... 192
      qsrSwVersion .......... 192
      qsrNoOfFcPorts .......... 192
      qsrNoOfGbEPorts .......... 193
      qsrAgentVersion .......... 193
    Network port table .......... 193
      qsrNwPortTable .......... 193
      qsrNwPortEntry .......... 193
      QsrNwPortEntry .......... 193
      qsrNwPortRole .......... 194
      qsrNwPortIndex .......... 194
      qsrNwPortAddressMode .......... 194
      qsrIPAddressType .......... 194
      qsrIPAddress .......... 194
      qsrNetMask .......... 194
      qsrGateway .......... 194
      qsrMacAddress .......... 195
      qsrNwLinkStatus .......... 195
      qsrNwLinkRate .......... 195
    FC port table information .......... 195
      qsrFcPortTable .......... 195
      qsrFcPortEntry .......... 195
      qsrFcPortRole .......... 196
      qsrFcPortIndex .......... 196
      qsrFcPortNodeWwn .......... 196
      qsrFcPortWwn .......... 196
      qsrFcPortId .......... 196
      qsrFcPortType .......... 197
      qsrFcLinkStatus .......... 197
      qsrFcLinkRate .......... 197
    Sensor table .......... 197
      qsrSensorTable .......... 197
      qsrSensorEntry .......... 197
      qsrSensorType .......... 198
      qsrSensorIndex .......... 198
      qsrSensorUnits .......... 198
      qsrSensorValue .......... 198
      qsrUpperThreshold .......... 198
      qsrLowerThreshold .......... 198
      qsrSensorState .......... 199
  Notifications .......... 199
    Notification objects .......... 199
      qsrEventSeverity .......... 199
      qsrEventDescription .......... 199
      qsrEventTimeStamp .......... 200
    Agent startup notification .......... 200
    Agent shutdown notification .......... 200
    Network port down notification .......... 200
    FC port down notification .......... 200
    Sensor notification .......... 200
    Generic notification .......... 200
E Setting up authentication .......... 203
  CHAP restrictions .......... 203
    The mpx100/100b CHAP secret restrictions .......... 203
    Microsoft Initiator CHAP secret restrictions .......... 204
    Linux version 3.6.3 CHAP restrictions .......... 204
    ATTO Macintosh CHAP restrictions .......... 204
    Recommended CHAP policies .......... 204
    iSCSI session types .......... 204
    The mpx100/100b CHAP modes .......... 204
  Enabling single-direction CHAP during discovery and normal session .......... 205
    Enabling CHAP for the mpx100/100b-discovered iSCSI initiator entry .......... 206
    Enable CHAP for the Microsoft iSCSI Initiator .......... 207
  Enabling single-direction CHAP during discovery and bidirectional CHAP during normal session .......... 207
  Enabling bidirectional CHAP during discovery and single-direction CHAP during normal session .......... 209
  Enabling bidirectional CHAP during discovery and bidirectional CHAP during normal session .......... 211
F Saving and restoring the mpx100/100b configuration .......... 213
  Saving the mpx100/100b configuration .......... 213
    Saving the configuration using the mpx100/100b GUI .......... 213
    Saving the configuration using the mpx100/100b CLI .......... 213
  Restoring the mpx100/100b configuration .......... 214
    Restoring the configuration using the mpx100/100b GUI .......... 214
    Restoring the configuration using the mpx100/100b CLI .......... 214
G Regulatory compliance and safety .......... 217
  Regulatory compliance .......... 217
    Federal Communications Commission notice for Class A equipment .......... 217
      Declaration of conformity for products marked with the FCC logo, United States only .......... 217
      Modifications .......... 217
      Cables .......... 217
    Regulatory compliance identification numbers .......... 218
    Laser device .......... 218
      Laser safety warning .......... 218
      Laser product label .......... 218
    International notices and statements .......... 218
      Canadian notice (avis Canadien) .......... 218
      Class A equipment .......... 218
      European Union notice .......... 219
      BSMI notice .......... 219
      Japanese notice .......... 219
      Korean notices .......... 220
  Safety .......... 220
    Battery replacement notice .......... 220
    Taiwan battery recycling notice .......... 220
    Power cords .......... 220
    Japanese power cord statement .......... 221
Glossary .......... 223
Index .......... 225
Figures

1  Direct connect iSCSI-Fibre Channel attachment mode configuration .......... 22
2  EVA4400 direct connect iSCSI-Fibre Channel attachment mode configuration .......... 23
3  HP Command View EVA deployment configuration 1 .......... 23
4  HP Command View EVA deployment configuration 2 .......... 24
5  EVA8x00 mpx100 and Windows host direct-connect only .......... 24
6  Fabric iSCSI-Fibre Channel attachment mode .......... 25
7  Multipath direct connect iSCSI-Fibre Channel attachment mode configuration .......... 27
8  Multipath fabric iSCSI-Fibre Channel attachment mode configuration .......... 27
9  EVA4/6/8x00 fabric with four iSCSI–Fibre Channel controller host ports .......... 28
10 The mpx100 port and LED locations .......... 45
11 Discover iSCSI devices .......... 46
12 Hardware/iSCSI devices .......... 47
13 The mpx100 external components .......... 49
14 Chassis LEDs .......... 50
15 Chassis controls .......... 51
16 Fibre Channel LEDs .......... 52
17 Gigabit Ethernet (iSCSI) ports .......... 53
18 Management Ethernet port .......... 54
19 Serial port .......... 54
20 Connect to the mpx100/100b .......... 59
21 Typical mpx Manager display .......... 59
22 Adding an IP address .......... 66
23 Inactive target status .......... 66
24 Connected target status .......... 67
25 Discover targets .......... 69
26 Add static IP address .......... 69
27 Discovered target list .......... 70
28 iSNS discovery and verification .......... 71
29 Discovered targets .......... 72
30 Selecting newly added target .......... 73
31 Select status .......... 74
32 Presented EVA LUNs .......... 75
33 Configure initiator and targets .......... 76
34 Discovered Targets tab .......... 76
35 Target login .......... 77
36 Connected Targets tab .......... 77
37 Configuration tab .......... 87
38 Security profile information .......... 87
39 General properties dialog box .......... 88
40 Add send targets server dialog box .......... 88
41 Rescan dialog box .......... 89
42 Example: Single mpx100 multipath—WWPN configuration .......... 96
43 Example: Dual mpx100 multipath—WWPN configuration .......... 96
44 Example: Single mpx100 multipath—iSCSI target configuration .......... 97
45 Example: Dual mpx100 multipath—iSCSI target configuration .......... 97
46 Example: Fibre Channel to IP port/target translation .......... 98
47 Example: Single mpx100 iSCSI port IP addressing .......... 98
48 Example: Dual mpx100 iSCSI port IP addressing .......... 99
49 Add Features page .......... 101
50 MPIO Properties page .......... 102
51 Software update installation wizard .......... 103
52 Properties screen for iSCSI Initiator .......... 103
53 Log on to target .......... 104
54 Computer management .......... 105
55 iSCSI Initiator properties .......... 106
56 Microsoft iSCSI Initiator services screen .......... 107
57 Connect to host screen .......... 108
58 Start general configuration window .......... 109
59 HBA port target configuration .......... 109
60 Target settings tab .......... 110
61 HBA iSCSI port connections .......... 111
62 iSCSI Initiator properties—Targets tab .......... 112
63 Typical iSCSI path description string .......... 124
64 SHOW DEVICE/MULTIPATH example .......... 124
65 SHOW DEVICE/FULL example .......... 125
66 SET DEVICE/SWITCH/PATH example .......... 126
67 Add a host for Command View EVA .......... 131
68 New license key dialog box .......... 139
69 License key complete dialog box .......... 139
70 Display installed license key .......... 140
71 Chassis LEDs .......... 173
72 Normal heartbeat blink pattern .......... 174
73 System error blink pattern .......... 174
74 Port IP address conflict blink pattern .......... 175
75 Over-temperature blink pattern .......... 175
76 Class 1 laser product label .......... 218
Tables

1  Document conventions .......... 18
2  iSCSI Fibre Channel attachment option part numbers .......... 21
3  Connectivity attachment mode supported .......... 25
4  Multipathing software requirements .......... 26
5  Supported mpx100 maximums .......... 29
6  Operating systems .......... 30
7  EVA configuration table (mpx100/100b) .......... 35
8  Support for EVA storage system software with iSCSI connectivity .......... 36
9  Fibre Channel switch/fabric requirements .......... 36
10 Supported IP network adapters .......... 37
11 Installation information .......... 40
12 Installation components .......... 40
13 Port LED messages .......... 53
14 Serial port pin definition .......... 55
15 mpx Manager GUI server requirements .......... 57
16 Single mpx100/100b multipath configuration .......... 99
17 Example: Dual mpx100/100b multipath configuration .......... 99
18 Typical iSCSI path description string .......... 124
19 Operating systems supported by the Device Mapper Multipath Enablement Kit .......... 126
20 Command-line completion .......... 142
21 Application modules—informational log messages .......... 179
22 iSCSI driver—informational log messages .......... 180
23 FC driver—informational log messages .......... 180
24 Application module—error log messages .......... 181
25 iSCSI driver—error log messages .......... 184
26 FC driver—error log messages .......... 185
27 User modules—error log messages .......... 186
28 System—error log messages .......... 187
29 iSCSI driver—fatal log messages .......... 187
30 FC driver—fatal log messages .......... 189
31 System—fatal log messages .......... 190
32 SNMP properties .......... 191
33 SNMP parameters .......... 192
34 CHAP single direction settings .......... 205
35 CHAP single and bidirectional settings .......... 207
36 CHAP bidirectional and single settings .......... 209
37 CHAP bidirectional and bidirectional settings .......... 211
About this guide
This user guide provides information to help you:
• Install the Enterprise Virtual Array (EVA) or EVA4400 iSCSI connectivity option
• Install an additional EVA or EVA4400 iSCSI connectivity option for high availability
• Configure EVA or EVA4400 iSCSI connectivity multipath software
• Install software initiators for different operating systems
• Configure EVA iSCSI LUNs using HP Command View EVA
• Configure the mpx100 or mpx100b
This section contains the following topics:
• Overview, page 17
• Conventions, page 18
• HP technical support, page 19
• Subscription service, page 19
• Other HP websites, page 19
Overview
This section contains the following topics:
• Intended audience, page 17
• Related documentation, page 17
Intended audience
This guide is intended for system administrators with knowledge of:
• HP StorageWorks EVA4x00/6x00/8x00 or EVA3000/5000 storage systems
• Configuring LUNs using HP Command View EVA
• HP Fibre Channel Storage Area Networks (SANs)
• TCP/IP networking
• iSCSI
Related documentation

The following documents provide related information:
• HP StorageWorks EVA iSCSI connectivity quick start instructions for Windows
• HP StorageWorks iSCSI Connectivity Option for EVA release notes
• HP StorageWorks Command View EVA user guide
• HP StorageWorks Interactive Help for Command View EVA
• HP StorageWorks SAN design reference guide
• HP StorageWorks 4400 Enterprise Virtual Array user guide
• HP StorageWorks 4400 Enterprise Virtual Array installation guide
• HP StorageWorks 4000/6000/8000 Enterprise Virtual Array user guide
• HP StorageWorks Enterprise Virtual Array 3000/5000 user guide
• HP StorageWorks Replication Solutions Manager installation guide
Document conventions and symbols

Table 1 provides the conventions and symbols used in this document.

Table 1 Document conventions

Convention                           Element
Blue text: Table 1                   Cross-reference links and e-mail addresses
Blue, underlined text:               Web site addresses
  http://www.hp.com
Bold text                            Keys that are pressed; text typed into a GUI
                                     element, such as a box; GUI elements that are
                                     clicked or selected, such as menu and list items,
                                     buttons, tabs, and check boxes
Italic text                          Text emphasis
Monospace text                       File and directory names; system output; code;
                                     commands, their arguments, and argument values
Monospace, italic text               Code variables; command variables
Monospace, bold text                 Emphasized monospace text, including file and
                                     directory names, system output, code, and text
                                     entered at the command line
WARNING!
Indicates that failure to follow directions could result in bodily harm or death.
CAUTION:
Indicates that failure to follow directions could result in damage to equipment or data.
IMPORTANT:
Provides clarifying information or specific instructions.
NOTE:
Provides additional information.
TIP:
Provides helpful hints and shortcuts.
Rack stability
Observe the following rack stability warning to protect personnel and equipment.
WARNING!
To reduce the risk of personal injury or damage to equipment:
• Extend leveling jacks to the floor.
• Ensure that the full weight of the rack rests on the leveling jacks.
• Install stabilizing feet on the rack.
• In multiple-rack installations, fasten racks together securely.
• Extend only one rack component at a time. Racks can become unstable if more than one component is extended.
HP technical support
Telephone numbers for worldwide technical support are listed on the HP support website: http://www.hp.com/support/.
Collect the following information before calling:
• Technical support registration number (if applicable)
• Product serial numbers
• Product model names and numbers
• Error messages
• Operating system type and revision level
• Detailed questions
For continuous quality improvement, calls may be recorded or monitored.
Subscription service
HP strongly recommends that customers register online using the Subscriber's Choice website: http://www.hp.com/go/e-updates.
Subscribing to this service provides you with e-mail updates on the latest product enhancements, newest driver versions, and firmware documentation updates, as well as instant access to numerous other product resources.
After subscribing, locate your products by selecting Business support and then Storage under Product Category.
Other HP websites
For additional information, see the following HP websites:
• http://www.hp.com
• http://www.hp.com/go/storage
• http://www.hp.com/service_locator
• http://www.docs.hp.com
1 Overview of the EVA and EVA4400 iSCSI connectivity option
This chapter contains the following topics:
• EVA and EVA4400 iSCSI connectivity product description, page 21
• EVA and EVA4400 iSCSI connectivity options, page 22
• EVA and EVA4400 iSCSI connectivity hardware and software support, page 25
• Security, page 28
• Configuring HP StorageWorks Continuous Access EVA and Business Copy, page 28
EVA and EVA4
The EVA fami the iSCSI connectivity option. The connectivity option uses the mpx100 (all EVA models) or mpx100b (EVA4400 only) hardware and HP Command View EVA management software. This option is available from HP or as a eld upgrade to an existing EVA storage system. With this option, iSCSI connectivity to the EVA is provided for servers through a standard Gigabit Ethernet (GbE) network interface controller (NIC).
NOTE:
The EVA iSC EVA3000/5000 storage systems. The EVA4400 iSCSI connectivity option (mpx100b) is supported only with EVA4400 storage systems.
The EVA and EVA4400 iSCSI connectivity options are currently not supported with the EVA4400 with the embedded
Table 2 lists the part numbers required to congure various EVA iSCSI Connectivity Options for a
direct-connection with the iSCSI–Fibre Channel attachment. For a complete list of the components included in each option, see Table 12 on page 40.
Table 2 iSCSI Fibre Channel attachment option part numbers
Part No.
AE324A
ly of Fibre Channel (FC) storage systems is supported for integrated iSCSI connectivity using
SI connectivity option (mpx100) is supported with EVA4000/4100/4400 /6x0 0/8x00 and
Fibre Channel switch. Contact an HP storage representative for the latest support information.
HP StorageW orks EVA iSCSI Connectivity Option
400 iSCSI connectivity product description
Option name
An EVA storage system or order separately to upgrade an existing EVA
Order with: Option includes:
One mpx100 hardware unit and the components necessary to install in any EVA rack.
AE352A
AJ713A
HP StorageW orks EVA iSCSI Upgrade Option (optional)
HP StorageWorks EVA4400 iSCSI Connectivity Option
The AE324A connectivity option and an EVA storage system to provide high-availability multipath connectivity or to upgrade an existing EVA with iSCSI connectivity for multipath
An EVA4400 storage system or to upgrade an existing EVA4400
EVA iSCSI connectivity user guide
Aredundantmpx100 hardware unit for customers who require high availability.
One mpx1 00b hardware unit and the necessary components to install in any EVA rack. Supports up to 16 iSCSI initiators.
21
AJ714A
HP StorageWorks EVA4400 iSCSI Upgrade Option (optional)
The AJ7 13A connectivity option and an EVA4400 storage system to provide high availability multipath connectivity, or order separately to upgrade an existing EVA4400 with iSCSI connectivity for multipath.
A redundant mpx100b hardware unit for customers who require high availability. Supports up to 16 iSCSI initiators.
HP StorageWorks EVA4400
T5471A
iSCSI Connectivity 32 Initiator Upgrade License (LTU)
For use with the EVA4400 and mpx100b only.
(optional)
The follo
wing additional equipment is required to congure the EVA or EVA4400 iSCSI option for fabric
iSCSI–Fibre Channel attachment mode:
B-Series, C-Series, or M-Series Fibre Channel switch
Optical
SFPs
Optical Fibre Channel cables
Contact your HP storage representative for specic switch model support.
EVA and EVA4400 iSCSI connectivity options
An EVA storage system can be congured for simultaneous connectivity to iSCSI and Fibre Channel attached hosts. Support for iSCSI is provided through a d e d icated EVA host port (direct connect) or shared with Fibre Channel through an existing fabric host port (fabric attach).
Figure 1 illustrates the direct connect iSCSI–Fibre Channel a ttachment mode conguration. This
conguration is used with an EVA 4000/4100/6x00/8x00 storage system. Figure 2 illustrates the direct connect iSCSI-Fibre Channel attachment mode for an EVA4400 storage system.
Install one u
pgrade license to increase the number of iSCSI Initiators from 16 to 48. Install asecondupgradelicense to increas iSCSI Ini maximum s
ethenumberof
tiatorsfrom48tothe
upported limit of
150 .
NOTE:
Direct connect mode requires a dedicated host port on each HSV controller. Unused controller host ports require loop-back connectors. See Table 1 2 on page 40 for more information.
Figure 1 Direct connect iSCSI-Fibre Channel attachment mode configuration
Figure 2 EVA4400 direct connect iSCSI-Fibre Channel attachment mode configuration
Figure 3 and Figure 4 illustrate the HP Command View EVA iSCSI deployment configurations. These configurations are used with EVA4000/4100/4400/6x00/8x00 storage systems and allow for HP Command View connectivity without the need for a Fibre Channel switch. Figure 4 shows a redundant configuration using two mpx100/100b units.
Figure 3 HP Command View EVA deployment configuration 1
Figure 4 HP Command View EVA deployment configuration 2
Figure 5 illustrates the EVA8x00 mpx100 and Windows host direct-connect only iSCSI–Fibre Channel attachment mode. This configuration is used with EVA4000/4100/6x00/8x00 storage systems that have all controller host ports configured for direct connect mode.
Figure 5 EVA8x00 mpx100 and Windows host direct-connect only
Figure 6 illustrates the fabric iSCSI–Fibre Channel attachment mode configuration. This configuration is used with EVA3000/5000 and EVA4000/4100/6x00/8x00 storage systems with the mpx100, and with the EVA4400 using the mpx100b.
Figure 6 Fabric iSCSI-Fibre Channel attachment mode (mpx100b supported on EVA4400 only)
EVA and EVA4400 iSCSI connectivity hardware and software support

This section identifies the hardware, devices, and operating systems compatible with the mpx100/mpx100b.

Hardware support
The mpx100/100b data transport

The EVA and EVA4400 iSCSI options support both direct connect and Fibre Channel fabric connectivity through the mpx100/100b to the EVA storage system. Table 3 shows the connectivity attachment mode supported, based on the EVA storage system model.

Table 3 Connectivity attachment mode supported

EVA storage system   Array software          iSCSI–Fibre Channel attachment mode
EVA4400              XCS 09000000 or later   • The mpx100b direct connect (Figure 2 on page 23)(1)
                                             • The mpx100b fabric through a Fibre Channel switch
                                               (Figure 6)
EVA4x00/6x00/8x00    XCS 6.100 or later      • The mpx100 direct connect (Figure 1 on page 22)(1)
                                             • The mpx100 and Windows host direct connect only
                                               (Figure 5 on page 24). All controller host ports
                                               are direct connect.
                                             • The mpx100 fabric through a Fibre Channel switch
                                               (Figure 6)
EVA3000/5000         VCS 4.001 or later      • The mpx100 fabric through a Fibre Channel switch
                                               (Figure 6)

(1) A Fibre Channel switch is not required for the mpx100 and Windows host direct connect or HP Command View EVA iSCSI deployment. See Figure 1, Figure 2, Figure 3, Figure 4, and Figure 5 for more information.

Fibre Channel switch hardware support

The EVA and EVA4400 iSCSI options are supported with most B-Series, C-Series, and M-Series product line switches. The EVA and EVA4400 iSCSI connectivity options are currently not supported with the EVA4400 embedded Fibre Channel switch. Contact an HP storage representative for the latest information about support for specific switch models.

Storage systems

The mpx100 is supported with the EVA4000/4100/6x00/8x00 and EVA3000/5000 storage systems. The mpx100b is supported only with the EVA4400.
Software support requirements

Management software requirements
• HP Command View 6.0.2 or later is required to configure iSCSI LUNs.
• HP Command View 8.0 or later is required for the mpx100/100b running firmware version 2.4.0.0 or later.
• The HP StorageWorks mpx Manager graphical user interface (GUI) is required for mpx100/100b management.

Multipath software requirements

Table 4 lists the operating system multipathing requirements for EVA storage systems.

Table 4 Multipathing software requirements

Operating system                 Storage system        Controller code version    Comments
Microsoft Windows 2008,          EVA4x00/6x00/8x00     6.100 or later,            Windows MPIO
Microsoft Windows 2003 SP1,                            09000000 or later
Microsoft Windows 2003 R2,       EVA3000/5000          4.001 or later             Fabric attach only
Microsoft Windows 2003 SP2
Microsoft Windows XP             EVA4x00/6x00/8x00     6.100 or later,            N/A
Professional SP2, SP1                                  09000000 or later
Apple Mac OS X                   EVA4x00/6x00/8x00     6.100 or later,            N/A
                                                       09000000 or later
Linux                            EVA4x00/6x00/8x00     6.100 or later,            Device Mapper
                                                       09000000 or later
                                 EVA3000/5000          4.001 or later
Solaris 10 Update 4              EVA4x00/6x00/8x00     6.100 or later,            Solaris MPxIO
                                                       09000000 or later
VMware ESX 3.5                   EVA4400,              6.100 or later,            VMware MPxIO
                                 EVA4x00/6x00/8x00     09000000 or later
OpenVMS                          EVA4x00/6x00/8x00     6.100 or later, 09000000   Native multipath support
                                                       or later (mpx100 only)
                                 EVA3000/5000          4.001 or later
                                 (active/active)
Figure 7 illustrates the high-availability multipath direct connect iSCSI–Fibre Channel attachment mode configuration. This configuration is used with the EVA4000/4100/6x00/8x00 using the mpx100 and with the EVA4400 using the mpx100b.
Figure 7 Multipath direct connect iSCSI-Fibre Channel attachment mode configuration (mpx100b supported on EVA4400 only)
Figure 8 illustrates the high-availability multipath fabric iSCSI–Fibre Channel attachment mode configuration. This configuration is used with the EVA4000/4100/6x00/8x00 using the mpx100 and with the EVA4400 using the mpx100b.
Figure 8 Multipath fabric iSCSI-Fibre Channel attachment mode configuration (mpx100b supported on EVA4400 only)
NOTE:
Dual NICs and dual IP fabrics are supported for complete redundancy.
Figure 9 illustrates the high-availability multipath fabric iSCSI–Fibre Channel attachment mode configuration with four iSCSI controller host ports. This configuration is used with EVA4000/4100/6x00/8x00 storage systems.
Figure 9 EVA4/6/8x00 fabric with four iSCSI–Fibre Channel controller host ports (Note: Zoning is required to limit access to the targets shown in the table.)

Security
The mpx100/100b supports Challenge Handshake Authentication Protocol (CHAP) at connection setup. CHAP is a security protocol that includes support for both the bidirectional (mutual) authentication and the one-way (target) authentication options. You can configure and set up CHAP in the mpx100/100b. The target mpx100/100b can have its own unique password for the one-way CHAP option. The initiator itself can have its own unique password for the bidirectional CHAP option with the mpx100/100b target. See "CHAP restrictions" on page 203 for more information.
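As a brief illustration, CHAP secrets are entered on the mpx100/100b with the Set CHAP command listed in Appendix A. The session below is only a sketch: the admin-mode step matches the Admin command in Appendix A, but the interactive prompts shown are illustrative assumptions, not verbatim mpx100/100b output.

    mpx100 #> admin start
    Password : ********
    mpx100 (admin) #> set chap
      (the command then prompts for the initiator entry and the
       CHAP secret to associate with it)

Verify the result with the Show CHAP command before enabling CHAP on the initiator side.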
Configuring HP StorageWorks Continuous Access EVA and Business Copy

Currently supported EVA software applications for Fibre Channel hosts, such as HP StorageWorks Continuous Access, Business Copy (BC), Storage System Scripting Utility (SSSU), and Replication Solutions Manager (RSM), are supported with the EVA iSCSI connectivity option. The limitations of using some of these applications on iSCSI hosts are discussed in Chapter 2, page 29.
2 Configuration rules and guidelines
This chapter discusses the rules and guidelines for the HP StorageWorks EVA and EVA4400 iSCSI connectivity option. This chapter contains the following topics:
• Operating system rules and guidelines, page 30
• EVA storage system rules and guidelines, page 34
• Fibre Channel switch/fabric rules and guidelines, page 36
• Command View EVA management rules and guidelines, page 37
• IP network rules and guidelines, page 37
EVA and EVA4400 iSCSI Connectivity option

This section contains information about limits, rules, and guidelines for the EVA and EVA4400 iSCSI Connectivity option, including:
• EVA and EVA4400 iSCSI connectivity option architectural design limits
• EVA and EVA4400 iSCSI connectivity option supported maximums
• General EVA and EVA4400 iSCSI connectivity rules
• Operating system rules and guidelines
EVA and EVA4400 iSCSI connectivity option architectural design limits
• Maximum of 256 connections per iSCSI port
• Maximum of 16 Fibre Channel targets (a target connected to both Fibre Channel (FC) ports is only counted once)

NOTE:
The architectural design limits listed do not constitute supported configurations.
EVA and EVA4400 iSCSI connectivity option supported maximums

Table 5 shows the supported mpx100 maximums.

Table 5 Supported mpx100 maximums

Description                           Maximum per EVA or EVA4400 iSCSI connectivity solution
Hardware
  EVA storage system                  1
  The mpx100/100b                     2
Configuration
  Total number of iSCSI Initiators    1 mpx100—150 (single-path or multipath)
                                      1 mpx100b—16 (base), 48 (license upgrade 1),
                                      150 (license upgrade 2)
                                      Note that the mpx100/100b can serve both single-path
                                      and multipath LUNs concurrently.
  Total number of iSCSI LUNs          150 LUNs maximum
  Total number of iSCSI targets       8 (see Figure 9 on page 28)
  per initiator
General EVA and EVA4400 iSCSI connectivity rules

NOTE:
The EVA iSCSI connectivity option (mpx100) is supported with EVA4000/4100/4400/6x00/8x00 and EVA3000/5000 storage systems. The EVA4400 iSCSI connectivity option (mpx100b) is supported only with EVA4400 storage systems.

• Each EVA storage system can have a maximum of two mpx100 or two mpx100b bridges.
• Each EVA controller host port can connect to a maximum of two mpx100/100b FC ports.
• Both mpx100/100b FC ports can connect only to the same EVA storage system.
• Each mpx100/100b FC port can connect to a maximum of one EVA port.
• Each iSCSI Initiator can have a maximum of eight mpx100/100b iSCSI targets.
Operating systems supported

Table 6 provides the operating system rules and guidelines.

Table 6 Operating systems

Operating system                Version                                        Cluster support
HP OpenVMS                      8.3–H1 (IA64) (native iSCSI driver)            OpenVMS Clusters
                                (mpx100 only)
Apple Mac OS X                  10.5.3, 10.5.2, 10.4.11, 10.4.10 (Power PC     None
                                and Intel Power Mac G5, Xserve, MacPro)
Microsoft Windows Server 2003   SP2, SP1, R2 (x86 32/64-bit, IA64)             None
Microsoft Windows XP            Professional Workstation SP1, SP2              None
                                (x86 32/64-bit, IA64)
Microsoft Windows Server 2008,                                                 None
Microsoft Windows Server 2008
Server Core
Linux                           Red Hat Linux 5 update 1; Red Hat Linux 4      None
                                update 6, update 5, update 4; SUSE Linux 10
                                SP1, 10; SUSE Linux 9 SP4, SP3
                                (x86 32-bit/64-bit, IA-64)
Sun Solaris 10                  Update 5 and update 4 (Sparc and x86)          None
                                (EVA4x00/6x00/8x00 only)
VMware 3.5                      Supported with the following guest operating   None
                                systems: Windows 2003 SP2, SP1; Red Hat 5.1,
                                5.0, 4 update 6, update 5; SUSE 10 SP1, 10,
                                9 SP4, SP3
Initiator rules and guidelines

This section describes the following iSCSI Initiator rules and guidelines:
• "iSCSI Initiator rules and guidelines" on page 31
• "VMware iSCSI Initiator rules and guidelines" on page 31
• "Windows iSCSI Initiator rules and guidelines" on page 32
• "OpenVMS iSCSI Initiator rules and guidelines" on page 33
• "Apple Mac OS X iSCSI Initiator rules and guidelines" on page 32
• "Linux iSCSI Initiator rules and guidelines" on page 32
• "Solaris iSCSI Initiator rules and guidelines" on page 33
iSCSI Initiator rules and guidelines

This section describes iSCSI Initiator rules and guidelines.
• iSCSI Initiators and mpx100/100b iSCSI ports can reside in different IP subnets. This requires setting the mpx100/100b iSCSI gateway feature (see the sketch after this list). See "Configuring the mpx100/100b" on page 49 and "Command line interface" on page 141 for more information.
• There can be a maximum of eight mpx100/100b iSCSI targets per iSCSI Initiator.
• Both single path and multipath initiators are supported on the same mpx100/100b.
• Fibre Channel LUNs and iSCSI LUNs are not supported on the same server.
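As a minimal sketch of the gateway setup named in the first rule, the iSCSI port attributes are set from an Admin session in the CLI; the exact prompts vary by firmware, so treat this as an outline rather than a literal transcript:

mpx100 #> admin start
Password: config
mpx100 (admin) #> set iscsi 1
(follow the prompts; the port IP address, subnet mask, and gateway address are among the prompted attributes)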
VMware iSCSI Initiator rules and guidelines

Supports:
• Native iSCSI software initiator in VMware ESX 3.5
• Guest OS SCSI Controller, LSI Logic and/or BUS Logic (BUS Logic only with SUSE Linux)
• ESX server's native multipath solution, based on NIC teaming in the server
• Guest OS boot from mpx100/100b iSCSI device
• VMFS file system data stores and raw device mapping for guest OS virtual machines
• Multi-initiator access to the same LUN via VMFS cluster file system
• VMware ESX server 3.5 supports multipath, using ESX server's native multipath solution based on NIC teaming

Does not support:
• Hardware iSCSI HBA
• BUS Logic Guest OS SCSI controller with Windows and/or Red Hat Linux
• EVA3000/5000

Supported by the EVA iSCSI option with VMware:
• NIC teaming
• VMware native iSCSI software initiator
See “Installing and upgrading EVA iSCSI connectivity” on page 39.
Network teaming
The EVA iSCSI option supports NIC teaming with VMware.
iSCSI Initiator software

The EVA iSCSI option supports the VMware native iSCSI software Initiator. See "Installing and upgrading EVA iSCSI connectivity" on page 39 for information on version support.
Windows iSCSI Initiator rules and guidelines

Windows requirements
• Microsoft iSCSI Initiator versions 2.06, 2.07
• The TCP/IP parameter Tcp1323Opts must be entered in the registry with a value of DWORD=2 under the registry setting HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters (an example command follows the caution below).
NOTE:
This parameter is automatically set by the HP StorageWorks iSCSI Connectivity Option for Enterprise Virtual Array Windows software kit. This kit is available at:
http://h18006.www1.hp.com/products/storageworks/evaiscsiconnect/index.html
CAUTION:
Using the Registry Editor incorrectly can cause serious problems that may require reinstallation of the operating system. Back up the registry before making any changes. Use the Registry Editor at your own risk.
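If you set the parameter by hand, a single reg command run from a Windows command prompt writes the value without opening the Registry Editor interactively; this is an illustration rather than part of the HP kit:

reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v Tcp1323Opts /t REG_DWORD /d 2 /f

Windows typically requires a reboot before changes to TCP/IP parameters take effect.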
Windows iSCSI Initiator multipath requirements
The following system requirements must be met in Windows:
The iSCSI Initiator multipath in Windows supports the Microsoft iSCSI MPIO device.
The iSCSI Initiator multipath in Windows supports multipath on single or dual mpx100's/100b's.
The iSCSI Initiator multipath in Windows does not support:
• SecurePath
• Multipathing with EVA3000/5000 GL 3.x active/passive storage systems
• Windows XP Professional multipath
Apple Mac OS X iSCSI Initiator rules and guidelines

Firmware and hardware required:
• Power PC and Intel Power Mac G5, Xserve, MacPro
• ATTO Mac driver version 3.10

Supports:
• iSNS
• CHAP

Does not support:
• Multipathing

iSCSI Initiator operating system considerations:
• Configure the Mac host mode iSCSI Initiator setting through HP Command View 6.0.2 or later. See "HP StorageWorks mpx manager for Windows" on page 57 for more information on HP StorageWorks mpx Manager.
• Host mode setting is Apple Mac OS X.
Linux iSCSI Initiator rules and guidelines

• Fibre Channel LUNs and iSCSI LUNs are not supported on the same server.

Does not support:
• NIC bonding

iSCSI Initiator operating system considerations:
Configure the Linux host mode iSCSI Initiator setting through the mpx Manager GUI, CLI, or through HP Command View 6.0.2 or later. See "HP StorageWorks mpx manager for Windows" on page 57 for more information (a discovery sketch follows).
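As a discovery sketch for a Linux host running the open-iscsi tools (the mpx100 iSCSI port address 10.6.1.10 is hypothetical):

# Discover the mpx100/100b iSCSI targets, then log in to the discovered nodes
iscsiadm -m discovery -t sendtargets -p 10.6.1.10:3260
iscsiadm -m node -p 10.6.1.10:3260 --login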
Solaris iSCSI Initiator rules and guidelines

Configure the host mode iSCSI Initiator setting through the mpx Manager GUI, CLI, or through HP Command View. See "HP StorageWorks mpx manager for Windows" on page 57 for more information.

Operating system considerations:
• For HP Command View 7.0 and earlier, Solaris iSCSI initiators should be set to Linux host mode.
• For HP Command View 8.0 and later, Solaris iSCSI initiators should be set to Solaris host mode.

Supports:
• Solaris 10, update 4 and update 5, iSCSI Initiator only
• Multipath
• Any native Solaris 1 GbE NIC

Does not support:
• TOE or iSCSI HBA
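On Solaris 10, discovery uses the bundled iscsiadm utility; a sketch, again with a hypothetical mpx100 port address:

# Register the mpx100/100b as a discovery address and enable SendTargets discovery
iscsiadm add discovery-address 10.6.1.10:3260
iscsiadm modify discovery --sendtargets enable
# Create device nodes for the discovered iSCSI LUNs
devfsadm -i iscsi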
Solaris iSCSI Initiator multipath rules and guidelines

Supports:
• Solaris 10 MPxIO only
• Multipath on single or dual mpx100's/100b's
• MPxIO Symmetric option only
• MPxIO round-robin
• MPxIO auto-failback

Does not support:
• LUN 0
• SecurePath
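MPxIO for the Solaris iSCSI driver is controlled by the mpxio-disable property; a minimal sketch, assuming the stock /kernel/drv/iscsi.conf location (a reboot is required after the change):

# /kernel/drv/iscsi.conf: enable MPxIO for iSCSI devices
mpxio-disable="no";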
OpenVMS iSCSI Initiator rules and guidelines
The following lists OpenVMS iSCSI Initiator rules and guidelines:
OpenVMS hardware requirements
The OpenVMS iSCSI Initiator has the following hardware requirements:
Architecture
• The software supplied in the iSCSI Initiator Technology Demonstration Kit (TDK) is supported
on both Alpha and I64 architectures.
NOTE:
Because V8.3-1H1 is an I64-only release, Alpha support will not be made available until the next Alpha release.
iSCSI Targets
• The only supported iSCSI target is the HP StorageWorks EVA iSCSI connectivity option (mpx100). The mpx100 provides iSCSI access to EVA-based storage.
• No other iSCSI storage targets are currently supported.
NICs
The platform on which the initiator is installed must contain at least one supported GbE NIC.
Network switches
For best performance, the network switches to which the iSCSI-related NICs are connected should support:
• The maximum link speed of the NICs
• Jumbo frames
• VLANs
OpenVMS software requirements
The OpenVMS iSCSI Initiator has the following software requirements:
OpenVMS version—The OpenVMS Software-Based iSCSI Initiator TDK is available on OpenVMS
V8.3-1H1 and later. Since OpenVMS V8.3-1H1 is an I64-only release, Alpha support will be made available with the next Alpha release.
TCP/IP—The OpenVMS Software-Based iSCSI Initiator utilizes HP TCP/IP Services for OpenVMS.
Third-party TCP/IP products are not supported and will not work with the initiator.
OpenVMS iSCSI rules and guidelines
Operating system considerations:
EVA3000/5000 active/active support only
EVA4000/4100/4400/6x00/8x00 supported (mpx100 only)
Supported:
iSNS
Multipath
OpenVMS V8.3-1H1 or later
Not supported:
CHAP
Header and Data Digests
iSCSI boot
iSCSI Initiator software
The EVA iSCSI option supports the OpenVMS native iSCSI software Initiator. See OpenVMS software
requirements, page 34 for information on version support.
EVA storage system rules and guidelines
Table 7 identifies the EVA storage system rules and guidelines.
Table 7 EVA configuration table (mpx100/100b)

EVA4x00/6x00/8x00:
• Microsoft Windows: XCS 6.100 or later, or XCS 09000000 or later
• Apple Mac OS X: XCS 6.100 or later, or XCS 09000000 or later
• HP OpenVMS: XCS 6.100 or later, or XCS 09000000 or later (mpx100 only)

EVA3000/5000:
• Microsoft Windows: VCS 4.001 or later
• Apple Mac OS X: VCS 4.001 or later
• HP OpenVMS: VCS 4.001 or later

EVA4x00/6x00/8x00:
• Sun Solaris: XCS 6.100 or later, or XCS 09000000 or later
• VMware ESX: XCS 6.100 or later, or XCS 09001000 or later

For each combination, the table qualifies support for one mpx100, a second mpx100 (see note 1), fabric attach, and direct connect; the exceptions called out elsewhere in this guide are that Apple Mac OS X does not support a second (multipath) mpx100 and that direct connect is not supported with the EVA3000/5000.

1. For configurations that include both single-path and multipath operating systems, single-path operating systems are supported for single-path attachment on either (but not both) multipath mpx100s.
An EVA storage system configuration is considered to have two redundant HSV controllers. The following list details the limitations of an EVA storage system by failover mode configuration (a zoning sketch follows the list):
• In a fabric connect configuration, a maximum of two mpx100's/100b's can be zoned with one EVA storage system.
• In a fabric connect configuration, a maximum of one EVA storage system can be zoned with a maximum of two mpx100's/100b's.
• In a direct connect configuration, a maximum of two mpx100's/100b's are supported to one EVA storage system.
• An EVA storage system can present LUNs to iSCSI Initiators and Fibre Channel hosts concurrently.
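As an illustration of the fabric zoning rules above, a B-Series zone containing one mpx100 FC port and one EVA controller host port could be built as follows; the WWPNs and the configuration name san_cfg are hypothetical, and the zone configuration is assumed to already exist:

zonecreate "mpx100_eva", "21:00:00:c0:dd:00:00:01; 50:00:1f:e1:50:00:2f:7d"
cfgadd "san_cfg", "mpx100_eva"
cfgenable "san_cfg"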
HP StorageWorks EVA storage system software
The EVA iSCSI connectivity option is supported with current EVA storage software applications such as HP StorageWorks Continuous Access, Business Copy, SSSU, and Replication Solutions Manager. There are some restrictions with iSCSI hosts when using the EVA iSCSI connectivity option, as described in the following sections.
Supported features for iSCSI hosts

For HP StorageWorks Business Copy, iSCSI hosts are supported with the following basic Business Copy features:
• Snapshots of LUNs presented to iSCSI hosts
• Snapclones (normal or 3-phase) of LUNs presented to iSCSI hosts
• Instant Restore from snapclone to original source
• iSCSI hosts can access and write to EVA snapshots
• iSCSI hosts can access and write to EVA snapclones
• HP Command View, SSSU, or RSM can be used to create snapshots manually, or automatically on a schedule using RSM
• CLI support to enter replication commands from iSCSI hosts
Features not supported for iSCSI hosts
Advanced replication features for LUNs presented to iSCSI hosts through the mpx100 that require a host agent on the iSCSI Initiator are not supported, as there are currently no iSCSI replication host agents available. The following features are not supported:
• Mounting and unmounting LUNs via a host agent. Mounting and unmounting LUNs must be done manually.
• Accessing the host's view of the storage, such as viewing an F: drive from a host
• Deploying host agents to allow customers to launch a script on the iSCSI host
Table 8 lists the support available for EVA storage system software when using the EVA iSCSI connectivity
option.
Table 8 Support for EVA storage system software with iSCSI connectivity

HP StorageWorks storage product: mpx100 direct connect or mpx100 fabric attach
• HP StorageWorks Business Copy: iSCSI hosts supported with basic Business Copy; Fibre Channel hosts supported with full Business Copy (operating-system dependent)
• HP StorageWorks Replication Solutions Manager, SSSU: iSCSI and Fibre Channel hosts supported
• HP StorageWorks Continuous Access EVA: iSCSI and Fibre Channel hosts LUN remote replication supported
• HP StorageWorks Continuous Access EVA with HP supported FCIP gateways: iSCSI and Fibre Channel hosts LUN remote replication supported
Fibre Channel switch/fabric rules and guidelines

In fabric-attachment mode, the mpx100/100b is supported with Fibre Channel switches (Table 9). For the minimum switch firmware version, contact your HP representative.
Table 9 Fibre Channel switch/fabric requirements

Switch series: Model/Firmware level
• B-Series: 6.x, 5.x
• C-Series: 3.x
• M-Series: All
HP Command View EVA management rules and guidelines

The following rules and guidelines for HP Command View EVA are applicable when using the EVA and EVA4400 iSCSI connectivity option:
• Supports HP Command View EVA iSCSI connectivity (Fibre Channel switch not required). See Figure 3, Figure 4, and the HP StorageWorks Command View EVA iSCSI deployment whitepaper at: http://h71028.www7.hp.com/ERC/downloads/4AA2-0607ENA.pdf
• A maximum of two mpx100's/100b's can be discovered by an EVA storage system.
• HP Command View EVA 8.0 or later is required for the EVA4400 and the mpx100b.
• HP Command View EVA manages the mpx100/100b out of band (IP) through the mpx100/100b Mgmt IP port. The HP Command View EVA application server must be on the same IP network with the mpx100/100b Mgmt IP port.
• The HP StorageWorks mpx100/100b iSCSI Initiator or iSCSI LUN masking information does not reside in the HP Command View EVA database. All iSCSI Initiator and LUN presentation information resides in the mpx100.
• The default iSCSI Initiator EVA host mode setting is Windows. The iSCSI Initiator host mode setting for Linux, Mac, Solaris, and VMware may be configured with HP Command View.
Supported IP network adapters
Table 10 lists the IP network adapters supported by EVA iSCSI connectivity.

Table 10 Supported IP network adapters

Operating system: Supported IP network adapters
• HP OpenVMS: All standard GbE NICs/ASICs supported by HP for OpenVMS (1)
• Apple Mac OS X: All standard GbE NICs/ASICs supported by Apple
• Linux: All standard GbE NICs/ASICs supported by HP for Linux; HP NC3xxx (Red Hat 4, SUSE 9 only) (1)
• Microsoft Windows 2008, 2008 Server Core, 2003, and Windows XP: All standard GbE NICs/ASICs supported by HP for Windows 2008, 2003, and Windows XP; HP NC3xx TOE with MS scalable networking pack; QLA4052C/QLE4062C/QMH4062C. For Windows 2003 only: HP NC510x; Alacritech SES2002ET, SES2102ET, SES2001XT, SES2104ET with MS scalable networking pack, 1000 TOE NIC support with native driver
• Sun Solaris: All standard GbE NICs/ASICs supported by Sun/HP for Sun; HP NC3xx
• VMware: All standard GbE NICs/ASICs supported by HP for VMware EVA iSCSI connectivity

1. TOE NIC features are not supported.
NOTE:
For further information on Alacritech adapters, visit the HP Supplies and Accessories website:
http://h30094.www3.hp.com/searchresults.asp?search=keyword&search_field=description&search_criteria=alacritech&Image1.x=9&Image1.y=10
NOTE:
For further information on Qlogic adapters, visit www.qlogic.com.
IP network requirements

HP recommends the following:
• Network protocol: TCP/IP IPv6, IPv4 Ethernet 1000 Mb/s.
• IP data: LAN/VLAN support with less than 10 ms latency (a quick latency check appears below). Maximum of 1 VLAN per iSCSI port.
• A dedicated IP network for iSCSI data.
• IP management: LAN/WAN supported.
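A quick way to sanity-check the latency recommendation from an initiator host is an ICMP round-trip measurement; the address below is a hypothetical mpx100 iSCSI port:

ping -c 10 10.6.1.10

(Use ping -n 10 10.6.1.10 on Windows.) Round-trip times consistently under 10 ms indicate the LAN meets the recommendation.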
3 Installing and upgrading EVA iSCSI connectivity
This chapter contains information about the following topics:
• Verify your system requirements, page 39
• Verify your installation type and components, page 39
• EVA and EVA4400 iSCSI connectivity installation, page 40
• Rack mount the mpx100/100b, page 43
• Connect the mpx100/100b to an IP switch, page 44
• Start the mpx100/100b, page 45
• Set the mpx100/100b management port to use HP StorageWorks Command View EVA, page 45

To install your EVA iSCSI connectivity option, complete the procedures in "Verify your system requirements" on page 39 through "Set the mpx100/100b management port to use HP StorageWorks Command View EVA" on page 45 in the order shown, depending upon your configuration.
Verifying your system requirements

Verify that your system has the hardware required for installing the HP StorageWorks EVA and EVA4400 iSCSI connectivity options:
• Server: Microsoft Windows Server 2008/2003, XP Professional, Apple Mac OS X, Linux Red Hat or SUSE, Sun Solaris, VMware, or HP OpenVMS (mpx100 only) server
• Storage system: EVA4000/4100/4400/6x00/8x00 or EVA3000/5000 storage system
• Connectivity: B-Series, C-Series, or M-Series Fibre Channel switch for HP Command View EVA connectivity only

NOTE:
For configurations that use direct connect mpx100/100b and direct connect Windows hosts, HP supports HP Command View EVA connectivity without a Fibre Channel switch. See Figure 5 on page 24.

• Network and cables: A GbE IP Ethernet network and Cat 5e or Cat 6 network cables
• For the mpx100/100b-to-EVA fabric attach: a B-Series, C-Series, or M-Series Fibre Channel switch, SFPs, and optical Fibre Channel cables

The supported operating systems are specified in Table 6 on page 30. See "Operating system rules and guidelines" on page 30 for detailed information about supported operating systems.
Verify your installation type and components
Table 11 describes the iSCSI installation types and attachment modes for the HP StorageWorks EVA and
EVA4400 iSCSI connectivity option.
Table 11 Installation information

Installation type: Fibre Channel attachment mode
• Factory installed with the Enterprise Virtual Array (EVA): Direct connect (Figure 1 on page 22 and Figure 2 on page 23); Fabric (Figure 6 on page 25)
• Field upgrade iSCSI option for an existing EVA: Direct connect (Figure 1 on page 22 and Figure 2 on page 23); Fabric (Figure 6 on page 25)
• Field upgrade option for multipathing capability for an existing EVA with an iSCSI option (1): Direct connect (Figure 7 on page 27); Fabric (Figure 8 on page 27)

1. Adds a second mpx100.
In addition to the configurations listed in Table 11, the EVA8x00 is supported with up to four iSCSI-Fibre Channel controller host ports, shown in Figure 5 on page 24 and Figure 9 on page 28.
Table 12 lists the installation components required for the iSCSI option.

Table 12 Installation components

Option: Installation components
• HP StorageWorks EVA and EVA4400 iSCSI connectivity option, direct connect, factory installed: N/A
• HP StorageWorks EVA and EVA4400 iSCSI connectivity option, fabric: Fibre Channel SFPs and optical cables. See the HP StorageWorks EVA iSCSI Connectivity Option quickspec at: http://h18006.www1.hp.com/products/storageworks/evaiscsiconnect/index.html
• HP StorageWorks EVA and EVA4400 iSCSI connectivity option (parts list): 1 mpx100 or mpx100b; 1 EULA; 1 Quick install instructions; 1 ReadMeFirst; 1 RS-232 port converter; 1 Blank panel; 1 Bezel assembly, 1U; 1 C-Shelf, 1U; 1 BKT slide 1U shelf, left; 1 BKT slide 1U shelf, right; 3 Nut U-Nut 10-32 0.615L x 0.520W CSZ; 5 Screw, SEMS 10-32 Pan 0.625 XRCS; 9 Screw, SEMS 10-32 Pan 0.325 XRCS; 5 Nut, KEPs 10-32, 0.375 AF CSZ EXT; 2 Cable assembly, 4G copper, FC, SFP 2.0m; 1 PDU cord 2.4m (C13-C14); 2 EVA host port Fibre Channel loopback connectors
• HP StorageWorks EVA and EVA4400 iSCSI Upgrade Option (parts list): 1 mpx100 or mpx100b; 1 EULA; 1 RS-232 port converter; 1 Quick install instructions; 1 ReadMeFirst
Installing EVA and EVA4400 iSCSI connectivity
Select one of the following procedures, depending on your iSCSI option.
To install the HP StorageWorks EVA or EVA4400 iSCSI option, select the appropriate installation procedure:
• Fabric attach—Reconfigure the factory-installed EVA or EVA4400 iSCSI connectivity option to fabric attachment mode
• Field direct connect—HP StorageWorks EVA or EVA4400 iSCSI connectivity option with direct connect attachment mode
• Field fabric attach—HP StorageWorks EVA or EVA4400 iSCSI connectivity option with fabric attachment mode
• Multipath direct connect—HP StorageWorks EVA or EVA4400 iSCSI upgrade option for multipathing capability and direct connect attachment mode
• Multipath fabric attach—HP StorageWorks EVA or EVA4400 iSCSI upgrade option with multipathing capability and fabric attachment mode
Reconfiguring the factory-installed EVA or EVA4400 iSCSI connectivity option to fabric attach mode

Figure 1 on page 22 illustrates the factory-installed EVA iSCSI connectivity option and Figure 6 on page 25 illustrates the fabric iSCSI–Fibre Channel attachment mode configuration.

To install fabric attach:
1. Remove the two black Fibre Channel cables connecting the mpx100/100b to the HSV controllers.
2. Insert the SFPs into ports FC1 and FC2 on the mpx100/100b, and into an available FP port on each HSV controller. Then insert SFPs into four of the available ports on the Fibre Channel switches.
3. Connect one end of an orange Fibre Channel cable to the FC1 port on the mpx100/100b. Connect the other end of the cable to any available SFP port on the Fibre Channel switch.
4. Connect one end of an orange Fibre Channel cable to the FC2 port on the mpx100/100b. Connect the other end of the cable to any available SFP port on the Fibre Channel switch.
5. Connect one end of an orange Fibre Channel cable to any available port on the Fibre Channel switch. Connect the other end of the orange Fibre Channel cable to the available port on the top HSV controller.
6. Connect one end of an orange Fibre Channel cable to any available port on the Fibre Channel switch. Connect the other end of the cable to the available FP port on the bottom HSV controller.
7. Set the HSV controller ports to fabric topology:

NOTE:
For the EVA4400, see the product installation documentation.

a. Press the Down Arrow key on the EVA front panel. System Information is displayed.
b. Press the Right Arrow key. Versions is displayed.
c. Press the Down Arrow key. Host Port Config is displayed.
d. Press the Right Arrow key. Fabric is displayed.
e. Press Enter.
f. Press the Down Arrow key until the port that you want to change to Fabric Connect Mode is displayed.
g. Press Enter.
h. Repeat the process for the other controller and then reboot the storage system.
8. Continue with "Connect the mpx100/100b to an IP switch" on page 44.
Field direct connect—HP StorageWorks EVA or EVA4400 iSCSI connectivity option with direct connect attachment mode

Figure 1 illustrates the direct connect iSCSI–Fibre Channel attachment mode configuration.

NOTE:
This option is supported only on the EVA4400 running XCS 9.000 or later firmware, or EVA4x00/6x00/8x00 running XCS 5.100 or later firmware.

To install field direct connect with direct connect attachment mode:
1. Rack mount the mpx100/100b. (See "Rack mount the mpx100/100b" on page 43.)
2. Connect one end of the black Fibre Channel cable into the FC1 port of the mpx100/100b
(Figure 1 on page 22).
3. Connect the other end of the black Fibre Channel cable into an available FP port of the HSV top
controller.
4. Connect one end of the black Fibre Channel cable into the FC2 port of the mpx100/100b
(Figure 1 on page 22).
5. Connect the other end of the black Fibre Channel cable into an available FP port of the HSV
bottom controller.
6. Install the supplied port loopback connectors on any unused HSV controller host ports.
7. Set the HSV controller ports to direct connect topology:
NOTE:
For the EVA4400, see the product installation documentation.
a. Press the Down Arrow key on the EVA front panel. System Information is displayed.
b. Press the Right Arrow key. Versions is displayed.
c. Press the Down Arrow key. Host Port Config is displayed.
d. Press the Right Arrow key. Fabric is displayed.
e. Press the Down Arrow key. Direct Connect is displayed.
f. Press the Right Arrow key. Port 1 is displayed.
g. Press the Down Arrow key until the port that you want to change to Direct Connect Mode is displayed.
h. Press Enter.
i. Repeat the process for the other controller.
8. Continue with “Connect the mpx100/100b to an IP switch” on page 44.
Field fabric attach—HP StorageWorks EVA or EVA4400 iSCSI connectivity option with fabric attach mode

Figure 7 on page 27 illustrates the fabric iSCSI–Fibre Channel attachment mode configuration.

To install the iSCSI connectivity option with fabric attachment mode:
1. Rack mount the mpx100/100b. (See "Rack mount the mpx100/100b" on page 43.)
2. Install SFPs in the mpx100/100b ports (FC1, FC2) and the top and bottom controller FP ports.
3. Connect one end of the orange Fibre Channel cable to the FC1 SFP port of the mpx100/100b. Connect the other end of the orange Fibre Channel cable to any available SFP port on the Fibre Channel switch.
4. Connect one end of the orange Fibre Channel cable to the FC2 SFP port of the mpx100/100b.
Connect the other end of the orange Fibre Channel cable to any available SFP port on the Fibre Channel switch.
5. Connect one end of the orange Fibre Channel cable to any available SFP port on the Fibre
Channel switch. Connect the other end of the orange Fibre Channel cable to an available FP port on the top HSV controller.
6. Connect one end of the orange Fibre Channel cable to any available SFP port on the Fibre
Channel switch. Connect the other end of the orange Fibre Channel cable to an available FP port on the bottom HSV controller.
7. Continue with “Connect the mpx100/100b to an IP switch” on page 44.
Multipath direct connect—HP StorageWorks EVA or EVA4400 iSCSI upgrade option for multipathing capability and direct connect attachment mode

Figure 7 on page 27 illustrates the EVA iSCSI direct connect mode for multipathing option.

To install the upgrade option with direct connect attachment mode:
1. Insert the new mpx100/100b into the rack shelf next to the existing mpx100/100b.
2. Remove the black Fibre Channel cable from the existing mpx100 FC2 port and connect it to the new mpx100 FC1 port.
3. Continue with "Connect the mpx100/100b to an IP switch" on page 44.
Multipath fabric attach—HP StorageWorks EVA or EVA4400 iSCSI upgrade option with multipathing capability and fabric attach mode
Figure 8 on page 27 illustrates the EVA iSCSI fabric attach mode for multipathing option.
To install the upgrade option with multipath fabric attach mode:
1. Insert the new mpx100/100b into the rack shelf next to the existing mpx100/100b.
2. Remove the orange Fibre Channel cable from the existing mpx100/100b FC2 port and Fibre
Channel switch port.
3. Connect one end of the cable into the new mpx100/100b FC1 port and the other end into the
second Fibre Channel switch port.
4. Disconnect the orange Fibre Channel cable from the top controller FP1 at the Fibre Channel
switch end and connect it to the second Fibre Channel switch port.
5. Disconnect the orange Fibre Channel cable from the bottom controller FP2 at the Fibre Channel
switch end and connect it to the first Fibre Channel switch port.
6. Continue with “Connect the mpx100/100b to an IP switch” on page 44.
Rack mounting the mpx100/100b

You will need one Phillips head screwdriver. To rack mount the mpx100/100b:

NOTE:
The rear of the C-Shelf is the end without the knurled thumbscrews.
1. Assemble two slide brackets (right and left) on the back ends of the C-Shelf, using the four-hole nut plates.
a. Mount the C-Shelf with the open side up.
b. Fit the slide bracket along the 1U side at the back of the C-Shelf with its screw hole tab pointing outboard and its lip supporting the C-Shelf. There is a right-hand slide bracket and a left-hand slide bracket.
c. Place the nut plate outside the right-hand slide bracket with the dimpled threaded holes pointing outboard.
d. Place two screws (10-32 Pan 0.625 XRCS) through the two holes at the back of the C-Shelf, through the slide plate slots, and loosely into the front two threaded holes of the nut plate.
e. Repeat with the opposite hand slide bracket.
2. Install the C-Shelf assembly into the rack.
a. Locate a clear 1U area space within the rack.
NOTE:
The 1U space in a rack includes three rail mounting holes; these rack holes, however, are not evenly spaced. For best installation, the C-Shelf can be centered in a 1U space. To locate the center, find a hole that is 5/8" on center from the holes immediately above and below. This is the center of a 1U mounting position. The holes, two above and two below this center, are only 1/2" on center from their adjacent holes.
b. At the front of the rack, in the center mounting holes, install the KEPs 10–32 and 0.375
AF–CSZ EXT nuts.
c. Carefully supporting the C-Shelf assembly, loosely thread the knurled thumbscrews through the rack into the two nuts just installed.
d. Go to the back of the rack and position a slide bracket next to the corresponding holes at the back of the rack. Slide the bracket to the rear until the threaded screw hole tabs are flush with the inside of the rack rail.
e. Insert two 10-32 Pan 0.325 XRCS screws through the rack rail into the threaded screw hole tab and tighten loosely.
f. Repeat step e on the other side of the C-Shelf assembly.
g. Tighten all four 10-32 Pan 0.325 XRCS screws at the rear of the C-Shelf assembly.
h. Tighten the front two knurled thumbscrews.
i. Tighten the two 10-32 Pan 0.625 XRCS screws at each side of the back of the C-Shelf assembly.
3. Install the mpx100/100b into one of the two available positions in the C-Shelf:
a. Slide the mpx100/100b in (from the front of the equipment rack).
b. Make sure that the four tabs (two at the front and two at the rear) catch, to ensure that the mpx100/100b is firmly seated in the C-Shelf assembly. To ensure that all four tabs engage, hold both the C-Shelf assembly and the mpx100/100b as you slide it in the last inch.
c. Once the rear of the mpx100/100b is flush with the front of the C-Shelf assembly and all four clips are engaged, snap the Bezel on the front.
Connecting the mpx100/100b to an IP switch
To connect the mpx100/100b to an IP switch:
1. Connect one end of a standard Cat 5e or Cat 6 network cable to the IP network management
port on the mpx100/100b (Figure 1 on page 22). Connect the other end to an IP switch in a network that is accessible from the management server running HP Command View EVA.
2. Connect one end of another Cat 5e or Cat 6 network cable to the GE1 port on the mpx100/100b. Connect the other end to an IP switch in the IP network that is accessible from the servers running an iSCSI Initiator.
3. Continue with "Start the mpx100/100b" on page 45.
NOTE:
The management and GE port cables can be connected to the same IP network provided the subnet settings are configured to allow both Management and iSCSI communications.
Starting the mpx100/100b

To start the mpx100/100b:
1. Attach the AC power cord to the mpx100/100b and the power distribution unit (PDU). Verify that the mpx100's/100b's System Power LED is illuminated. The mpx100/100b runs a self-test and begins normal operation.
2. Verify that the Heartbeat LED is blinking (once per second) and that the Input Fault LED is not illuminated.

Figure 10 shows the location of the ports and LEDs on the mpx100/100b.

Figure 10 The mpx100 port and LED locations
1. Management port (10/100 Ethernet)
2. Heartbeat LED
3. Input Power LED
4. System Fault LED
5. FC ports
6. iSCSI ports
7. RS-232 port
8. AC power
Setting the mpx100/100b management port to use HP StorageWorks Command View EVA
Communication between the mpx100/100b and HP Command View EVA is established through the IP management port of the mpx100/100b and the IP connection of the HP Command View EVA application server. This link is necessary for iSCSI device discovery and subsequent iSCSI settings of the mpx100/100b through HP Command View EVA.
To set the mpx100/100b management port:
1. Use Telnet to connect to the mpx100/100b management port, or connect to the mpx100/100b serial port using the HP-supplied connector.
NOTE:
The mpx100/100b management port's default IP address is 10.0.0.1/255.0.0.0. The mpx100/100b serial port's default setting is 115200/8/n/1.
2. Log in with the username guest and the password password.
3. Enter the command admin start with the password config to enable administrator privileges.
4. Enter the set mgmt command and follow the prompts to set the management port properties to enable HP Command View EVA to communicate with the mpx100/100b management port. (An example session follows the note below.)
NOTE:
Changes to the management port using the set mgmt command are effective immediately. Communications may be lost if Telnet was used to log in to the mpx100/100b.
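For example, steps 1 through 4 look like the following in a Telnet session using the defaults listed above; the prompted management port values depend on your network:

telnet 10.0.0.1
username: guest
password: password
mpx100 #> admin start
Password: config
mpx100 (admin) #> set mgmt
(follow the prompts to set the management port IP address, subnet mask, and gateway)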
5. Start HP Command View EVA and select the iSCSI Devices folder under the Hardware folder in
the HP StorageWorks Command View EVA window.
6. Click Discover iSCSI Devices (Figure 11). If the iSCSI device is not discovered, click Add iSCSI Device, enter the mpx100/100b IP address, and then click OK.
Figure 11 Discover iSCSI devices
7. Click iSCSI Controller 1 under Hardware/iSCSI Devices (Figure 12).
Figure 12 Hardware/iSCSI devices
8. Enter the IP address and subnet mask for Port 1 on the IP Ports tab, and then click Save changes.
4 Configuring the mpx100/100b

This chapter contains the following major sections:
• General description, page 49
• Installation and maintenance, page 55

General description of the mpx100/100b

The mpx100/100b

The mpx100 serves as the data transport between iSCSI hosts and the EVA storage system (see Figure 13). The mpx100/100b connects to iSCSI hosts through IP connections, and to an EVA storage system directly through FC ports or FC switch ports.
Figure 13 The mpx100 external components (callouts: Management Port 10/100 Ethernet, Heartbeat LED, Input Power LED, System Fault LED, RS232 Port, FC Ports, iSCSI Ports, AC Power)

Chassis LEDs

The chassis LEDs shown in Figure 14 provide information about the mpx100's/100b's operational status. These LEDs include the Input Power LED, Heartbeat LED, and the System Fault LED. To apply power to the mpx100/100b, plug the power cord into the mpx100/100b AC power receptacle and into a 100-240 VAC power source.
Figure 14 Chassis LEDs
1. Heartbeat LED
2. Input Power LED
3. System Fault LED

Power LED (green)

The Power LED indicates the input voltage status at the mpx100/100b logic circuit board. During normal operation, this LED is illuminated to indicate that the mpx100/100b logic circuit board is receiving the DC voltage from the power supply.

Heartbeat LED (green)

The Heartbeat LED indicates the status of the internal mpx100/100b processor and any power-on self test (POST) error results. Following a normal power-up, the Heartbeat LED blinks about once per second to indicate that the mpx100/100b passed the POST and that the internal mpx100/100b processor is running. See "Heartbeat LED blink patterns" on page 174 for a description of all Heartbeat LED blink codes.
System Fault LED (amber)

The System Fault LED illuminates to indicate that a fault exists in the mpx100/100b firmware or hardware. Fault conditions include POST errors and over-temperature conditions. The Heartbeat LED shows a blink code for POST errors, IP address conflicts, and the over-temperature condition. See "Heartbeat LED blink patterns" on page 174 for more information.
Chassis controls

The Maintenance button shown in Figure 15 is the only chassis control; it is used to reset the mpx100/100b or to recover a disabled mpx100/100b.
Figure 15 Chassis controls
Maintenance button
The Maintenance button is a multifunction momentary switch on the front panel. It provides the following functions:
• Reset
• Select boot image
• Reset IP address
• Enable DHCP
• Factory defaults
Resetting the mpx100/100b

To reset the mpx100/100b, use a pointed nonmetallic tool to briefly press and release (less than two seconds) the Maintenance button. The mpx100/100b responds as follows:
1. All the chassis LEDs are illuminated.
2. After approximately two seconds, the power-on self-test (POST) begins, extinguishing the Heartbeat and System Fault LEDs.
3. When the POST is complete, the Power LED illuminates and the Heartbeat LED flashes once per second.
Resetting the IP address

To reset the mpx100/100b and restore the maintenance port IP address to the default of 10.0.0.1, briefly press the Maintenance button with a pointed nonmetallic tool, releasing the button after six seconds (six flashes of the Heartbeat LED). The mpx100/100b boots and sets the maintenance port to IP address 10.0.0.1. The boot time is less than one minute.

NOTE:
Setting the IP address by this method is not persistent; to make the change persistent, use the command line interface (CLI) or GUI.

Enabling DHCP

Reset the mpx100/100b and configure the maintenance port to use DHCP to access its IP address. However, enabling DHCP by this method is not persistent. To make the change persistent, use the CLI or GUI.

Use a pointed nonmetallic tool to briefly press the Maintenance button. Release the button after seven seconds (observe seven flashes of the Heartbeat LED). The mpx100/100b boots and configures the maintenance port for DHCP. The boot time is less than one minute.
Resetting to factory default configuration

To reset the mpx100/100b and restore it to the factory default configuration (that is, to reset passwords, set the maintenance port IP address to 10.0.0.1, disable the iSCSI ports with no IP address, erase presentations, and erase discovered initiators and targets), use a pointed nonmetallic tool to briefly press the Maintenance button. Release the button after twenty seconds (observe twenty flashes of the Heartbeat LED). The mpx100/100b boots and is restored to factory defaults. The boot time is less than one minute.
FC ports

The mpx100/100b has two Fibre Channel 1-Gb/s/2-Gb/s ports. The ports are labeled FC1 and FC2, as shown in Figure 16. Each of the ports is served by an SFP optical transceiver and is capable of 1-Gb/s or 2-Gb/s transmission. The SFPs are hot-pluggable. User ports can self-discover both the port type and transmission speed when connected to devices or switches. The port LEDs, located to the right of their respective ports, provide status and activity information.

Figure 16 Fibre Channel LEDs
1. Activity LED
2. Status LED
3. Alert LED

Port LEDs

Each port has three LEDs: the amber LED (top) indicates activity, the green LED (middle) indicates status, and the yellow LED (bottom) indicates an alert condition. Table 13 specifies the colored LEDs associated with port activity.
Activity LED (amber)
The Activity LED indicates that data is passing through the port.
Status LED (green)

The Status LED indicates the logged-in or initialization status of the connected devices. The Status LED flashes to indicate the link rate: once for 1 Gb/s and twice for 2 Gb/s.
Alert LED (yellow)
The Alert LED indicates any port fault conditions.
Table 13 Port LED messages

Activity (Amber LED / Green LED / Yellow LED):
• Power off: OFF / OFF / OFF
• Power on (before F/W initialization): ON / ON / ON
• On-line link established at 1 Gbps: OFF / 3 seconds on, flash off once / OFF
• Activity at 1 Gbps: ON / 3 seconds on, flash off once / OFF
• On-line link established at 2 Gbps: OFF / 3 seconds on, flash off twice / OFF
• Activity at 2 Gbps: ON / 3 seconds on, flash off twice / OFF
• Power on (after F/W initialization and/or loss of synchronization): OFF / ON / ON
• Firmware error: Error code / Error code / ON
• Beacon: Flash / Flash / Flash

Transceivers

The mpx100/100b supports SFP optical transceivers for the FC ports. A transceiver converts electrical signals to and from optical laser signals to transmit and receive data. Duplex fiber optic cables plug into the transceivers, which then connect to the devices. A 1-Gb/s/2-Gb/s FC port is capable of transmitting at 1 Gb/s or 2 Gb/s; however, the transceiver must also be capable of delivering these rates.

The SFP transceivers are hot-pluggable. This means you can remove or install a transceiver while the mpx100/100b is operating without harming the mpx100/100b or the transceiver. However, communication with the connected device will be interrupted.
iSCSI/Gigabit Ethernet ports

The iSCSI/Gigabit Ethernet ports shown in Figure 17 are RJ-45 connectors that provide connection to an Ethernet network through a 10/100/1000 Base-T Ethernet cable. The ports are labeled GE1 and GE2.

These ports have two LEDs: the Link Status LED (green) and the Activity LED (green). The Link Status LED illuminates continuously when an Ethernet connection has been established. The Activity LED illuminates when data is being transmitted or received over the connection.

Figure 17 Gigabit Ethernet (iSCSI) ports
1. Activity LED
2. Status LED

Port LEDs

The iSCSI/TOE ports each have two LEDs: the Link Status LED (green) and the Activity LED (green).
Activity LED (green)

The Activity LED illuminates when data is being transmitted or received over the Ethernet connection.

Link Status LED (green)

The Link Status LED illuminates continuously when an Ethernet connection has been established.

Management Ethernet port

The management Ethernet port shown in Figure 18 is an RJ-45 connector. It provides a connection to a management server through a 10/100 Base-T Ethernet cable. The port is labeled MGMT.

The management server is a Windows server that is used to configure and manage the mpx100/100b. You can manage the mpx100/100b over an Ethernet connection using the mpx Manager or the CLI.

The management Ethernet port has two LEDs: the Link Status LED (green) and the Activity LED (green). The Link Status LED illuminates continuously when an Ethernet connection is established. The Activity LED illuminates when data is transmitted or received over the Ethernet connection.

Figure 18 Management Ethernet port
1. Status LED
2. Activity LED
Serial port

The mpx100/100b is equipped with an RS-232 serial port for maintenance purposes. The serial port location is shown in Figure 19, and is labeled IOIOI. You can manage the mpx100/100b through the serial port using the CLI.

Figure 19 Serial port

The serial port is connected using a standard 8-wire Ethernet cable and the supplied dongle to convert the Ethernet RJ-45 connector to a female DB9 connector. Refer to Table 14 for definitions of the serial port pins for both the mpx100's/100b's RJ-45 connector and the dongle DB9 connector.
Table 14 Serial port pin definition

Dongle DB9 pin number / mpx100/100b RJ-45 pin number / Description:
• 1 / 5 / Data carrier detect (DCD)
• 2 / 6 / Receive data (RxD)
• 3 / 3 / Transmit data (TxD)
• 4 / 2 & 7 / Data terminal ready (DTR)
• 5 / 4 / Signal ground (GND)
• 6 / 5 / Data set ready (DSR)
• 7 / 1 / Request to send (RTS)
• 8 / 8 / Clear to send (CTS)
• 9 / N/C / Ring indicator (RI)

Installation and maintenance

This section describes how to install and configure the mpx100/100b. It also describes how to update new firmware and recover a disabled mpx100/100b.

For the mpx100/100b hardware installation, see "Installing and upgrading EVA iSCSI connectivity" on page 39.

Power requirements
Power requirements for the mpx100/100b are 0.5 A at 100 VAC or 0.25 A at 240 VAC.
Environmental conditions
Consider the factors that affect the climate in your facility, such as equipment heat dissipation and ventilation. The mpx100/100b requires the following operating conditions:
• Operating temperature range: 5–40°C (41–104°F)
• Relative humidity: 15–80%, noncondensing
Connecting the server to the mpx100/100b
You can manage the mpx100/100b using the HP StorageWorks mpx Manager or the CLI. HP StorageWorks mpx Manager requires an Ethernet connection to the mpx100/100b management port. The CLI uses either an Ethernet connection or a serial connection. Choose the mpx100/100b management method, then connect the management server to the mpx100/100b in one of the following ways:
• Indirect Ethernet connection from the management server to the mpx100/100b RJ-45 connector through an Ethernet switch or hub.
• Direct Ethernet connection from the management server to the mpx100/100b RJ-45 Ethernet connector.
• Serial port connection from the management workstation to the mpx100/100b RS-232 serial port connector. This requires a 10/100 Base-T straight cable and a dongle.
Configuring the server

If you plan to use the CLI to configure and manage the mpx100/100b, you must configure the server appropriately. This involves either setting the server IP address for Ethernet connections, or configuring the server's serial port.
If you plan to use HP StorageWorks mpx Manager to manage the mpx100/100b, see "Install the management application" on page 56.

Setting the server IP address

The IP address of a new mpx100/100b is 10.0.0.1/255.0.0.0. To ensure that your server is configured to communicate with the 10.0.0 subnet, see the following instructions for your server.
To set the server address for a Windows server:
1. Select Start > Settings > Control Panel > Network and Dial-up Connections.
2. Select Make New Connection.
3. Click Connect to a private network through the Internet, and then click Next.
4. Enter 10.0.0.253 for the IP address.
Configuring the server serial port

To configure the server serial port:
1. Connect the cable with the supplied dongle from a COM port on the management server to the serial port on the mpx100/100b.
2. Configure the server serial port according to your operating system.

For Windows:
a. Open the HyperTerminal application.
b. Select Start > Programs > Accessories > HyperTerminal > HyperTerminal.
c. Enter a name for the mpx100/100b connection and choose an icon in the Connection Description window, and then click OK.
d. Enter the following COM Port settings in the COM Properties window, and then click OK.
• Bits per second: 115200
• Data Bits: 8
• Parity: None
• Stop Bits: 1
• Flow Control: None

For Linux:
i. Set up minicom to use the serial port. Create or modify the /etc/minirc.dfl file with the following content:

pr port /dev/ttyS0
pu minit
pu mreset
pu mhangup

The line pr port /dev/ttyS0 specifies port 0 on the server. Choose the pr setting to match the server port to which you connected the mpx100/100b.

ii. To verify that all users have permission to run minicom, review the /etc/minicom.users file and confirm that the line ALL exists or that there are specific user entries.

3. Continue with "Connect the mpx100/100b to AC power" on page 58.
Installing the mpx Manager as a standalone application

You can manage the mpx100/100b using HP StorageWorks mpx Manager as a standalone application. The mpx Manager software is available in the HP StorageWorks iSCSI Connectivity Option for Enterprise Virtual Arrays software kit. The Linux kit is provided in .tar.gz format and the Windows kit is provided as a CD image (.iso file or .zip file). The kits are available at:
http://h18006.www1.hp.com/products/storageworks/evaiscsiconnect/index.html

Table 15 lists the requirements for the management servers running HP StorageWorks mpx Manager.

NOTE:
For Windows, you can write the .iso file to a CD-ROM or copy the .zip file to a folder.
Table 15 mpx Manager GUI server requirements

Component: Requirement
• Operating system:
Windows (Guest OS: Windows 2003):
- Windows Server 2003 SP1, 2003 R2, 2003 SP2
- Windows Server 2003 x64 Edition SP1, 2003 R2, 2003 SP2
Linux (Itanium and x86/x64 systems):
Red Hat:
- Red Hat Advanced Server Linux 4, Update 3 (kernel 2.6.9-34 using the bundled iSCSI driver) x86
- Red Hat Enterprise Linux 3, Update 5 x86
- Red Hat Linux Enterprise Server 4 x86
- Red Hat Enterprise Linux 5 server x86
SUSE Linux:
- SUSE Linux Enterprise Server 8, SP4 x86
- SUSE Linux Enterprise Server 9 SP3 (kernel 2.6.5-7.244 using the bundled iSCSI driver) x86
- SUSE Linux Enterprise Server 10 x86
• Memory: 256 MB or more
• Disk space: 150 MB per installation
• Processor: 500 MHz or faster
• Hardware: CD-ROM drive, RJ-45 Ethernet port, RS-232 serial port (optional)
• Internet browser: Microsoft Internet Explorer 5.0 and later; Netscape Navigator 4.72 and later; Mozilla 1.02 and later; Safari; Java 2 Runtime Environment to support web applet
HP StorageWorks mpx Manager for Windows
You can install HP StorageWorks mpx Manager on a Windows server. To install the HP StorageWorks mpx Manager application from the HP StorageWorks iSCSI connectivity option for Enterprise Virtual Arrays installation CD:
1. Close all programs currently running, and insert the CD into the management server's CD-ROM drive.
2. Click Management Software in the upper left corner of the product introduction screen to display the table. If the product introduction screen does not open, open the CD with Windows Explorer and run the installation program.
3. Locate your platform in the table and click Install.
HP StorageWorks mpx Manager for Linux
This section describes how to install HP StorageWorks mpx Manager on a Linux server.
NOTE:
In the following procedure, replace n.n.nn and n.nnbnn with the file names (for example, 2.0.65 and 2.65b85).

1. Download the hpmpx_n.n.nn_linux_install.tar.gz file from http://h18006.www1.hp.com/products/storageworks/evaiscsiconnect/.
The .gz file contains the GUI .bin file and a GUI install README file.
2. Unpack the file to a temporary directory. For example:
tar -zxvf hpmpx_n.n.nn_linux_install.tar.gz
3. Issue the following command to start the install:
./hpmpxn.n.nnbnn_linux_install.bin
A chmod may be necessary prior to execution (see the example below).
4. Follow the installation instructions on the screen and note the installation location. The default directory is /opt/Hewlett-Packard/mpxManager.
Connecting the mpx100/100b to AC power

To power up the mpx100/100b, connect the power cord to the power receptacle on the mpx100/100b chassis and to a grounded AC outlet. The mpx100/100b responds in the following sequence:
1. The chassis LEDs (Input Power, Heartbeat, System Fault) illuminate, followed by all port LEDs.
2. After a couple of seconds, the Heartbeat and System Fault LEDs are extinguished, while the Input Power LED remains illuminated. The mpx100/100b is executing the POST.
3. After approximately 45 seconds, the POST is complete and the Heartbeat LED starts flashing at a one-second rate. Any other blink pattern indicates an error has occurred. See "Heartbeat LED blink patterns" on page 174 for more information about error blink patterns.
Starting and configuring the mpx100/100b

Starting HP StorageWorks mpx Manager for Windows

Select one of the following options to start HP StorageWorks mpx Manager for Windows:
1. Select HP StorageWorks mpx Manager from the Start menu.
2. Double-click the HP StorageWorks mpx Manager shortcut.
3. Click the HP mpx Manager icon.
The Connect to the mpx100/100b window is displayed (Figure 20).
4. Enter the host name or IP address of the management port of the mpx100/100b.

NOTE:
Click Connect to add mpx100's/100b's to be managed simultaneously.

5. Click Connect to display the selected HP mpx Manager.
A typical mpx Manager is displayed (Figure 21).

Figure 20 Connect to the mpx100/100b

Figure 21 Typical mpx Manager display
Starting HP StorageWorks mpx Manager for Linux

To start HP StorageWorks mpx Manager for Linux:
1. Enter the mpx100/100b command:
<install_directory> ./HPmpx100Manager
The Connect to the mpx100/100b window is displayed (Figure 20 on page 59).
2. Enter the host name or IP address of the management port of the mpx100/100b.
3. Click Connect to display the selected HP mpx Manager.
A typical mpx Manager is displayed (Figure 21 on page 59).
Configuring the mpx100/100b

You can configure the mpx100/100b using the HP StorageWorks mpx Manager application or the CLI.

To configure the mpx100/100b using the CLI:
1. Open a command window according to the type of server and connection:
• Ethernet (all platforms): Open a Telnet session with the default mpx100/100b IP address and log in to the mpx100/100b with the default account name and password (guest/password):

telnet 10.0.0.1
username: guest
password: password

• Serial—Windows: Open the HyperTerminal application on a Windows platform.
• Select Start > Programs > Accessories > HyperTerminal > HyperTerminal.
• Select the connection you created earlier and click OK.
• Serial—Linux: Open a command window and enter the following command:

minicom

2. Open an Admin session and enter the commands to set up both iSCSI ports and the management interface. See "Using the command line interface" on page 141 for command descriptions.

mpx100 #> admin start
Password: config
mpx100 (admin) #> set mgmt
.........
mpx100 (admin) #> set iscsi 1
.........
mpx100 (admin) #> set iscsi 2
.........
Configuring the mpx100/100b iSCSI ports for Internet Storage Name Service (iSNS) (optional)
The mpx100/100b iSCSI ports support Microsoft iSNS Server software. iSNS is a protocol designed to facilitate the automated discovery, management, and conguration of iSCSI devices on a TCP/IP network. For more information, see the Microsoft website:
http://www.microsoft.com/downloads/details.aspx?familyid=0DBC4AF5-9410-4080-A545-F90B45650E20&displaylang=en
You can configure each port to register as an iSCSI Target with an iSNS server using the HP StorageWorks mpx Manager GUI or the CLI.
To configure iSNS on an iSCSI port using the HP StorageWorks mpx Manager:
1. Double-click the desired mpx100/100b in the topology display.
2. Select the Information tab.
3. Enter a unique name in the Symbolic Name box.
NOTE:
The Symbolic Name syntax must follow the iSCSI standard for IQN naming. Only the following ASCII characters (U+0000 to U+007F) are allowed:
• ASCII dash character (-) = U+002d
• ASCII dot character (.) = U+002e
• ASCII colon character (:) = U+003a
• ASCII lower-case characters (a through z) = U+0061 through U+007a
• ASCII digit characters (0 through 9) = U+0030 through U+0039
See section 3.2.6 of Request for Comments (RFC) 3720 (iSCSI) for a description of the iSCSI name string profile. You can access RFC 3720 at the ftp site: ftp://ftp.rfc-editor.org/in-notes/rfc3720.txt.
4. In the IQN uses Symbolic Name box, select Enable. When this setting is enabled, the mpx100/100b
embeds the symbolic name as part of the Target iqn on the iSNS server. This also helps users to recognize the target if multiple mpx's are registered with the same iSNS server.
5. Select an iSCSI port under the mpx100/100b Manager tab.
6. Select the Enable iSNS check box, and then enter the IP address of the iSNS server.
7. Click Save.
To configure iSNS on an iSCSI port using the CLI:
1. Enter the mpx100 (admin) #> set system command:

mpx100 (admin) #> set system

A list of attributes with formatting and current values will follow.
Enter a new value or simply press the Enter key to accept the current value.
If you wish to terminate this process before reaching the end of the list
press 'q' or 'Q' and the Enter key to do so.

WARNING:
If enabled by operator, the Symbolic Name can be embedded as part of the
iSCSI Name. Changes to the iSCSI name will be effective after a reboot.
Only valid iSCSI name characters will be accepted. Valid characters
include alphabetical (a-z, A-Z), numerical (0-9), colon, hyphen, and period.

System Symbolic Name (Max = 64 characters) [ ]
Embed Symbolic Name (0=Enable,1=Disable) [Disabled]
System Log Level (Min = 0, Max = 3) [0]

All attribute values that have been changed will now be saved.
mpx100 (admin) #>
TIP:
You can exit the set system command window without making changes to the existing values by pressing q or Q, and then pressing Enter.
2. Enter a unique Symbolic Name or press Enter to accept the current value.
System Symbolic Name (Max = 64 characters) [ ] MPX100-65
Embed Symbolic Name (0=Enable,1=Disable) [Disabled]
System Log Level (Min = 0, Max = 3) [0]
NOTE:
The Symbolic Name syntax must follow the iSCSI standard for IQN naming. Only the following ASCII characters (U+0000 to U+007F) are allowed:
• ASCII dash character (-) = U+002d
• ASCII dot character (.) = U+002e
• ASCII colon character (:) = U+003a
• ASCII lower-case characters (a through z) = U+0061 through U+007a
• ASCII digit characters (0 through 9) = U+0030 through U+0039
See section 3.2.6 of Request for Comments (RFC) 3720 (iSCSI) for a description of the iSCSI name string profile. You can access RFC 3720 at the ftp site: ftp://ftp.rfc-editor.org/in-notes/rfc3720.txt.
3. Enable the Embed Symbolic Name option.
When this setting is enabled, the mpx100/100b embeds the symbolic name as part of the Target IQN on the iSNS server. This also helps users to recognize the target if multiple mpxs are registered with the same iSNS server.

System Symbolic Name (Max = 64 characters) [ ] MPX100-65
Embed Symbolic Name (0=Enable,1=Disable) [Disabled] 0
System Log Level (Min = 0, Max = 3) [0]
4. Reboot the mpx100/100b.
The new attribute values are saved and in effect.
5. After enabling the iSCSI port for iSNS, verify that an iSCSI port target entry appears in the iSNS
server database.
Example 1. iSNSCLI command issued on iSNS server

C:> isnscli listnodes
. . .
iqn.1986-03.com.hp:fcgw.mpx100:mpx100-65.1.50001fe150002f70.50001fe150002f7f

Where:
• iqn.1986-03.com.hp:fcgw.mpx100: The standard iqn name for all mpx100's/100b's
• mpx100-65: Symbolic Name
• 1: iSCSI port number
• 50001fe150002f70.50001fe150002f7f: Presented EVA port
Installing the mpx100/100b firmware

The mpx100/100b ships with the latest firmware installed. You can upgrade the firmware from the management server. You can use the HP StorageWorks mpx Manager application or the CLI to install new firmware.
WARNING!
Installing and then activating the new firmware is disruptive. For activation, you must reboot the mpx100/100b. However, the reboot can result in incorrect data being transferred between devices connected to the mpx100/100b. HP recommends suspending activity on the interfaces before activating firmware.
For the latest mpx100/100b firmware, go to the HP website:
http://h18006.www1.hp.com/products/storageworks/evaiscsiconnect/index.html
Using HP StorageWorks mpx Manager to install mpx100/100b firmware

To install the firmware using the HP StorageWorks mpx Manager:
1. Identify the mpx100/100b in the topology display. Double-click to open it.
2. Click Select in the Firmware Upload window and browse to select the firmware file to upload.
3. Click Start to begin the firmware load process. A message is displayed, warning you that the mpx100/100b will need to be rebooted to activate the firmware.
4. Click OK to continue the firmware installation, or click Cancel to stop the installation.
Using the CLI to install mpx100/100b firmware
To install the firmware using the CLI:
1. Download the latest firmware version and place it on a server that can access the mpx100/100b management port IP address.
2. FTP to the mpx100/100b management port and log in with the following information:
Username: ftp
Password: ftp
3. To set FTP for binary transfer and upload the image, enter the following commands:
ftp> bin
ftp> put mpx100-x_x_x_x.bin
ftp> quit
4. Use Telnet to connect to the mpx100/100b and log in as guest.
Username: guest
Password: password
5. Set administrative privileges to allow for the firmware upgrade with the following information:
mpx100> admin start
password: config
6. Upgrade the mpx100/100b using the image command.
mpx100 admin> image unpack mpx100-x_x_x_x.bin
7. Reboot the mpx100/100b for the new firmware to take effect.
mpx100 admin> reboot
5 Setting up the iSCSI Initiator and storage
This chapter contains the following topics:
• iSCSI Initiator setup, page 65
• iSCSI Initiator setup for Windows (single-path), page 65
• Storage setup for Windows (single-path)
• About Microsoft Windows Server 2003 Scalable Networking Pack
• iSCSI Initiator version 3.10 setup for Apple Mac OS X (single-path)
• iSCSI Initiator setup for Linux, page 75
• iSCSI Initiator setup for Solaris (single-path), page 81
• iSCSI Initiator setup for VMware, page 86
• iSCSI Initiator setup for OpenVMS, page 89
iSCSI Initiator setup
The IP host or iSCSI Initiator uses an iSCSI driver to enable target resource recognition and attachment to the EVA iSCSI connectivity option over IP. An iSCSI driver may be part of the operating system (software initiator) or embedded on an iSCSI HBA (hardware initiator). An iSCSI driver is configured with the Gigabit Ethernet IP address of each mpx100/100b iSCSI port with which the host is to transport SCSI requests and responses.
The iSCSI Initiator sees the EVA LUNs as if they were block-level drives attached directly to the server.
iSCSI Initiator setup for Windows (single-path)
To set up the iSCSI Initiator for Windows:
1. Install the iSCSI Initiator:
a. Download the HP StorageWorks iSCSI Connectivity Option for Enterprise Virtual Arrays software kit from the HP website:
http://h18006.www1.hp.com/products/storageworks/evaiscsiconnect/index.html. Select Support for your product, then Download drivers and software.
NOTE:
The software kit is available in a .zip or .iso file. You can write the .iso file to a CD-ROM or copy the .zip file to a folder.
b. Insert the CD-ROM. Run Launch.exe if the CD-ROM does not start automatically.
c. Click Install iSCSI software package, accept the default settings, and reboot the server.
d. Click the Microsoft iSCSI Initiator icon on your desktop.
The iSCSI Initiator Properties window opens.
NOTE:
The terms initiator and host are used interchangeably. The initiator is the host that is accessing the storage.
e. Click the Discovery tab (Figure 22).
Figure 22 Adding an IP address
f. Click Add to add the IP address of Port 1 on the mpx100/100b.
g. Click OK to exit.
h. Click the Targets tab.
The target status is Inactive (Figure 23).
Figure 23 Inactive target status
i. Select a single target and then click Log On.
j. Click Automatically restore this connection when the system boots (do not enable multipath), and then click OK (Figure 24). The target status is Connected.
Figure 24 Connected target status
NOTE:
Each target represents a path to the EVA. Logging into multiple targets may inadvertently present the same LUN multiple times to the operating system.
Storage setup for Windows (single-path)
To set up LUNs using HP Command View:
1. Set up LUNs using HP Command View.
See Using HP Command View EVA to configure LUNs to iSCSI initiators, page 129.
2. Set up the iSCSI drive on the iSCSI Initiator:
a. Open the Windows Computer Management window.
b. Select Disk Management.
c. Select Action > Rescan Disks.
The newly created Vdisk should appear as a disk to the operating system; if it does not, reboot the iSCSI Initiator.
d. Format and partition the disk.
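As an illustrative sketch of step d from the command line (the disk number and drive letter below are hypothetical; confirm the correct disk in the list disk output before selecting it), you can use diskpart and format:
C:\> diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> create partition primary
DISKPART> assign letter=E
DISKPART> exit
C:\> format E: /fs:ntfs /q
Disk Management accomplishes the same result interactively.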
About Microsoft Windows Server 2003 Scalable Networking Pack
The Microsoft Windows Server 2003 Scalable Networking Pack contains functionality for offloading TCP network processing to hardware. TCP Chimney is a feature that allows TCP/IP processing to be offloaded to hardware. Receive Side Scaling allows receive packet processing to scale across multiple CPUs.
HP's NC3xxx Multifunction Gigabit server adapters and Alacritech's SES2xxxxx adapters support TCP offload functionality using Microsoft's Scalable Networking Pack (SNP).
For more support details, see the latest HP adapter information.
To download the SNP package and for more details, see: http://support.microsoft.com/kb/912222.
SNP setup with HP NC3xxx GbE multifunction adapter
Microsoft's Scalable Networking Pack works in conjunction with HP's NC3xxx Multifunction Gigabit server adapters and Alacritech's SES2xxxxx adapters for Windows 2003 only.
To set up SNP on a Windows 2003 server:
1. Install the hardware and necessary software for the NC3xxx Multifunction Gigabit server adapter, following the manufacturer's installation procedures.
2. Download the SNP package from the Microsoft website: http://support.microsoft.com/kb/912222.
a. To start the installation immediately, click Run, or
b. To copy the download to your computer for installation at a later time, click Save.
A reboot is required after successful installation.
3. After reboot, verify TCP offload settings by opening a Command Prompt window and issuing the command:
C:\>netsh interface ip show offload
The following is displayed:
Offload Options for interface "33-IP Storage Subnet" with index: 10003:
TCP Transmit Checksum
IP Transmit Checksum
TCP Receive Checksum
IP Receive Checksum
TCP Large Send
TCP Chimney Offload
4. To modify TOE Chimney settings, use the commands:
>netsh int ip set chimney enabled
>netsh int ip set chimney disabled
For more information, go to: http://support.microsoft.com/kb/912222
iSCSI Initiator version 3.10 setup for Apple Mac OS X (single-path)
The EVA4400 and EVA connectivity option supports the Macintosh Xtend iSCSI Initiator provided by ATTO Technologies. For more details, please visit http://www.attotech.com.
Set up the iSCSI Initiator for Apple Mac OS X
1. Install the ATTO iSCSI Macintosh Initiator v3.10 following the install instructions provided by the vendor.
2. Run the Xtend SAN application to discover and configure the EVA iSCSI targets. The Xtend SAN iSCSI Initiator can discover targets either by static address or iSNS.
For static address discovery:
a. Select Discover Targets and then select Discover by DNS/IP (Figure 25).
Figure 25 Discover targets
b. Add the static IP address of the mpx iSCSI port in the Address field and then select Finish (Figure 26).
Figure 26 Add static IP address
c. Select a target from the Discovered Target list and then click Add (Figure 27).
Figure 27 Discovered target list
NOTE:
The mpx iSCSI port may present several iSCSI targets to the Xtend SAN iSCSI Initiator. Select only one target from the list.
3. For iSNS discovery:
a. Select Initiator and then enter the iSNS name or IP address in the iSNS Address field (Figure 28).
Figure 28 iSNS discovery and verification
b. Test the connection from the initiator to the iSNS server by selecting Verify iSNS. If successful, select Save.
If necessary, working on the iSNS server, make the appropriate edits to add the Xtend SAN iSCSI Initiator to any iSNS discovery domains that include mpx iSCSI targets.
c. Select Discover Targets.
d. Select Discover by iSNS.
A list of mpx targets appears under Discovered Targets (Figure 29).
Figure 29 Discovered targets
NOTE:
The mpx iSCSI port may present several iSCSI targets to the Xtend SAN iSCSI Initiator. Select only one target from the list.
e. Select the newly added target under Host name in the left frame.
f. Check the Visible box (Figure 30). This allows the initiator to display the target status.
g. Check the Auto Login box. This configures the iSCSI Initiator to automatically log in to the iSCSI target at system startup.
h. Click Save.
Figure 30 Selecting newly added target
i. Select Status, select Network Node, and then select Login to connect to the mpx target (Figure 31).
The Network Node displays a status of Connected and the target status light turns green.
Figure 31 Select status
Storage setup for Apple Mac OS X
1. Present LUNs using HP Command View EVA.
See Using HP Command View EVA to configure LUNs to iSCSI initiators, page 129.
2. Verify that the EVA LUNs are presented to the Macintosh iSCSI Initiator:
a. Open the Xtend SAN iSCSI application.
b. Select the mpx100b target entry under the host name.
c. Click the LUNs button.
A list of presented EVA LUNs is displayed (Figure 32).
Figure 32 Presented EVA LUNs
NOTE:
If no LUNs appear in the list, log out and then log in again to the target, or a system reboot may be required.
3. Set up the iSCSI drive on the iSCSI Initiator:
a. Open Disk Utilities from the Apple Mac OS X Finder Applications list.
b. Format and partition the EVA LUN as needed.
iSCSI Initiator setup for Linux
Installing and configuring the SUSE Linux Enterprise 10 iSCSI driver
Configure the initiator using the built-in GUI-based tool or the open-iscsi administration utility using the iscsiadm command. See the iscsiadm(8) man pages for detailed command information.
1. Modify the Initiator Name by issuing the following command:
# vi /etc/initiatorname.iscsi
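A typical entry in this file looks like the following sketch; the hostname suffix shown here is hypothetical, and the name must be a valid IQN that is unique to the host:
InitiatorName=iqn.1996-04.de.suse:01:myhostname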
2. To configure the Initiator and Targets, start the iSCSI Initiator applet by finding it in the YaST Control Center under Network Services, and then set the service to start at boot time (Figure 33).
Figure 33 Configure initiator and targets
3. Click the Discovered Targets tab and enter your iSCSI target IP address (Figure 34).
Figure 34 Discovered Targets tab
4. Log in to the target (Figure 35).
Figure 35 Target login
5. Click the Connected Targets tab, and then click the Toggle Start-Up button on each target listed
so the targets start automatically (Figure 36).
Figure 36 Connected Targets tab
Installing and configuring for Red Hat 5
To install and configure for Red Hat 5:
NOTE:
The iSCSI driver package is included but is not installed by default. Install the package named iscsi-initiator-utils-6.2.0.742-0.5.el5 during or after operating system installation.
1. Use the iscsiadm command to control discovery and connectivity:
# iscsiadm -m discovery -t st -p 10.6.0.33:3260
2. Edit the initiator name:
# vi /etc/iscsi/initiatorname.iscsi
3. To start the iSCSI service, use the service command:
# service iscsi start
4. Verify that the iSCSI service autostarts:
# chkconfig iscsi on
NOTE:
For more detail, see the man pages regarding the iscsiadm open-iscsi administration utility.
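After discovery, starting the iscsi service normally logs the initiator in to the discovered nodes. If you need to log in to one target explicitly, open-iscsi also supports a manual node login; the target IQN below is a placeholder for a name returned by the discovery command in step 1:
# iscsiadm -m node -T <target-iqn> -p 10.6.0.33:3260 --login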
Installing and configuring for Red Hat 3, 4 and SUSE 8 and 9
To install and configure for Red Hat 3 and 4 and for SUSE 8 and 9:
NOTE:
The iSCSI driver is included with the Red Hat 4 and SUSE 9 distributions and is installed by default. Configuration is the same for Red Hat 3, 4, SUSE 8 and 9.
1. Update /etc/iscsi.conf to include the IP address of your iSCSI target. A sample configuration file might include entries like this:
DiscoveryAddress=33.33.33.101
For a more detailed description of the configuration file format, enter:
man iscsi.conf
2. Enter the following command to manually start iSCSI services to test your configuration:
/etc/init.d/iscsi start
3. Modify the /etc/initiatorname.iscsi file to reflect a meaningful name for the initiator. For example:
InitiatorName=iqn.1987-05.com.cisco:servername.yourcompany.com
NOTE:
In most cases, the only part of the file requiring modification is after the colon.
If there are problems starting the iscsi daemon, they are usually caused by an incorrect IP address or an ill-formatted initiator name.
Installing the initiator for Red Hat 3 and SUSE 8
If you are upgrading from a previous installation of an iSCSI driver, HP recommends that you remove the /etc/initiatorname.iscsi file before installing the new driver. See the following website for the latest version of the Linux driver for EVA iSCSI connectivity:
http://sourceforge.net/projects/linux-iscsi
NOTE:
The Linux driver supports both Red Hat 3 and SUSE 8. See Installing and configuring for Red Hat 3, 4 and SUSE 8 and 9 for information on how to configure the Linux iSCSI Initiator.
Installing the iSCSI driver
In a newly installed Red Hat Linux kernel, an iSCSI instance may be running. Before installing the iSCSI driver, you must stop the instance.
To stop the instance:
1. Run setup.
2. Deselect iSCSI.
3. Reboot the system.
See the Readme file in the tar ball for more information on configuring the iSCSI Initiator.
To install the iSCSI driver:
1. Use tar(1) to decompress the source archive into a directory of your choice. The archive contains a subdirectory corresponding to the archive name. Use the following commands to decompress the source archive:
cd /usr/src
tar xvzf /path/to/linux-iscsi-<version>.tgz
cd linux-iscsi-<version>
2. Compile the iSCSI driver with the make command. If your kernel sources are not in the usual place, add TOPDIR=/path/to/kernel to the make command line, or edit the definition of TOPDIR in the Makefile.
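As a sketch, with kernel sources in the default location the build typically reduces to running make in the driver directory (substitute the actual version string for <version>):
# cd /usr/src/linux-iscsi-<version>
# make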
3. Install the driver as root. If you are currently using the iSCSI driver, first unmount all iSCSI devices and unload the old iSCSI driver. If your Linux distribution includes an iSCSI driver, it may be necessary to uninstall that package first. See the Readme file in the tar ball for more information.
4. Configure the driver. See Installing and configuring for Red Hat 3, 4 and SUSE 8 and 9, page 78.
Assigning device names
Because Linux assigns SCSI device nodes dynamically whenever a SCSI logical unit is detected, the mapping from device nodes such as /dev/sda or /dev/sdb to iSCSI targets and logical units may vary.
Variations in process scheduling and network delay can result in iSCSI targets being mapped to different SCSI device nodes every time the driver is started. Because of this variability, configuring applications or operating system utilities to use the standard SCSI device nodes to access iSCSI devices can result in sending SCSI commands to the wrong target or logical unit.
To provide consistent naming, the iSCSI driver scans the system to determine the mapping from SCSI device nodes to iSCSI targets. The iSCSI driver creates a tree of directories and symbolic links under /dev/iscsi to make it easier to use a particular iSCSI target's logical unit.
The directory tree under /dev/iscsi contains subdirectories for each iSCSI bus number, each target ID number on the bus, and each logical unit number for each target. For example, the whole disk device for bus 0, target ID 0, and LUN 0 would be /dev/iscsi/bus0/target0/LUN0/disk.
In each logical unit directory there is a symbolic link for each SCSI device node that can be connected to that particular logical unit. These symbolic links are modeled after the Linux devfs naming convention:
• The symbolic link disk maps to the whole-disk SCSI device node, such as /dev/sda or /dev/sdb.
• The symbolic links part1 through part15 map to each partition of that SCSI disk. For example, a symbolic link can map to partitions /dev/sda1 through /dev/sda15, or to as many partitions as necessary.
NOTE:
These symbolic links exist regardless of the number of disk partitions. Opening the partition devices results in an error if the partition does not actually exist on the disk.
• The symbolic link mt maps to the auto-rewind SCSI tape device node for the LUN, /dev/st0 for example. Additional links for mtl, mtm, and mta map to the other auto-rewind devices /dev/st0l, /dev/st0m, /dev/st0a, regardless of whether these device nodes actually exist or could be opened.
• The symbolic link mtn maps to the no-rewind SCSI tape device node, if any. For example, this LUN maps to /dev/nst0. Additional links for mtln, mtmn, and mtan map to the other no-rewind devices such as /dev/nst0l, /dev/nst0m, /dev/nst0a, regardless of whether those device nodes actually exist or could be opened.
• The symbolic link cd maps to the SCSI CD-ROM device node, if any, for the LUN, /dev/scd0 for example.
• The symbolic link generic maps to the SCSI generic device node, if any, for the LUN, /dev/sg0 for example.
Because the symlink creation process must open all of the SCSI device nodes in /dev in order to determine which nodes map to iSCSI devices, you may see many modprobe messages logged to syslog indicating that modprobe could not find a driver for a particular combination of major and minor numbers. This message can be ignored. The messages occur when Linux is unable to find a driver to associate with a SCSI device node that the iSCSI daemon is opening as part of its symlink creation process. To prevent these messages from occurring, remove the SCSI device nodes that do not contain an associated high-level SCSI driver.
Target bindings
The iSCSI driver automatically maintains a bindings file, /var/iscsi/bindings. This file contains persistent bindings to ensure that the same iSCSI bus and target ID number are used for every iSCSI session with a particular iSCSI TargetName, even when the driver is repeatedly restarted.
This feature ensures that the SCSI number in the device symlinks (described in "Device names" on page 79) always maps to the same iSCSI target.
NOTE:
Because of the way Linux dynamically allocates SCSI device nodes as SCSI devices are found, the driver does not and cannot ensure that any particular SCSI device node (/dev/sda, for example) always maps to the same iSCSI TargetName. The symlinks described in "Device names" on page 79 are intended to provide application and fstab file persistent device mapping and must be used instead of direct references to particular SCSI device nodes.
If the bindings file grows too large, lines for targets that no longer exist may be manually removed by editing the file. Manual editing should not be needed, however, since the driver can maintain up to 65,535 different bindings.
Mounting file systems
Because the Linux boot process normally mounts file systems listed in /etc/fstab before the network is configured, adding mount entries for iSCSI devices to /etc/fstab will not work. The iscsi-mountall script manages the checking and mounting of devices listed in the file /etc/fstab.iscsi, which has the same format as /etc/fstab. This script is automatically invoked by the iSCSI startup script.
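For example, an /etc/fstab.iscsi entry that mounts a LUN through the persistent /dev/iscsi symlink tree might look like the following sketch; the mount point and file system type are illustrative:
/dev/iscsi/bus0/target0/LUN0/part1 /mnt/iscsi ext3 defaults 0 0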
NOTE:
If iSCSI sessions are unable to log in immediately due to network or authentication problems, the iscsi-mountall script can time out and fail to mount the file systems.
Mapping inconsistencies can occur between SCSI device nodes and iSCSI targets, such as mounting the wrong device due to device name changes resulting from iSCSI target configuration changes or network delays. Instead of directly mounting SCSI devices, HP recommends one of the following options:
• Mount the /dev/iscsi tree symlinks.
• Mount file system UUIDs or labels (see man pages for mke2fs, mount, and fstab).
• Use logical volume management (see Linux LVM).
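As a sketch of the label-based option (the label and mount point are illustrative, and label support in /etc/fstab.iscsi is an assumption based on the mount command's LABEL= syntax), create the file system with a label and then reference the label in /etc/fstab.iscsi:
# mke2fs -j -L iscsi_data /dev/iscsi/bus0/target0/LUN0/part1
LABEL=iscsi_data /mnt/data ext3 defaults 0 0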
Unmounting file systems
It is very important to unmount all file systems on iSCSI devices before the iSCSI driver stops. If the iSCSI driver stops while iSCSI devices are mounted, buffered writes may not be committed to disk, and file system corruption can occur.
Since Linux will not unmount file systems that are being used by a running process, any processes using those devices must be stopped (see fuser(1)) before iSCSI devices can be unmounted.
To avoid file system corruption, the iSCSI shutdown script automatically stops all processes using devices in /etc/fstab.iscsi, first by sending them SIGTERM, and then by sending any remaining processes SIGKILL. The iSCSI shutdown script unmounts all iSCSI file systems and stops the iSCSI daemon, terminating all connections to iSCSI devices.
CAUTION:
File systems not listed in /etc/fstab.iscsi cannot be automatically unmounted.
Presenting EVA storage for Linux
To set up LUNs using HP Command View:
1. Set up LUNs using HP Command View. For procedure steps, see Using HP Command View EVA to configure LUNs to iSCSI initiators, page 129.
2. Set up the iSCSI drive on the iSCSI Initiator:
a. Restart the iSCSI services:
/etc/init.d/iscsi restart
b. Verify that the iSCSI LUNs are presented to the operating system by entering the following command:
fdisk -l
iSCSI Initiator setup for Solaris (single-path)
The Solaris iSCSI driver is included in the Solaris 10 operating system with the following software packages:
• SUNWiscsir – Sun iSCSI Device Driver (root)
• SUNWiscsiu – Sun iSCSI Management Utilities (usr)
EVA LUN 0 with Solaris iSCSI Initiators
By default, LUN 0 is assigned to an iSCSI Initiator when the initiator logs in to the mpx100b iSCSI target and when HP Command View EVA presents a virtual disk to an iSCSI host.
Because the Solaris iSCSI Initiator does not recognize LUN 0 as the EVA controller console LUN, the initiator tries to bring LUN 0 online, resulting in the following warning:
Mar 21 08:04:09 hdxs8j iscsi: [ID 248668 kern.warning] WARNING: iscsi driver unable to online iqn.1986-03.com.hp:fcgw.mpx100:hdxh05-m2.0.50001fe1500aef60.50001fe1500aef68 LUN 0
LUN 0 can be prevented from being sent to the Solaris iSCSI Initiator by disabling the Controller LUN AutoMap parameter with the mpx system settings.
LUN 0 is not presented to any host entry in HP Command View 8.0 with any iSCSI host mode setting of Solaris.
Disabling Controller LUN AutoMap using the mpx CLI
To disable Controller LUN AutoMap using the CLI:
1. Use Telnet to connect to the mpx management port or connect to the mpx serial port using the HP-supplied connector.
The mpx management port's default IP address is 10.0.0.1/255.0.0.0. The mpx serial port's default setting is 115200/8/n/1.
2. To log in, enter:
• Username: guest
• Password: password
3. To enable administrator privileges, enter:
admin start
config
4. Issue the set system command.
5. Follow the prompts to disable Controller LUN AutoMap.
The following is an example of the set system command:
mpx100b (admin) #> set system
A list of attributes with formatting and current values will follow.
Enter a new value or simply press the Enter key to accept the current value.
If you wish to terminate this process before reaching the end of the list press 'q' or 'Q' and the Enter key to do so.
WARNING: If enabled by operator, the Symbolic Name can be embedded as part of the iSCSI Name. Changes to the settings below will be effective after a reboot. Only valid iSCSI name characters will be accepted. Valid characters include alphabetical (a-z, A-Z), numerical (0-9), colon, hyphen, and period.
System Symbolic Name (Max=64 characters)       [mpx100-66 ]
Embed Symbolic Name (0=Enable, 1=Disable)      [Enabled   ]
Controller Lun AutoMap (0=Enable, 1=Disable)   [Disabled  ]1
System Log Level (Min=0, Max=3)                [0         ]
All attribute values that have been changed will now be saved.
mpx100b (admin) #>
NOTE:
In the Warning message above, the first sentence is intended to read: "If enabled by the operator, the Symbolic Name can be embedded as part of the target IQN name, but only valid iSCSI name characters are accepted."
Prepare for a Solaris iSCSI configuration
Complete the following tasks before starting a Solaris iSCSI configuration:
1. Become a superuser.
2. Verify that the iSCSI software packages are installed:
# pkginfo SUNWiscsiu SUNWiscsir
system SUNWiscsiu Sun iSCSI Management Utilities (usr)
system SUNWiscsir Sun iSCSI Device Driver (root)
3. Verify that you are running a Solaris 10 1/06 or later release.
4. Confirm that your TCP/IP network is set up.
Configure for EVA iSCSI target discovery
This procedure assumes that you are logged in to the local system where you want to configure access to an iSCSI target device. The EVA target can be discovered by either using the IP address of the MPX iSCSI port or using an iSNS server address.
Set target discovery using MPX iSCSI port address
To set target discovery using the MPX iSCSI port address:
1. Become a superuser.
2. Add the IP address of the mpx iSCSI port to the initiator's discovery list:
# iscsiadm add discovery-address 33.33.66.64
3. Enable the SendTargets discovery method:
# iscsiadm modify discovery --sendtargets enable
4. Create the iSCSI device links for the local system:
# devfsadm -i iscsi
5. Verify that mpx targets are available to the initiator:
# iscsiadm list target
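A successful listing resembles the following sketch; the IQN is taken from the earlier iSNS example, and the TPGT, ISID, and connection values are illustrative:
# iscsiadm list target
Target: iqn.1986-03.com.hp:fcgw.mpx100:mpx100-65.1.50001fe150002f70.50001fe150002f7f
        TPGT: 1
        ISID: 4000002a0000
        Connections: 1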
NOTE:
The iSCSI connection is not initiated until the discovery method is enabled.
Set target discovery using iSNS server address
To set target discovery using the iSNS server address:
1. Become a superuser.
2. Add the IP address of the iSNS server to the initiator's discovery list:
# iscsiadm add isns-server 33.33.66.64
3. Enable the iSNS discovery method:
# iscsiadm modify discovery --isns enable
4. Enable the SendTargets discovery method:
# iscsiadm modify discovery --sendtargets enable
5. Create the iSCSI device links for the local system:
# devfsadm -i iscsi
6. Verify that mpx targets are available to the initiator:
# iscsiadm list target
NOTE:
The iSCSI connection is not initiated until the discovery method is enabled.
For more details on using the iscsiadm command, see the iscsiadm man pages. For more details on iSCSI Initiator setup, see the Sun Microsystems System Administration Guide, Devices and File Systems, Section 15.
Creating an iSCSI host and virtual disks for the Solaris iSCSI Initiator
See Using HP Command View EVA to configure LUNs to iSCSI initiators, page 129, to create an iSCSI host entry and to present LUNs to an iSCSI host in HP Command View. The host mode setting for Solaris is Linux/Mac.
Command View 6.0.2 and 7.0 only—Remove LUN 0 from the Solaris iSCSI Initiator using the CLI
By default, HP Command View 6.0.2 and 7.0 will assign LUN 0 to each iSCSI Initiator that is presented a virtual disk. Because the Solaris iSCSI Initiator does not recognize LUN 0 as the EVA controller console LUN, the initiator will try to bring LUN 0 online, resulting in the following warning:
Mar 21 08:04:09 hdxs8j iscsi: [ID 248668 kern.warning] WARNING: iscsi driver unable to online iqn.1986-03.com.hp:fcgw.mpx100:hdxh05-m2.0.50001fe1500aef60.50001fe1500aef68 LUN 0
To remove LUN 0 from the Solaris iSCSI Initiator using the CLI:
1. Use telnet to connect to the mpx100/100b management port, or connect to the mpx100/100b serial
port using the HP-supplied connector.
The mpx100/100b management port’s default IP address is 10.0.0.1/255.0.0.0. The mpx100/100b serial port’s default setting is 115200/8/n/1.
2. To log in, type:
• Username: guest
• Password: password
3. To enable administrator privileges, type:
admin start
config
4. Type the command:
LUNmask rm
Follow the prompts to remove the Solaris iSCSI Initiator from each iSCSI presented target.
For example:
> telnet 10.6.7.65
login: guest
password: password
> admin start
password: config
mpx100 (admin) #> LUNmask rm
Index (WWNN,WWPN/iSCSI Name)
----- -----------------------------------------------
0     50:00:1f:e1:50:00:2f:70,50:00:1f:e1:50:00:2f:7e
Please select a Target from the list above ('q' to quit): 0
LUN Vendor
--- ------
0   HP
1   HP
2   HP
3   HP
4   HP
. . .
Please select a LUN from the list above ('q' to quit): 0
Index Initiator
----- ---------
0     iqn.2005-03.com:sanlabmac-s01
1     iqn.1986-03.com.sun:rack81-s16
2     iqn.1991-05.com.microsoft:rack77-s16.sanbox.com
3     iqn.1991-05.com.microsoft:rack77-s14.sanbox.com
4     iqn.1996-04.de.SUSE:bl7-04.sanbox.com
5     iqn.1996-04.de.SUSE:bl7-03.sanbox.com
6     iqn.1996-04.de.SUSE:bl7-02.sanbox.com
. . .
Please select an Initiator to remove ('a' to remove all, 'q' to quit): 1
All attribute values that have been changed will now be saved.
mpx100 (admin) #>
Accessing iSCSI disks
After the devices have been discovered by the Solaris iSCSI Initiator, the login negotiation occurs automatically. The Solaris iSCSI driver determines the number of LUNs available and creates the device nodes. Then, the iSCSI devices can be treated as any other SCSI device.
If you want to make the iSCSI drive available on reboot, create the file system and add an entry to the /etc/vfstab file as you would with a UFS file system on a SCSI device.
You can view the iSCSI disks on the local system with the format utility, for example:
# format
AVAILABLE DISK SELECTIONS:
0. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
   /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w500000e010685cf1,0
1. c0t2d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
   /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w500000e0106e3ba1,0
2. c3t0d0 <ABCSTORAGE-100E-00-2.2 cyl 20813 alt 2 hd 16 sec 63>
   /iscsi/disk@0000iqn.2001-05.com.abcstorage%3A6-8a0900-477d70401-b0fff044352423a2-hostname-020000,0
3. c3t1d0 <ABCSTORAGE-100E-00-2.2 cyl 20813 alt 2 hd 16 sec 63>
   /iscsi/disk@0000iqn.2001-05.com.abcstorage%3A6-8a0900-3fcd70401-085ff04434f423a2-hostname-010000,0
. . .
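As a sketch of making one of these iSCSI disks available on reboot (the device name, slice, and mount point are illustrative; create the slice with the format utility first), build a UFS file system and add a /etc/vfstab entry:
# newfs /dev/rdsk/c3t0d0s2
# mkdir /mnt/iscsi
Then add an entry such as the following to /etc/vfstab:
/dev/dsk/c3t0d0s2 /dev/rdsk/c3t0d0s2 /mnt/iscsi ufs - yes -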
Monitoring your iSCSI configuration
Display information and modify settings on the iSCSI Initiator and target devices by using the following commands:
iscsiadm list initiator-node
iscsiadm list discovery
iscsiadm list target
iscsiadm list target-param
iscsiadm modify initiator-node
iscsiadm modify discovery
iscsiadm modify target-param
For more details on using the iscsiadm command, see the iscsiadm man pages. For more details on iSCSI Initiator setup, see the Sun Microsystems System Administration Guide, Devices and File Systems, Section 15.
iSCSI Initiator setup for VMware
The software iSCSI Initiator is built into the ESX server's VMkernel and uses standard GigE NICs to connect to the mpx100/100b.
To set up software-based iSCSI storage connectivity:
1. Install the appropriate license from VMware to enable the iSCSI software driver, per the instructions from VMware.
2. Configure the VMkernel TCP/IP networking stack for iSCSI support. Configure the VMkernel and service console with a dedicated virtual switch and a dedicated NIC for iSCSI data traffic. Follow the instructions from VMware. Figure 37 shows an example of a configuration.
Figure 37 Configuration tab
3. Open a firewall port by enabling the iSCSI software client service.
a. Using VMware's VI client, select the server.
b. Click the Configuration tab, and then click Security Profile.
c. Select the check box for iSCSI service to enable iSCSI traffic.
d. Click OK (Figure 38).
Figure 38 Security profile information
4. Enable the iSCSI software initiators:
a. In VMware's VI client, select the server from the inventory panel.
b. Click the Configuration tab, and then click Storage Adapters under Hardware.
c. Under iSCSI Software Adapter, choose the available software initiator.
d. Click the Properties link of the software adapter.
e. The iSCSI initiator properties dialog box is displayed. Click Configure.
f. The General properties dialog box displays (Figure 39). Select the Enabled checkbox.
Figure 39 General properties dialog box
g. Click OK.
5. Set up Discovery Addressing for the software initiator:
a. Repeat Step 4 to open the iSCSI initiator Properties dialog box.
b. Click the Dynamic Discovery tab.
c. Click Add to add a new iSCSI target. The Add Send Target Server dialog box is displayed.
d. Enter the mpx100's/100b's iSCSI IP address (Figure 40) and then click OK.
Figure 40 Add send targets server dialog box
6. See Creating an iSCSI initiator host via HP Command View EVA, page 130, for instructions to change the host mode of the VMware initiator to VMware.
7. See iSCSI Initiator setup for Windows (single-path) for instructions to set up LUNs using HP Command View.
8. To verify that the LUNs are presented to the VMware host:
a. Rescan for new iSCSI LUNs.
b. In VMware's VI client, select a server and click the Configuration tab.
c. Choose Storage Adapters in the hardware panel and click Rescan above the Storage Adapters
panel.
The Rescan dialog box displays; see Figure 41.
d. Select the Scan for New Storage Devices and the Scan for New VMFS Volumes checkboxes.
e. Click OK.
The LUNs are now available for ESX server.
Figure 41 Rescan dialog box
NOTE:
When presenting iSCSI storage to Virtual Machines you must:
• Create Virtual Machines using LSI Logic emulation.
• Present iSCSI storage to a Virtual Machine either as a data store created on an iSCSI device, or as a raw device mapping.
iSCSI Initiator setup for OpenVMS
Beginning with OpenVMS V8.3-1H1, the OpenVMS Software-Based iSCSI Initiator TDK is included as part of the standard OpenVMS installation. The processes for configuring and enabling the initiator are detailed in the following sections:
• Configuring TCP/IP services, page 90
• Configuring VLANs, page 90
• Enabling Ethernet jumbo frames, page 90
• Configuring target discovery, page 90
• Starting the iSCSI Initiator, page 92
• Stopping the iSCSI Initiator, page 92
• Setting up storage for OpenVMS, page 92
Configuring TCP/IP services
Before you start the iSCSI Initiator, TCP/IP must be properly configured and enabled. The initiator will only function with the TCP/IP stack provided by HP TCP/IP Services for OpenVMS. Only the basic TCP/IP core functionality needs to be configured. Note that particular attention should be paid to the system's hostname, which is a defining element in the iSCSI Initiator name (a unique name assigned to each host running the iSCSI Initiator software). TCP/IP must be running and the hostname must be set before the iSCSI Initiator is loaded.
Configuring VLANs
While not mandatory, if the initiator will be operating on a shared network (a network not dedicated solely to storage), it is suggested that storage traffic be isolated to a dedicated Virtual LAN (VLAN). The VLAN will logically isolate storage traffic into its own subnet.
In order to configure and use a VLAN, the hosts, network switches, and targets must all support IEEE 802.1Q. For information on configuring VLANs on the OpenVMS hosts, see the HP OpenVMS Version 8.3 New Features and Documentation Overview and the HP OpenVMS System Management Utilities Reference Manual. For information on configuring VLANs on the network switches, see your switch manufacturer's documentation. VLAN configuration on the mpx100/100b targets will be performed during their installation and configuration (see section Installation and maintenance, page 55).
Enabling Ethernet jumbo frames
If Ethernet jumbo frames are to be used for iSCSI traffic, they must be enabled on the initiators (OpenVMS hosts), network switches, and targets. To enable jumbo frames system-wide on an OpenVMS host node using the LAN_FLAGS system parameter, see the HP OpenVMS System Management Utilities Reference Manual. To enable jumbo frames on a per-device basis, see the HP OpenVMS System Manager's Manual.
Configuring target discovery
The OpenVMS Software-Based iSCSI Initiator supports two target discovery mechanisms: manual and iSNS. At least one of these methods must be configured on each iSCSI-enabled OpenVMS host:
Manual target discovery
With manual target discovery, the initiator is supplied with a list of IP addresses for each iSCSI target port. Each mpx100 has two iSCSI target ports. (The management port is not an iSCSI target port.) An initiator using this discovery method will periodically poll each target port in its manual discovery list to gather a list of accessible storage devices.
1. To create a manual target list, copy the file
SYS$COMMON:[SYSMGR]ISCSI$MANUAL_TARGETS.TEMPLATE
to
SYS$COMMON:[SYSMGR]ISCSI$MANUAL_TARGETS.DAT
The directory SYS$SPECIFIC:[SYSMGR] can be used if the file is to be node-specific rather than cluster-wide.
2. Edit the new file and add a list of the IP names or addresses of the iSCSI target ports that should be probed for available storage devices. The header included in this file defines the proper format for these addresses. The manual target list is automatically loaded when the iSCSI Initiator is started. By default, changes to this file will not take effect until the system is rebooted or until the initiator is stopped and restarted.
3. To manually force the initiator to recognize additions to the manual target list while the initiator is
running, issue the following command:
$ mcr iscsi$control_program manual
Target ports that have been added to this file since the initiator was started will be added to the list of target ports that are periodically scanned by the initiator. Note that target ports that have been removed from this file will not be removed from the initiator's scan list until the system is rebooted or the initiator is stopped and restarted.
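As a sketch, after the header comments the data file is simply a list of iSCSI target port addresses; the addresses below are illustrative, and the header in the file defines the exact format:
16.10.11.2
16.10.11.3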
NOTE:
Regardless of whether IP addresses or IP names are used in the manual target data file, every iSCSI target port must be known to TCP/IP. The command TCPIP SHOW HOST can be used to determine if the target port is known to TCP/IP. The host can be added to the local TCP/IP host database with the command TCPIP SET HOST. A target port not known to TCP/IP will not be probed by the iSCSI Initiator.
NOTE:
The default TCP/IP port used for iSCSI traffic is 3260. If a non-default port is to be utilized, the addresses listed in the manual targets file must include the port number. The header included in this file defines the format that must be used when including a port number. There is no need to include the port number if the default will be used. Additionally, if a non-default port number is to be utilized, the iSCSI ports on the mpx100 must be configured with that non-default port number.
NOTE:
The OpenVMS Software-Based iSCSI Initiator does not currently support IPv6. All IP addresses must be IPv4.
iSNS target discovery
The Internet Storage Name Service (iSNS) protocol provides a target discovery mechanism similar to the discovery services found in Fibre Channel. Among the capabilities provided by iSNS is the ability for storage targets to register with an iSNS server. Acting as iSNS clients, initiators are able to query this server to retrieve a list of potential targets. The initiator can then use this list to query the individual targets to find its storage devices. The use of iSNS requires the availability of an iSNS server that is network accessible by both the storage targets and the initiators (OpenVMS hosts). Currently, the Microsoft iSNS Server is the only iSNS server supported for use with the OpenVMS Software-Based iSCSI Initiator. To use iSNS target discovery, both the initiators and targets must be properly configured with the IP address of the iSNS server.
1. To configure the OpenVMS initiators for iSNS, copy the file
SYS$COMMON:[SYSMGR]ISCSI$ISNS_SERVICES.TEMPLATE
to
SYS$COMMON:[SYSMGR]ISCSI$ISNS_SERVICES.DAT
The directory SYS$SPECIFIC:[SYSMGR] can be used if the file is to be node-specific rather than cluster-wide.
2. Edit the new file and add a list of the IP names or addresses of the iSNS servers that should be probed for available targets.
The header in this file defines the proper format for these addresses. The iSNS server list is automatically loaded when the iSCSI Initiator is started. By default, changes to this file do not take effect until the system is rebooted or until the initiator is stopped and restarted.
3. To manually force the initiator to recognize additions to the iSNS server list while the initiator is
running, issue the following command:
$ mcr iscsi$control_program isns
iSNS servers that have been added to this file since the initiator was started will be added to the list of servers that are periodically queried by the initiator. Note that servers that have been removed from this file will not be removed from the initiator's scan list until the system is rebooted or the initiator is stopped and restarted.
EVA iSCSI connectivity user guide
91
NOTE:
Regardless of whether IP addresses or IP names are used in the iSNS server data file, every iSNS server listed must be known to TCP/IP. Use the command TCPIP SHOW HOST to determine if the server is known to TCP/IP. Use the command TCPIP SET HOST to add the server to the local TCP/IP host database. A server not known to TCP/IP will not be queried by the iSCSI Initiator.
NOTE:
The default TCP/IP port used for iSNS traffic is 3205. This port number cannot be configured.
NOTE:
The OpenVMS Software-Based iSCSI Initiator does not currently support IPv6. All IP addresses must be IPv4.
Starting the iSCSI Initiator
After configuring the hosts and targets, the OpenVMS Software-Based iSCSI Initiator can be started by executing the DCL command procedure SYS$STARTUP:ISCSI$INITIATOR_STARTUP.COM. To start the iSCSI Initiator each time the host is booted, add the following line to SYS$MANAGER:SYSTARTUP_VMS.COM:
$ @SYS$STARTUP:ISCSI$INITIATOR_STARTUP.COM
NOTE:
TCP/IP must be fully loaded before the iSCSI Initiator is started.
Stopping the iSCSI Initiator
Generally, there should be no need to stop the iSCSI Initiator after it has been loaded. However, should the need arise to stop the initiator, execute the DCL command procedure
SYS$STARTUP:ISCSI$INITIATOR_SHUTDOWN.COM.
NOTE:
If TCP/IP is stopped on a system running the iSCSI Initiator, the initiator will be automatically stopped and unloaded as part of the rundown of TCP/IP. After restarting TCP/IP, the iSCSI Initiator must be manually restarted.
NOTE:
HP strongly recommends that traffic to all iSCSI target storage devices be quieted prior to shutting down the initiator.
Setting up storage for OpenVMS
To set up storage for OpenVMS:
1. Set up LUNs using HP Command View EVA.
See Using HP Command View EVA to configure LUNs to iSCSI initiators, page 129.
2. Discover and congure the iSCSI drives on the OpenVMS host using the following command:
$ mcr sysman io auto/log
NOTE:
This step is required only if the LUNs are configured via HP Command View EVA after the initiator has been loaded. The command procedure used to load the initiator issues this command by default.
6 Setting up the iSCSI Initiator for multipathing
This chapter contains the following topics:
• Overview, page 95
• Configuring multipath with Windows iSCSI Initiator, page 100
• Configuring multipath with the VMware iSCSI Initiator, page 112
• Configuring multipath with the Solaris 10 iSCSI Initiator, page 116
• Configuring multipath with the OpenVMS iSCSI Initiator, page 123
• Device Mapper Multipath Enablement Kit for HP StorageWorks Disk Arrays, page 126
Overview
The mpx100/100b supports iSCSI multipath in a single or dual mpx100/100b configuration with a single EVA storage system.
As with single-path mpx100/100b configurations, presenting EVA LUNs to an iSCSI Initiator is a two-step process. First the EVA LUN must be presented to the mpx100/100b, and then it must be presented from the mpx100/100b to the iSCSI Initiator.
Because the mpx100/100b is bridging SCSI commands and data from the host to storage with iSCSI and Fibre Channel, it is important to understand what multipathing means from each technology's perspective.
NOTE:
The examples in this section show direct connect configurations between the EVA and the mpx100/100b. Note, however, that iSCSI multipath is also supported in fabric connections.
IMPORTANT:
Windows XP Professional is not supported by Microsoft's Multipath I/O (MPIO).
Understanding Fibre Channel multipathing for the mpx100/100b
EVA storage array perspective
The mpx100/100b has two FC ports, each having a unique WWPN. When connected to the EVA storage system, these WWPNs behave like any other WWPN accessing the array. When the iSCSI host entry is created in HP Command View, all FC port WWPNs are included in the iSCSI host properties.
The mpx100/100b FC ports do not necessarily have to be connected to the EVA storage controller to be added to the iSCSI host entry. Upon iSCSI device discovery, HP Command View polls the mpx100/100b for both FC port WWPNs and adds them to the iSCSI host entry FC port list.
If a single mpx100/100b is discovered as an iSCSI controller (see Figure 42), both of its FC ports will be included in the single HP Command View iSCSI host entry. If two mpx100's/100b's are discovered (see Figure 43), the single HP Command View EVA iSCSI host entry contains four FC ports—two from each mpx100/100b.
Figure 42 Example: Single mpx100 multipath—WWPN configuration
Figure 43 Example: Dual mpx100 multipath—WWPN configuration
The mpx100/100b perspective
When an EVA storage system FC port connects to the mpx100/100b, the mpx100/100b creates in its database a unique iSCSI target name that includes the WWPN of the EVA storage controller port. This iSCSI target name is used by the iSCSI Initiator to connect to the EVA storage system.
Each EVA FC port must be connected to the mpx100/100b in order for the mpx100/100b to create an iSCSI target entry (see Figure 44).
Figure 44 Example: Single mpx100 multipath—iSCSI target configuration
As with any other Fibre Channel host entry within HP Command View EVA, when a LUN is presented to the iSCSI host entry the LUN is presented to all mpx100/100b FC port WWPNs contained in that entry (see Figure 45).
Figure 45 Example: Dual mpx100 multipath—iSCSI target configuration
Understanding iSCSI multipathing with the mpx100/100b
Once the EVA target and LUNs are presented to the mpx100/100b FC port WWPNs, they can be presented to iSCSI Initiators through the mpx100/100b iSCSI GbE ports.
Although each Fibre Channel target and its LUNs are received by the mpx100/100b through separate FC ports, all targets are presented from each iSCSI port of the mpx100/100b to the IP network (see Figure 46).
Figure 46 Example: Fibre Channel to IP port/target translation
The iSCSI Initiator discovers the targets presented out of the mpx100 /100b GE ports by discovering the GE port's IP addresses and logging in to the target (see Figure 47).
Figure 47 Example: Single mpx100 iSCSI port IP addressing
Each iSCSI GbE port has duplicate paths to the LUN because each GE port is presenting two unique targets with the same LUN information. Each unique target should be considered an iSCSI path to the LUN.
iSCSI Initiator perspective
Because of the mpx100's/100b's ability to present multiple Fibre Channel targets through one physical iSCSI GbE connection, it is possible for the iSCSI Initiator to connect—and use—more virtual paths than are physically available on the FC/IP networks.
NOTE:
Using the iSCSI target discovery process, it is up to the iSCSI Initiator to determine how many targets to log in to, bearing in mind that one target equals one path.
For the preceding examples, Table 16 shows all the paths available to an iSCSI Initiator connected to both iSCSI GbE ports of the mpx100/100b.
Table 16 Single mpx100/100b multipath configuration
iSCSI Initiator—virtual path   mpx100/100b iSCSI GbE port—physical path   EVA FC port—physical path
iqn.199601…..f678              16.10.11.02                                50:05:08:b4:01:01:f6:78
iqn.199602…..f67c              16.10.11.02                                50:05:08:b4:01:01:f6:7c
iqn.199603…..f678              16.10.11.03                                50:05:08:b4:01:01:f6:78
iqn.199604…..f67c              16.10.11.03                                50:05:08:b4:01:01:f6:7c
Adding another mpx100/100b and two more EVA ports to this configuration results in the configuration shown in Figure 48 and Table 17:
Figure 48 Example: Dual mpx100 iSCSI port IP addressing
Table 17 provides an example of a dual multipath configuration.
Table 17 Example: Dual mpx100/100b multipath configuration
iSCSI Initiator—virtual path   mpx100/100b iSCSI GbE port—physical path   EVA FC port—physical path
iqn.199601…..f678              16.10.11.02                                50:05:08:b4:01:01:f6:78
iqn.199602…..f67c              16.10.11.02                                50:05:08:b4:01:01:f6:7c
iqn.199603…..f678              16.10.11.03                                50:05:08:b4:01:01:f6:78
iqn.199604…..f67c              16.10.11.03                                50:05:08:b4:01:01:f6:7c
iqn.199605…..f679              16.10.11.04                                50:05:08:b4:01:01:f6:79
iqn.199606…..f67d              16.10.11.04                                50:05:08:b4:01:01:f6:7d
iqn.199607…..f679              16.10.11.05                                50:05:08:b4:01:01:f6:79
iqn.199608…..f67d              16.10.11.05                                50:05:08:b4:01:01:f6:7d
The iSCSI Initiator may use all virtual paths as if they were physical paths following the rules/restrictions of the iSCSI multipath software residing on the iSCSI Initiator.
The iSCSI Initiator host can have single or multiple physical connections or links to the IP storage network.
With a single physical connection, the iSCSI virtual paths can share the same link, because IP packets with their TCP/iSCSI payloads are routed via the IP packet network addressing information.
With multiple physical connections, the MS iSCSI Initiator control panel applet allows setting a specific link to be used as the primary iSCSI session during target login. However, the remaining links are considered to be standby and will only be used if the primary link becomes unavailable.
This becomes an implicit hardware failover capability, because the initiator's routing table contains all available links to the target. If the session's link becomes unavailable, the iSCSI session ends. TCP tries another link in the routing table to renegotiate or connect to the mpx100/100b GbE port; the iSCSI Initiator and the target perform their login sequence, and I/O resumes.
Configuring multipath with Windows iSCSI Initiator
Since version 2.0, the Microsoft iSCSI Initiator includes support for establishing redundant paths for sending I/O from the initiator to the target. Setting up redundant paths properly is important to ensure high availability of the target disk. Ideally, the PC would have the paths use separate NIC cards and separate network infrastructure (cables, switches, mpx100s/100bs). Separate target ports are recommended, but are not necessary.
Microsoft MPIO support allows the initiator to log in to multiple sessions to the same target and aggregate the duplicate devices into a single device exposed to Windows. Each session to the target can be established using different NICs, network infrastructure, and target ports. If one session fails, another session can continue processing I/O without interruption to the application. The iSCSI target must support multiple sessions to the same target. The Microsoft iSCSI MPIO DSM supports a set of load balance policies that determine how I/O is allocated among the different sessions. With Microsoft MPIO, the load balance policies apply to each LUN individually.
The Microsoft iSCSI DSM assumes that all targets are active/active and can handle I/O on any path at any time. There is no mechanism within the iSCSI protocol to determine whether a target is active/active or active/passive; therefore, the mpx100/100b supports only multipath configurations with the EVA XL and the EVA GL with active/active support.
Microsoft MPIO multipathing support for iSCSI
Installing the MPIO feature for Windows Server 2008
NOTE:
Microsoft Windows Server 2008 includes a separate MPIO feature that requires installation for use. Microsoft Windows Server 2008 also includes the iSCSI Initiator. Download or installation is not required.
To install the MPIO feature for Windows Server 2008:
1. Check the box for Multipath I/O in the Add Features page (Figure 49).
2. Click Next and then click Install.