LSI MegaRAID 320 User Manual

User's Guide
MegaRAID® 320 Storage Adapters
80-00143-01 Rev. B
LSI products are not intended for use in life-support appliances, devices, or systems. Use of any LSI product in such applications without written consent of the appropriate LSI officer is prohibited.
Purchase of I²C components of LSI Corporation, or one of its sublicensed Associated Companies, conveys a license under the Philips I²C Patent Rights to use these components in an I²C system, provided that the system conforms to the I²C standard Specification as defined by Philips.
Document 80-00143-01 Rev. B, March 2008. This document describes the current version of the LSI Corporation MegaRAID 320 Storage Adapters and will remain the official reference source for all revisions/releases of these products until rescinded by an update.
LSI Corporation reserves the right to make changes to any products herein at any time without notice. LSI does not assume any responsibility or liability arising out of the application or use of any product described herein, except as expressly agreed to in writing by LSI; nor does the purchase or use of a product from LSI convey a license under any patent rights, copyrights, trademark rights, or any other of the intellectual property rights of LSI or third parties.
Copyright © 2003-2008 by LSI Corporation. All rights reserved.
TRADEMARK ACKNOWLEDGMENT LSI, the LSI logo design, Fusion-MPT, and MegaRAID are trademarks or registered trademarks of LSI Corporation. Microsoft, Windows, and Windows NT are registered trademarks of Microsoft Corporation. Novell and NetWare are registered trademarks of Novell Corporation. UNIX and UnixWare are registered trademarks of The Open Group. SCO is a registered trademark of Caldera International, Inc. Linux is a registered trademark of Linus Torvalds. PCI-X is a registered trademark of PCI SIG. All other brand and product names may be trademarks of their respective companies.
To receive product literature, visit us at http://www.lsi.com.
For a current list of our distributors, sales offices, and design resource centers, view our web page located at
http://www.lsi.com/contacts/index.html
Preface
This book is the primary reference and user's guide for the LSI MegaRAID® 320 Storage Adapters. It contains complete installation instructions for these adapters and includes specifications for them.
The MegaRAID 320 Storage Adapter family consists of the following:
MegaRAID 320-1 PCI SCSI Disk Array Controller
MegaRAID 320-2 PCI SCSI Disk Array Controller
MegaRAID 320-2E PCI Express SCSI Disk Array Controller
MegaRAID 320-2X PCI-X SCSI Disk Array Controller
MegaRAID 320-4X PCI-X SCSI Disk Array Controller
For details on how to configure the Storage Adapters, and for an overview of the software drivers, see the MegaRAID Configuration Software User's Guide.
Audience
This document assumes that you have some familiarity with RAID controllers and related support devices. The people who benefit from this book are:
Engineers who are designing a MegaRAID 320 Storage Adapter into a system
Anyone installing a MegaRAID 320 Storage Adapter in their RAID system
Organization
This document has the following chapters and appendixes:
Chapter 1, Overview, provides a general overview of the MegaRAID 320 series of PCI-to-SCSI Storage Adapters with RAID control capabilities.
Chapter 2, Hardware Installation, describes the procedures for installing the MegaRAID 320-1, -2, -2E, -2X, and -4X Storage Adapters.
Chapter 3, MegaRAID 320 Storage Adapter Characteristics, provides the characteristics and technical specifications for the MegaRAID 320-1, 320-2, 320-2E, 320-2X, and 320-4X Storage Adapters.
Chapter 4, Installing and Configuring Clusters, explains how to implement clustering to enable two independent servers to access the same shared data storage.
Appendix A, Glossary of Terms and Abbreviations, lists and explains the terms and abbreviations used in this manual.
Related Publications
MegaRAID Configuration Software User’s Guide, Document No. DB15-000269-01 (on the MegaRAID Universal Software Suite CD included with the MegaRAID 320 Storage Adapter)
MegaRAID Device Driver Installation User’s Guide, Document No. DB11-000018-02 (on the MegaRAID Universal Software Suite CD included with the MegaRAID 320 Storage Adapter)
Safety Instructions
Use the following safety guidelines to help protect your computer system from potential damage and to ensure your own personal safety.
When Using Your Computer System – As you use your computer system, observe the following safety guidelines:
Caution: Do not operate your computer system with any cover(s) (such as computer covers, bezels, filler brackets, and front-panel inserts) removed.
To help avoid damaging your computer, be sure the voltage selection switch on the power supply is set to match the alternating current (AC) power available at your location:
115 volts (V)/60 hertz (Hz) in most of North and South America and some Far Eastern countries such as Japan, South Korea, and Taiwan
230 V/50 Hz in most of Europe, the Middle East, and the Far East
Also be sure your monitor and attached peripherals are electrically rated to operate with the AC power available in your location.
To help avoid possible damage to the system board, wait 5 seconds after turning off the system before removing a component from the system board or disconnecting a peripheral device from the computer.
To help prevent electric shock, plug the computer and peripheral power cables into properly grounded power sources. These cables are equipped with 3-prong plugs to ensure proper grounding. Do not use adapter plugs or remove the grounding prong from a cable. If you must use an extension cable, use a 3-wire cable with properly grounded plugs.
To help protect your computer system from sudden, transient increases and decreases in electrical power, use a surge suppressor, line conditioner, or uninterruptible power supply.
Be sure nothing rests on your computer system's cables and that the cables are not located where they can be stepped on or tripped over.
Do not spill food or liquids on your computer. If the computer gets wet, consult the documentation that came with it.
Do not push any objects into the openings of your computer. Doing so can cause fire or electric shock by shorting out interior components.
Keep your computer away from radiators and heat sources. Also, do not block cooling vents. Avoid placing loose papers underneath your computer; do not place your computer in a closed-in wall unit or on a rug.
When Working Inside Your Computer –
Notice: Before you work inside the computer, perform the following steps:
1. Turn off your computer and any peripherals.
2. Disconnect your computer and peripherals from their power sources. Also disconnect any telephone or telecommunications lines from the computer.
Doing so reduces the potential for personal injury or shock.
Also note these safety guidelines:
Do not attempt to service the computer system yourself, except as explained in this guide and elsewhere in LSI Logic documentation. Always follow installation and service instructions closely.
When you disconnect a cable, pull on its connector or on its strain-relief loop, not on the cable itself. Some cables have a connector with locking tabs; if you are disconnecting this type of cable, press in on the locking tabs before disconnecting the cable. As you pull connectors apart, keep them evenly aligned to avoid bending any connector pins. Also, before you connect a cable, make sure both connectors are correctly oriented and aligned.
Handle components and cards with care. Don't touch the components or contacts on a card. Hold a card by its edges or by its metal mounting bracket. Hold a component such as a microprocessor chip by its edges, not by its pins.
Protecting Against Electrostatic Discharge – Static electricity can harm delicate components inside your computer. To prevent static damage, discharge static electricity from your body before you touch any of your computer’s electronic components, such as the microprocessor. You can do so by touching an unpainted metal surface, such as the metal around the card-slot openings at the back of the computer.
As you continue to work inside the computer, periodically touch an unpainted metal surface to remove any static charge your body may have accumulated. In addition to the preceding precautions, you can also take the following steps to prevent damage from electrostatic discharge (ESD):
When unpacking a static-sensitive component from its shipping carton, do not remove the component from the antistatic packing material until you are ready to install the component in your computer. Just before unwrapping the antistatic packaging, be sure to discharge static electricity from your body.
When transporting a sensitive component, first place it in an antistatic container or packaging.
Handle all sensitive components in a static-safe area. If possible, use antistatic floor pads and workbench pads.
Chapter 1 Overview
Contents
1.1 Overview 1-1
1.1.1 Operating System Support 1-3
1.1.2 Technical Support 1-3
1.2 Features 1-4
1.2.1 Memory 1-4
1.2.2 Connectors 1-4
1.2.3 RAID Features 1-4
1.2.4 Drive Roaming 1-5
1.2.5 Drive Migration 1-6
1.3 Hardware 1-6
Chapter 2 Hardware Installation
2.1 Requirements 2-1
2.2 Quick Installation 2-2
2.3 Detailed Installation 2-3
2.4 SCSI Device Cables 2-9
2.4.1 Internal SCSI Cables 2-9
2.4.2 External SCSI Cables 2-10
2.4.3 Connecting Internal SCSI Devices 2-11
2.4.4 Connecting External SCSI Devices 2-13
2.5 Replacing a Failed Controller with Data in the TBBU 2-15
2.6 After Installing the Storage Adapter 2-16
Chapter 3 MegaRAID 320 Storage Adapter Characteristics
3.1 MegaRAID 320 Storage Adapter Family 3-1
3.1.1 Single-Channel Storage Adapter 3-2
3.1.2 Dual-Channel Storage Adapters 3-4
3.1.3 Quad-Channel Storage Adapter 3-10
3.2 MegaRAID 320 Storage Adapter Characteristics 3-12
3.3 Technical Specifications 3-12
3.3.1 Storage Adapter Specifications 3-13
3.3.2 Array Performance Features 3-15
3.3.3 Fault Tolerance 3-16
3.3.4 Electrical Characteristics 3-17
3.3.5 Thermal and Atmospheric Characteristics 3-17
3.3.6 Safety Characteristics 3-17
Chapter 4 Installing and Configuring Clusters
4.1 Overview 4-1
4.2 Benefits of Clusters 4-2
4.3 Installing and Configuring Your System as Part of a Cluster 4-2
4.4 Driver Installation Instructions under Microsoft Windows 2000 Advanced Server 4-4
4.4.1 Network Requirements 4-4
4.4.2 Shared Disk Requirements 4-5
4.5 Installing the Peer Processor Device in a Windows Cluster 4-6
4.6 Installing SCSI Drives 4-12
4.6.1 Configuring the SCSI Devices 4-12
4.6.2 Terminating the Shared SCSI Bus 4-12
4.7 Installing Clusters under Windows 2000 4-13
4.7.1 Installing the Microsoft Windows 2000 Operating System 4-14
4.7.2 Setting Up Networks 4-14
4.7.3 Configuring the Cluster Node Network Adapter 4-15
4.7.4 Setting Up the First Node in Your Cluster 4-16
4.7.5 Configuring the Public Network Adapter 4-17
4.7.6 Verifying Connectivity and Name Resolution 4-18
4.7.7 Verifying Domain Membership 4-19
4.7.8 Setting Up a Cluster User Account 4-20
4.7.9 Setting Up Shared Disks 4-21
4.7.10 Configuring Shared Disks 4-21
4.7.11 Assigning Drive Letters 4-22
4.7.12 Verifying Disk Access and Functionality 4-23
4.7.13 Installing Cluster Service Software 4-24
4.7.14 Configuring Cluster Disks 4-27
4.7.15 Validating the Cluster Installation 4-33
4.7.16 Configuring the Second Node 4-34
4.7.17 Verifying Installation 4-35
4.8 Installing Clusters under Windows Server 2003 4-36
4.8.1 Cluster Service Software Installation 4-36
4.8.2 Installation Checklist 4-36
4.8.3 Shared Disk Requirements 4-38
4.8.4 Steps for Configuring the Shared Disks under Windows Server 2003 4-39
4.8.5 Cluster Service Installation Steps 4-45
4.8.6 Validating the Cluster Installation 4-55
4.8.7 Configuring the Second Node 4-58
Appendix A Glossary of Terms and Abbreviations
Customer Feedback
Figures
2.1 Inserting the MegaRAID 320 Card in a PCI Slot 2-4
2.2 Inserting the MegaRAID 320-2E Card in a PCI-Express Slot 2-5
2.3 Terminating an Internal SCSI Disk Array 2-8
2.4 SCSI Cable – 68-Pin High Density with Terminator 2-10
2.5 SCSI Cable – 68-Pin High Density without Terminator 2-10
2.6 SCSI Cable – 68-Pin VHDCI to 68-Pin VHDCI 2-10
2.7 SCSI Cable – 68-Pin VHDCI to 68-Pin HD 2-10
2.8 SCSI Cable – 68-Pin HD to 68-Pin HD 2-11
2.9 Connecting an Internal SCSI Cable to Host Adapter 2-11
2.10 Connecting Multiple Internal SCSI Devices 2-12
2.11 Connecting One External SCSI Device 2-13
2.12 Connecting Multiple External SCSI Devices 2-14
3.1 MegaRAID 320-1 Card Layout 3-2
3.2 MegaRAID 320-2 Card Layout 3-4
3.3 MegaRAID 320-2E Card Layout 3-6
3.4 MegaRAID 320-2X Card Layout 3-8
3.5 MegaRAID 320-4X Card Layout 3-10
4.1 Found New Hardware Wizard Dialog Box 4-7
4.2 Search and Installation Options 4-8
4.3 Hardware Type Dialog Box 4-9
4.4 Hardware Device Manufacturer and Model 4-10
4.5 Device Driver Dialog Box 4-11
4.6 Network and Dial-up Connections Screen 4-15
4.7 Create or Join a Cluster Dialog Box 4-25
4.8 User Account and Password Validation 4-26
4.9 Add or Removed Managed Disks Screen 4-27
4.10 Configure Cluster Networks Dialog Box 4-28
4.11 Network Connections Dialog Box 4-29
4.12 Network Connections Dialog Box 4-30
4.13 Internal Cluster Communication Dialog Box 4-31
4.14 Cluster IP Address Dialog Box 4-32
4.15 Cluster Service Confirmation 4-33
4.16 Cluster Administrator Screen 4-35
4.17 Computer Management Screen 4-39
4.18 Initialize and Convert Disk Wizard 4-40
4.19 Disk Management Screen 4-41
4.20 Select Partition Type Screen 4-42
4.21 Final Partition Wizard Screen 4-43
4.22 Computer Management 4-44
4.23 Cluster Administrator Screen 4-46
4.24 New Server Cluster Wizard Screen 4-47
4.25 Cluster Name and Domain Screen 4-48
4.26 Select Computer Screen 4-49
4.27 Configuration Analysis Screen 4-50
4.28 IP Address Screen 4-51
4.29 Cluster Service Account Screen 4-52
4.30 Proposed Cluster Configuration Screen 4-53
4.31 Creating the Cluster Screen 4-54
4.32 Selecting Properties in Cluster Administrator 4-55
4.33 Setting the Network Priority 4-56
4.34 Private Properties 4-58
4.35 Validating Cluster Administration on the Cluster Administrator 4-59
4.36 Open Connection to Cluster Window 4-60
4.37 Select Computers Dialog Box 4-61
4.38 Cluster Service Account Dialog Box 4-62
4.39 Cluster Administrator Screen 4-63
Tables
1.1 MegaRAID 320 Storage Adapter Comparisons 1-6
2.1 Target IDs 2-6
3.1 MegaRAID 320-1 Headers and Connectors 3-3
3.2 MegaRAID 320-2 Headers and Connectors 3-5
3.3 MegaRAID 320-2E Headers and Connectors 3-7
3.4 MegaRAID 320-2X Headers and Connectors 3-9
3.5 MegaRAID 320-4X Headers and Connectors 3-10
3.6 Storage Adapter Characteristics 3-12
3.7 Storage Adapter Specifications 3-13
3.8 Array Performance Features 3-15
3.9 MegaRAID 320 Fault Tolerance Features 3-16
3.10 Maximum Power Requirements 3-17
4.1 Nodes and Storage Devices 4-13
4.2 Example IP Addresses 4-19
4.3 Nodes and Storage Devices 4-45
Chapter 1 Overview
This section provides a general overview of the MegaRAID 320 series of PCI-to-SCSI Storage Adapters with RAID control capabilities. It consists of the following sections.
Section 1.1, “Overview”
Section 1.2, “Features”
Section 1.3, “Hardware”
1.1 Overview
The MegaRAID 320 Storage Adapters are high-performance intelligent PCI-to-SCSI host adapters with RAID control capabilities. MegaRAID 320 Storage Adapters provide reliability, high performance, and fault-tolerant disk subsystem management. They are an ideal RAID solution for the internal storage of workgroup, departmental, and enterprise systems. MegaRAID 320 Storage Adapters offer a cost-effective way to implement RAID in a server.
MegaRAID 320 Storage Adapters are available with one, two, or four SCSI channels. There are two versions of the MegaRAID 320-1 Storage Adapter. The following are descriptions of the adapters:
The MegaRAID 320-1 Storage Adapter (single-channel) has one LSI53C1020 controller chip that controls one SCSI channel. The Storage Adapter has one very high-density cable interconnect (VHDCI) 68-pin external SCSI connector and one high-density cable interconnect (HDCI) 68-pin internal SCSI connector.
The MegaRAID 320-2 Storage Adapter (dual-channel) has one LSI53C1030 controller chip that controls two SCSI channels. The Storage Adapter has two VHDCI 68-pin external SCSI connectors and two HDCI 68-pin internal SCSI connectors.
The MegaRAID 320-2E (Express) Storage Adapter has one 80332 processor that controls two SCSI channels. The Storage Adapter has two VHDCI 68-pin external SCSI connectors and two HDCI 68-pin internal SCSI connectors. Note that the MegaRAID 320-2E is a PCI-Express controller.
The MegaRAID 320-2X Storage Adapter (dual-channel) has one LSI53C1030 controller chip that controls two SCSI channels. The Storage Adapter has two VHDCI 68-pin external SCSI connectors and two HDCI 68-pin internal SCSI connectors. Note that the MegaRAID 320-2X is a PCI-X controller.
The MegaRAID 320-4X Storage Adapter (quad-channel) has two LSI53C1030 controller chips that control the four SCSI channels. The Storage Adapter has four VHDCI 68-pin external SCSI connectors and two HDCI 68-pin internal SCSI connectors. Note that the MegaRAID 320-4X is a PCI-X controller.
The MegaRAID 320 Storage Adapters support a low-voltage differential (LVD) or a single-ended (SE) SCSI bus. With LVD, you can use cables up to 12 meters long. Throughput on each SCSI channel can be as high as 320 Mbytes/s.
PCI, PCI-X, and PCI-Express are I/O architectures designed to increase data transfers without slowing down the central processing unit (CPU). You can install the MegaRAID 320 PCI and PCI-X Storage Adapters in PCI-X computer systems with a standard bracket type. With these adapters in your system, you can connect SCSI devices over a SCSI bus.
PCI-Express goes beyond the PCI specification in that it is intended as a unifying I/O architecture for various systems: desktops, workstations, mobile, server, communications, and embedded devices.
Note:
For Ultra320 SCSI performance, you must connect only LVD devices to the bus. Do not connect a high voltage differential (HVD) device to the controller. Do not mix SE with LVD devices, or the bus speed will be limited to the slower SE (Ultra SCSI) SCSI data transfer rates.
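The fallback rule in this note can be sketched in a few lines of code (an illustrative sketch with hypothetical names, not controller firmware; the 40 Mbytes/s figure assumes Wide Ultra SCSI fallback speeds):

```python
# Hypothetical sketch of the bus-speed rule described in the note above:
# one SE device on the bus limits the whole channel to the slower SE
# (Ultra SCSI) transfer rate, and HVD devices must never be connected.

ULTRA320_MBS = 320   # all-LVD bus, Mbytes/s per channel
ULTRA_MBS = 40       # assumed Wide Ultra SCSI (SE) fallback, Mbytes/s

def channel_speed(device_modes):
    """Return the effective channel speed for a list of device modes."""
    if "HVD" in device_modes:
        raise ValueError("HVD devices must not be connected to this controller")
    if "SE" in device_modes:
        return ULTRA_MBS        # bus falls back to SE timing
    return ULTRA320_MBS         # all-LVD bus runs at full Ultra320 speed

print(channel_speed(["LVD", "LVD"]))   # 320
print(channel_speed(["LVD", "SE"]))    # 40
```

In other words, a single SE device is enough to slow every device on the channel, which is why the note tells you not to mix SE and LVD devices.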
1.1.1 Operating System Support
The MegaRAID 320 Storage Adapters support major operating systems, such as Windows® (2000, Server 2003, and XP), Red Hat® Linux, SuSE Linux, Novell® NetWare®, SCO OpenServer®, and UnixWare®. Other software support ensures data integrity by intelligently testing the network before completing negotiation.
Note: The MegaRAID 320 Storage Adapters do not support the Windows NT® operating system.
Note: The MegaRAID 320 Storage Adapters use Fusion-MPT™ architecture for all major operating systems for thinner drivers and better performance. Refer to the MegaRAID Device Driver Installation User's Guide for driver installation instructions.
1.1.2 Technical Support
For assistance installing, configuring, or running a MegaRAID 320 RAID controller, or for obtaining a driver for an operating system other than the ones listed in Section 1.1.1, "Operating System Support," contact LSI Technical Support:
Phone Support: 1-800-633-4545 (North America)
Web Site: http://www.lsi.com/support
1.2 Features
The following sections describe the features of the LSI MegaRAID 320 Storage Adapters.
1.2.1 Memory
The memory features include:
64 Mbytes of synchronous dynamic random access memory (SDRAM) integrated on the board on the MegaRAID 320-1
Support for up to 256 Mbytes of SDRAM; a 128- or 256-Mbyte DIMM can be installed on the MegaRAID 320-2 and 320-4X
Support for up to 256 Mbytes of SDRAM; a 256-Mbyte DIMM can be installed on the MegaRAID 320-2X
Support for up to 512 Mbytes of double data rate (DDR) SDRAM; a 128-, 256-, or 512-Mbyte DIMM can be installed on the MegaRAID 320-2E
Support for a 64-bit PCI host interface for the MegaRAID 320-2, 320-2E, 320-2X, and 320-4X (note that the 320-2X and -4X are PCI-X controllers and the 320-2E is a PCI Express controller)
1.2.2 Connectors
The MegaRAID 320 connector features include:
One internal and one external SCSI connector for the MegaRAID 320-1
Two internal and two external SCSI connectors for the MegaRAID 320-2, 320-2E, and 320-2X
Two internal and four external SCSI connectors for the MegaRAID 320-4X
1.2.3 RAID Features
The MegaRAID 320 RAID controllers support the following RAID features:
Support for RAID levels 0, 1, 5, 10, and 50
Advanced array configuration and management utilities
Online RAID level migration
No reboot necessary after expansion
Support for hard drives with capacities greater than 8 Gbytes
More than 200 Qtags per array
User-specified rebuild rate
Hardware clustering support on the board
Note: The MegaRAID 320-2, -2E, -2X, and -4X Storage Adapters support clustering; the MegaRAID 320-1 Storage Adapter does not. See Chapter 4, "Installing and Configuring Clusters," for more information about clustering.
Wide Ultra320 LVD SCSI performance up to 320 Mbytes/s
Support for up to 14 SCSI drives per channel on storage systems with SAF-TE enclosures (SCSI accessed fault-tolerant enclosures), 15 SCSI drives per channel for other configurations
32 Kbyte NVRAM for storing RAID system configuration information; the MegaRAID 320 firmware is stored in Flash ROM for easy upgrade
Battery backup for MegaRAID 320-2, -2E, -2X, and -4X
Note: Battery backup is available for the MegaRAID 320-1, 320-2, 320-2E, 320-2X, and 320-4X controllers, either through an onboard battery or daughter card. You can purchase the controller with the battery backup unit (BBU) or purchase the BBU separately.
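To make the supported RAID levels concrete, the usable capacity of an array built from identical drives can be estimated with simple arithmetic (an illustrative sketch, not part of the MegaRAID firmware; the RAID 50 case assumes equal-sized spans):

```python
# Illustrative usable-capacity arithmetic for the RAID levels this
# controller family supports (0, 1, 5, 10, and 50). Assumes identical
# drives; hypothetical helper, not derived from MegaRAID firmware.

def usable_capacity(level, n_drives, drive_gb, spans=2):
    """Usable capacity in Gbytes for n_drives of drive_gb each."""
    if level == 0:                      # striping, no redundancy
        return n_drives * drive_gb
    if level == 1:                      # mirrored pair
        return drive_gb
    if level == 5:                      # striping with one drive's worth of parity
        return (n_drives - 1) * drive_gb
    if level == 10:                     # stripe across mirrored pairs
        return (n_drives // 2) * drive_gb
    if level == 50:                     # stripe across RAID 5 spans
        per_span = n_drives // spans
        return spans * (per_span - 1) * drive_gb
    raise ValueError("unsupported RAID level")

print(usable_capacity(5, 4, 73))    # 219
print(usable_capacity(50, 6, 73))   # 292
```

For example, four 73-Gbyte drives in RAID 5 yield roughly three drives' worth of usable space, because one drive's worth of capacity is consumed by parity.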
1.2.4 Drive Roaming
Drive roaming occurs when the hard drives are moved within a configuration on the controller. When the drives are moved, the controller detects the RAID configuration from the configuration information on the drives. Configuration information is saved in both nonvolatile random access memory (NVRAM) on the MegaRAID controller and on the hard drives attached to the controller. This maintains the integrity of the data on each drive, even if the drives have changed their target ID.
Important: Before performing drive roaming, make sure to power off both your platform and your drive enclosure.
1.2.5 Drive Migration
Drive migration is the transfer of a set of hard drives in an existing configuration from one controller to a blank controller. The drives must be reinstalled in the same order as in the original configuration.
Important: Do not perform drive roaming and drive migration at the same time.
1.3 Hardware
You can install the MegaRAID 320-1 and 320-2 boards in a computer with a mainboard that has 5 V or 3.3 V, 32- or 64-bit PCI slots, the MegaRAID 320-2X and -4X in 3.3 V, 64-bit PCI or PCI-X slots, and the MegaRAID 320-2E in 3.3 V PCI-Express slots.
The following subsection describes the hardware configuration features for the MegaRAID 320 Storage Adapters.
Storage Adapter Features – Table 1.1 compares the configurations for the MegaRAID 320-1, 320-2, 320-2E, 320-2X, and 320-4X Storage Adapters.
Table 1.1 MegaRAID 320 Storage Adapter Comparisons
(Where values differ by model, they are listed in the order 320-1 / 320-2 / 320-2E / 320-2X / 320-4X.)
RAID Levels: 0, 1, 5, 10, 50 (all models)
SCSI Device Types: Synchronous or Asynchronous (all models)
Devices per SCSI Channel: Up to 15 Wide SCSI devices (all models)
SCSI Channels: 1 / 2 / 2 / 2 / 4
SCSI Data Transfer Rate: Up to 320 Mbytes/s per channel (all models)
SCSI Bus: LVD or SE (all models)
Cache Function: Write-back, Write-through, Adaptive Read Ahead, Non-Read Ahead, Read Ahead, Cache I/O, Direct I/O (all models)
Multiple Logical Drives/Arrays per Controller: Up to 40 logical drives per controller or per logical array (all models)
Online Capacity Expansion: Yes (all models)
Dedicated and Pool Hot Spare: Yes (all models)
Hot Swap Devices Supported: Yes (all models)
Non-Disk Devices Supported: Yes (all models)
Mixed Capacity Hard Disk Drives: Yes (all models)
Number of 16-Bit Internal Connectors: 1 / 2 / 2 / 2 / 2
Number of 16-Bit External Connectors: 1 / 2 / 2 / 2 / 4
Cluster Support: No / Yes / Yes / Yes / Yes
Hardware Exclusive OR (XOR) Assistance: Yes (all models)
Direct I/O: Yes (all models)
Architecture: Fusion-MPT (all models)
Chapter 2 Hardware Installation
This chapter describes the procedures for installing the MegaRAID 320-1, 320-2, 320-2E, 320-2X, and 320-4X Storage Adapters. It contains the following sections:
Section 2.1, “Requirements”
Section 2.2, “Quick Installation”
Section 2.3, “Detailed Installation”
Section 2.4, “SCSI Device Cables”
Section 2.5, “Replacing a Failed Controller with Data in the TBBU”
Section 2.6, “After Installing the Storage Adapter”
2.1 Requirements
The following items are required to install a MegaRAID 320 Storage Adapter:
A MegaRAID 320-1, 320-2, 320-2E, 320-2X, or 320-4X Storage Adapter
A host computer with an available 32- or 64-bit, 3.3 V PCI or PCI-X expansion slot or a PCI-Express slot
The MegaRAID Universal Software Suite CD, which contains drivers and documentation
The necessary internal and/or external SCSI cables
Ultra, Ultra2, Ultra160, or Ultra320 SCSI hard disk drives (although backward compatible, SCSI uses the speed of the slowest device on the bus)
LSI strongly recommends using an uninterruptible power supply (UPS).
2.2 Quick Installation
The following steps are for quick Storage Adapter installation. These steps are for experienced computer users/installers. Section 2.3, "Detailed Installation," contains the steps for all others to follow.
Step 1. Turn power off to the server and all hard disk drives, enclosures, and system components, and remove the PC power cord.
Step 2. Open the cabinet of the host system by following the instructions in the host system technical documentation.
Step 3. Determine the SCSI ID and SCSI termination requirements.
Step 4. Install the MegaRAID 320 Storage Adapter in the server, connect SCSI devices to it, and set termination correctly on the SCSI channel(s). Ensure that the SCSI cables you use conform to all SCSI specifications.
Step 5. Perform a safety check.
Ensure that all cables are properly attached.
Ensure that the MegaRAID 320 Storage Adapter is properly installed.
Close the cabinet of the host system.
Step 6. Turn power on after completing the safety check.
2.3 Detailed Installation
This section provides detailed instructions for installing a MegaRAID 320 Storage Adapter.
Step 1. Unpack the Storage Adapter
Unpack and remove the Storage Adapter. Inspect it for damage. If it appears damaged, or if any items listed below are missing, contact your LSI support representative. The MegaRAID 320 Storage Adapter is shipped with:
– the MegaRAID Universal Software Suite CD, which contains MegaRAID drivers for supported operating systems, an electronic version of this User's Guide, and other related documentation
– a license agreement
Step 2. Power Down the System
Turn off the computer and remove the AC power cord. Remove the system’s cover. Refer to the system documentation for instructions.
Step 3. Check the Jumpers
Ensure that the jumper settings on your Storage Adapter are correct. Refer to Chapter 3, "MegaRAID 320 Storage Adapter Characteristics," for diagrams of the Storage Adapters with their jumpers and connectors.
Step 4. Install the MegaRAID 320 Storage Adapter
Select a PCI, PCI-X, or PCI-Express slot, and align the Storage Adapter PCI bus connector to the slot. Press down gently but firmly to ensure that the card is properly seated in the slot, as shown in Figure 2.1. Figure 2.2 shows installation of the PCI-Express controller. Then screw the bracket into the computer chassis.
Figure 2.1 Inserting the MegaRAID 320 Card in a PCI Slot
Figure 2.2 Inserting the MegaRAID 320-2E Card in a PCI-Express Slot
Step 5. Set the Target IDs
Set target identifiers (TIDs) on the SCSI devices. Each device in a channel must have a unique TID. Provide unique TIDs for non-disk devices (CD-ROM or tapes), regardless of the channel where they are connected. The MegaRAID 320 Storage Adapter automatically occupies TID 7, which is the highest priority. The arbitration priority for a SCSI device depends on its TID.
Table 2.1 Target IDs

Priority   Highest ............................. Lowest
TID        7  6  5  ...  2  1  0  15  14  ...  9  8
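The arbitration order in Table 2.1 can be expressed programmatically. The following Python sketch (an illustration, not part of the original manual) ranks wide-SCSI target IDs by bus arbitration priority, with TID 7 highest, then 6 down to 0, then 15 down to 8:

```python
def arbitration_rank(tid: int) -> int:
    """Return an arbitration rank for a wide (16-bit) SCSI target ID.

    Rank 0 is the highest priority. TIDs 7..0 outrank TIDs 15..8.
    """
    if not 0 <= tid <= 15:
        raise ValueError("wide SCSI TIDs are 0-15")
    # IDs 7..0 map to ranks 0..7; IDs 15..8 map to ranks 8..15.
    return (7 - tid) if tid <= 7 else (8 + 15 - tid)

# Sort a set of device TIDs from highest to lowest arbitration priority.
devices = [0, 3, 8, 15, 7]
print(sorted(devices, key=arbitration_rank))  # [7, 3, 0, 15, 8]
```

Note that the adapter itself occupies TID 7, so it always wins arbitration.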
Step 6. Connect SCSI Devices to the Storage Adapter
Use SCSI cables to connect SCSI devices to the Storage Adapter. Refer to Section 2.4.1, “Internal SCSI Cables” and
Section 2.4.2, “External SCSI Cables” for SCSI cable
information. Refer to Section 2.4.3, “Connecting Internal SCSI
Devices” and Section 2.4.4, “Connecting External SCSI Devices” for details on connecting the controller to internal and
external devices.
To connect the SCSI devices:
a. Disable termination on any SCSI device that does not sit at
the end of the SCSI bus.
b. Configure all SCSI devices to supply TERMPWR.
c. Connect cables to the SCSI devices. Refer to the following
table for maximum cable lengths.
Device                     Cable Length in Meters
Fast SCSI (10 Mbytes/s)    3
SE SCSI                    3
Ultra SCSI                 1.5
LVD                        12

You can connect up to 15 Ultra SCSI devices to each SCSI channel.
System throughput problems can occur if SCSI cables are not the correct type. To minimize the potential for problems:
– use cables no longer than 12 meters for Ultra160 and Ultra320 devices (it is better to use shorter cables, if possible)
– use the shortest SCSI cables for SE SCSI devices (no longer than 3 meters for Fast SCSI, no longer than 1.5 meters for an 8-drive Ultra SCSI system, and no longer than 3 meters for a 6-drive Ultra SCSI system)
– use active termination
– avoid clustering the cable nodes
– keep the cable stub length no greater than 0.1 meters (4 inches)
– use high-impedance cables
– route SCSI cables carefully
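The cable-length limits above can be captured in a small validation helper. This Python sketch (ours, not an LSI tool) flags cable runs that exceed the documented maximums, including the tighter limit for an 8-drive Ultra SCSI system:

```python
# Maximum cable lengths in meters, per the cable-length table above.
MAX_CABLE_M = {
    "fast": 3.0,   # Fast SCSI (10 Mbytes/s)
    "se": 3.0,     # single-ended SCSI
    "ultra": 1.5,  # Ultra SCSI with 8 drives (3.0 m allowed for <= 6 drives)
    "lvd": 12.0,   # LVD (Ultra160/Ultra320)
}

def cable_ok(bus_type: str, length_m: float, drives: int = 8) -> bool:
    """Return True if a cable run is within the documented limit."""
    limit = MAX_CABLE_M[bus_type]
    if bus_type == "ultra" and drives <= 6:
        limit = 3.0  # relaxed limit for a 6-drive Ultra SCSI system
    return length_m <= limit

print(cable_ok("lvd", 10))        # True: within the 12 m LVD limit
print(cable_ok("ultra", 2.0, 8))  # False: 8-drive Ultra SCSI limited to 1.5 m
```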
Step 7. Set SCSI Termination
The SCSI bus is an electrical transmission line and must be terminated properly to minimize reflections and losses. Set termination at each end of the SCSI cable(s).
For a disk array, set SCSI bus termination so that removing or adding a SCSI device does not disturb termination. An easy way to do this is to connect the Storage Adapter to one end of the SCSI cable and to connect an external terminator module at the other end of the cable. You can then connect SCSI disk drives to the connectors between the two ends of the cable. If necessary, disable termination on the SCSI devices. (This is not necessary for Ultra320 and Ultra160 SCSI drives.)
Set the termination so that SCSI termination and TermPWR are intact when any disk drive is removed from a SCSI channel, as shown in Figure 2.3.
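The rule in Step 7 — terminate exactly the two physical ends of the bus and nothing in between — can be sketched as a quick configuration check. The helper below is a hypothetical illustration, not an LSI utility:

```python
def termination_valid(chain: list[tuple[str, bool]]) -> bool:
    """Check termination along one SCSI cable.

    chain is an ordered list of (device_name, terminated) pairs,
    adapter first. The configuration is valid when both ends are
    terminated and no device in the middle is.
    """
    if len(chain) < 2:
        return False
    ends_ok = chain[0][1] and chain[-1][1]
    middle_ok = not any(term for _, term in chain[1:-1])
    return ends_ok and middle_ok

# Adapter at one end, external terminator at the other, drives between:
bus = [("adapter", True), ("id0", False), ("id1", False), ("terminator", True)]
print(termination_valid(bus))  # True
```

Using an external terminator module this way means drives can be added or removed without disturbing termination, as the text describes.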
Figure 2.3 Terminating an Internal SCSI Disk Array
(Callouts: host computer; MegaRAID 320 at SCSI ID 7; drives ID0 (boot drive), ID1, and ID2 with no termination; terminator at the far end of the cable)
Step 8. Power On Host System
Replace the computer cover, and reconnect the AC power cords. Turn power on to the host computer. Ensure that the SCSI devices are powered up at the same time as, or before, the host computer. If the computer is powered up before a SCSI device, the device might not be recognized.
During boot, a BIOS message appears.
The firmware takes several seconds to initialize. During this time, the Storage Adapter scans the SCSI channel(s).
The MegaRAID 320 BIOS Configuration utility prompt times out after several seconds. The second portion of the BIOS message displays the MegaRAID 320 Storage Adapter number, firmware version, and cache SDRAM size. The numbering of the controllers follows the PCI slot scanning order used by the host mainboard.
If you want to run the MegaRAID Configuration utility or the WebBIOS utility at this point, press the appropriate keys when this message appears:
Press <CTRL><M> to run MegaRAID Configuration Utility, or Press <CTRL><H> for WebBIOS
2.4 SCSI Device Cables
For reliable Ultra320 operation, be sure to use an Ultra320-rated SCSI cable. The internal Ultra320 SCSI cable has built-in low voltage differential (LVD) and single-ended termination. This built-in feature is included because most LVD SCSI hard disk drives are not made with on-board low voltage differential termination.
2.4.1 Internal SCSI Cables
You can connect all internal SCSI devices to the Storage Adapter with an unshielded, twisted pair, 68-pin ribbon cable. Some 68-pin internal cables come with a low voltage differential and single-ended terminator on one end, which must be farthest from the host adapter. Figure 2.4 and
Figure 2.5 show internal cables with and without a terminator.
Figure 2.4 SCSI Cable – 68-Pin High Density with Terminator

Figure 2.5 SCSI Cable – 68-Pin High Density without Terminator
2.4.2 External SCSI Cables
You must connect all external SCSI devices to the Storage Adapter with shielded cables. Figures 2.6 through 2.8 are examples of external SCSI cables. Select the correct 68-pin cable needed to connect your devices.
Figure 2.6 SCSI Cable – 68-Pin VHDCI to 68-Pin VHDCI
Figure 2.7 SCSI Cable – 68-Pin VHDCI to 68-Pin HD
Figure 2.8 SCSI Cable – 68-Pin HD to 68-Pin HD
2.4.3 Connecting Internal SCSI Devices
This subsection provides step-by-step instructions for connecting internal SCSI devices. The figures show the MegaRAID 320-2 Storage Adapter, which has two internal connectors and two external connectors. Refer to
Section 2.4.1, “Internal SCSI Cables,” for examples of internal cables.
Perform the following steps to connect devices.
Step 1. Plug the 68-pin connector on the end of the SCSI ribbon cable
into the internal connector on the host adapter. Figure 2.9 shows how to do this.
Figure 2.9 Connecting an Internal SCSI Cable to Host Adapter
Step 2. Plug the 68-pin connector on the other end of the internal SCSI
ribbon cable into the SCSI connector on the internal SCSI device, as shown in Figure 2.10.
Figure 2.10 Connecting Multiple Internal SCSI Devices
Step 3. If you have another internal SCSI device, connect the internal SCSI ribbon cable to it. Figure 2.10 shows how to do this. You can connect other devices if the cable has more connectors. The Ultra320 SCSI host adapters support up to 15 SCSI devices connected to each SCSI channel.
Step 4. Be sure that termination is enabled at the end of the cable that is farthest from the SCSI host adapter. Refer to Section 2.3, "Detailed Installation," page 2-3, for details on SCSI bus termination.
2.4.4 Connecting External SCSI Devices
This subsection provides step-by-step instructions for connecting external SCSI devices. Refer to Section 2.4.2, “External SCSI Cables,” for examples of external cables.
Step 1. Plug the 68-pin connector on one end of a shielded external
SCSI cable into the external SCSI connector on the host adapter. This connector is exposed on the back panel of your computer.
Step 2. Plug the 68-pin connector on the other end of the shielded
external SCSI cable into the SCSI connector on the first external SCSI device.
Figure 2.11 shows how to connect one external SCSI device. If
you have the correct cable, it matches the external connector.
Figure 2.11 Connecting One External SCSI Device
Step 3. Connect any additional SCSI devices to one another with shielded external SCSI cables. You need a separate SCSI cable for each additional device.
Figure 2.12 shows how to connect multiple external SCSI devices.
Figure 2.12 Connecting Multiple External SCSI Devices
(Callouts: adapter automatically terminated; enable termination on the device at the end of the bus; disable termination on all devices not at the end of the bus)
Step 4. Be sure that termination is enabled only on the last external
SCSI device as shown in Figure 2.12. Refer to Section 2.3,
“Detailed Installation,” page 2-3, for details on SCSI bus
termination.
2.5 Replacing a Failed Controller with Data in the TBBU
The MegaRAID Transportable Battery Backup Module (TBBU) is a cache memory module with an integrated battery pack. The battery pack keeps the cache module powered if power is unexpectedly interrupted while cached data is still present. If the power failure is caused by failure of the MegaRAID controller itself, the TBBU can be moved to a new controller and the data recovered. The replacement controller must have a cleared configuration.
Perform the following steps to replace a failed controller with data in the transportable battery backup unit.
Step 1. Power-down the system and drives.
Step 2. Remove the failed controller from the system.
Step 3. Remove the TBBU from the failed controller.
Step 4. Insert the TBBU into the replacement controller.
Step 5. Insert the replacement controller into the system.
Step 6. Power-on the system.
The controller then reads the disk configuration into NVRAM and flushes cache data to the logical drives.
Resolving a Configuration Mismatch – If the replacement controller has a previous configuration, a message displays during the power-on self-test (POST) stating that there is a configuration mismatch. A configuration mismatch occurs when the configuration data in the NVRAM and the hard disk drives are different. You need to update the configuration data in the NVRAM with the data from the hard disk drive.
Perform the following steps to resolve the mismatch.
Step 1. Press <Ctrl> <M> when prompted during bootup to access the
BIOS Configuration Utility.
Step 2. Select Configure -> View/Add Configuration.
This gives you the option to view the configuration on both the NVRAM and the hard disk drive.
Step 3. Select the configuration on disk.
Step 4. Press <ESC> and select YES to update the NVRAM.
Step 5. Exit and reboot.
2.6 After Installing the Storage Adapter
After Storage Adapter installation, you must configure the Storage Adapter and install the operating system driver. The MegaRAID Configuration Software User’s Guide instructs you on the configuration options and how to set them on your Storage Adapter. The MegaRAID Device Driver Installation User’s Guide provides detailed installation instructions for operating system drivers.
Chapter 3 MegaRAID 320 Storage Adapter Characteristics
This chapter describes the characteristics of the LSI MegaRAID 320 Storage Adapters. This chapter contains the following sections:
Section 3.1, “MegaRAID 320 Storage Adapter Family”
Section 3.2, “MegaRAID 320 Storage Adapter Characteristics”
Section 3.3, “Technical Specifications”
3.1 MegaRAID 320 Storage Adapter Family
PCI is a high-speed standard local bus for interfacing I/O components to the processor and memory subsystems in a high-end PC. The component height on the top and bottom of the Ultra320 SCSI host adapters follows the PCI Local Bus Specification, Revision 2.2, and PCI-X Addendum to the PCI Local Bus Specification, Revision 1.0a. The MegaRAID 320 Storage Adapters are used in PCI-X and PCI computer systems with PCI standard and PCI low-profile bracket types.
The MegaRAID 320-2E controller is used in a system with a PCI-Express slot. PCI-Express goes beyond the PCI specification in that it is intended as a unifying I/O architecture for various systems: desktops, workstations, mobile, server, communications, and embedded devices.
Table 3.7 lists the features for the MegaRAID 320 Storage Adapters.
3.1.1 Single-Channel Storage Adapter

The MegaRAID 320-1 is a single-channel Ultra320 SCSI-to-PCI Storage Adapter that supports one Ultra320 SCSI channel. The MegaRAID SCSI channel interface is made through connectors J1 and J7.

Figure 3.1 and Table 3.1 show the connectors and headers on the MegaRAID 320-1 Storage Adapter.

Figure 3.1 MegaRAID 320-1 Card Layout
(Callouts: headers J1–J10; optional backup battery unit connector; internal high-density 68-pin SCSI connector; external very high-density 68-pin SCSI connector)
Table 3.1 MegaRAID 320-1 Headers and Connectors

J1, Internal SCSI Connector (68-pin connector): Internal high-density SCSI bus connector.
J2, Dirty Cache LED (2-pin header): Connector for an LED mounted on the system enclosure. The LED is lit when the data in the cache has not yet been written to the storage device.
J3, Clear EPROM (2-pin header): Deletes the configuration data in the erasable programmable read-only (EPROM) memory.
J4, On-Board BIOS Enable (2-pin header): No jumper: MegaRAID on-board BIOS enabled (default). Jumpered: MegaRAID on-board BIOS disabled.
J5, SCSI Activity LED (2-pin header): Connector for enclosure LED to indicate data transfers. Connection is optional.
J6, I2C Connector (4-pin connector): Reserved for LSI Logic internal use.
J7, External SCSI Connector (68-pin connector): External very high-density SCSI bus connector.
J8, BBU Daughter Card Connector (40-pin connector): Connector for optional backup battery unit (BBU) located on a daughter card.(1)
J9, Termination Power Enable (2-pin header): Jumpered: On-board termination power enabled (default - do not change).
J10, SCSI Bus Termination Enable (3-pin header): Jumper on pins 1-2: Software uses drive detection to control SCSI termination (default - do not change). Jumper on pins 2-3: On-board SCSI termination disabled. No jumper: On-board SCSI termination enabled.

1. The MegaRAID 320-1 does not have an alarm integrated onto the board. For an alarm, the controller requires a daughter card with integrated alarm. If you order the daughter card for battery backup, it should have the alarm on it.
3.1.2 Dual-Channel Storage Adapters

The MegaRAID 320-2, MegaRAID 320-2X, and MegaRAID 320-2E are dual-channel Ultra320 SCSI Storage Adapters; each supports two Ultra320 SCSI channels. The 320-2 attaches to PCI, the 320-2X to PCI-X, and the 320-2E to PCI-Express.

Figure 3.2 and Table 3.2 show the connectors and headers on the MegaRAID 320-2 Storage Adapter. Figure 3.3 and Table 3.3 show the connectors and headers on the MegaRAID 320-2E Storage Adapter. Figure 3.4 and Table 3.4 show the connectors and headers on the MegaRAID 320-2X Storage Adapter.

Figure 3.2 MegaRAID 320-2 Card Layout
(Callouts: headers J1–J24; internal high-density 68-pin SCSI connectors for channels 0 and 1; external very high-density 68-pin SCSI connectors for channels 0 and 1; battery backup unit)
Table 3.2 MegaRAID 320-2 Headers and Connectors

J1, I2C Connector (4-pin connector): Reserved for LSI Logic internal use.
J2, SCSI Activity LED (4-pin header): Connector for LED on enclosure to indicate data transfers. Optional.
J3, Write Pending Indicator (Dirty Cache LED) (2-pin header): Connector for enclosure LED to indicate when data in the cache has yet to be written to the device. Optional.
J4, SCSI Termination Enable Channel 0, and J5, SCSI Termination Enable Channel 1 (3-pin headers): Jumper on pins 1-2: Software uses drive detection to control SCSI termination (refer to J17 and J18). This is the default. Jumper on pins 2-3: On-board SCSI termination disabled. No jumper: On-board SCSI termination enabled. Note: Leave J4 and J5 at the default setting to allow the MegaRAID SCSI 320-2 to automatically set its own SCSI termination.
J6, DIMM socket: The MegaRAID 320-2 supports the following sizes of SDRAM: 128 and 256 Mbytes.
J7, Internal SCSI Channel 0 Connector (68-pin connector): Internal high-density SCSI bus connector.
J8, Internal SCSI Channel 1 Connector (68-pin connector): Internal high-density SCSI bus connector.
J9, External SCSI Channel 0 Connector (68-pin connector): External very high-density SCSI bus connector.
J19, External SCSI Channel 1 Connector (68-pin connector): External very high-density SCSI bus connector.
J10, Battery Connector(1) (3-pin header): Connector for an optional battery pack. Pin-1 -BATT Terminal (black wire); Pin-2 Thermistor (white wire); Pin-3 +BATT Terminal (red wire).
J11, NVRAM Clear (2-pin connector): Clears the contents of the nonvolatile random access memory.
J12, NMI (2-pin connector): Nonmaskable interrupt.
J13, 32/64-bit secondary PCI selection (3-pin connector): Reserved for LSI Logic internal use.
J14, Firmware Initialization Mode 0 or 3 Select (2-pin connector): Reserved for LSI Logic internal use.
J15, Serial Debug Interface (3-pin connector): Reserved for LSI Logic internal use.
J16, On-board BIOS Enable (2-pin header): No jumper: BIOS enabled (default). Jumpered: BIOS disabled.
J17, Termination Power Enable Channel 0, and J18, Termination Power Enable Channel 1 (2-pin headers): Jumpered: TERMPWR is enabled from the PCI bus (default). No jumper: TERMPWR is enabled from the SCSI bus (refer to J4 and J5).
J20, Control Related to RUBI (3-pin connector): Reserved for LSI Logic internal use.
J21, RUBI PCI Interrupts Steering Interface (3-pin connector): Reserved for LSI Logic internal use.
J24, RUBI PCI Interrupts Steering Interface (3-pin connector): Reserved for LSI Logic internal use.
J22, Load Sharing Enable (2-pin connector): Reserved for LSI Logic internal use.
J23, EEPROM Access Connector (2-pin connector): Reserved for LSI Logic internal use.

1. The battery connector is not shipped connected. It is recommended that you connect the cable on the battery pack to J10 before you install the card.

Figure 3.3 MegaRAID 320-2E Card Layout
(Callouts: headers J1–J7, J11, J15, J16; internal high-density 68-pin SCSI connectors J9 (channel 0) and J10 (channel 1); external very high-density 68-pin SCSI connectors J12 (channel 0) and J14 (channel 1); DIMM socket U8)
Table 3.3 MegaRAID 320-2E Headers and Connectors

J1, Write Pending Indicator (Dirty Cache LED) (2-pin header): Connector for enclosure LED to indicate when data in the cache has yet to be written to the device. Optional.
J2, On-board BIOS Enable (2-pin header): No jumper installed enables the on-board BIOS. This is the default. Jumper pins 1-2 to disable the on-board BIOS.
J3, SCSI Drive Activity Header (2-pin header): When lit, indicates that the SCSI drive is active.
J4, I2C Header (3-pin header): Reserved for LSI Logic internal use.
J5, SCSI Termination Enable Channel 0, and J6, SCSI Termination Enable Channel 1 (3-pin headers): Jumper pins 1-2 to enable software control of SCSI termination through drive detection. This is the default. Jumper pins 2-3 to disable on-board SCSI termination. No jumper installed enables on-board SCSI termination.
J7, Serial Port (RS232) (3-pin header): Connector is for diagnostic purposes. Pin 1: RXD (Receive Data); Pin 2: TXD (Transmit Data); Pin 3: GND (Ground).
J9, Internal SCSI Channel 0 connector (68-pin connector): Internal high-density SCSI bus connector. Connection is optional.
J10, Internal SCSI Channel 1 connector (68-pin connector): Internal high-density SCSI bus connector. Connection is optional.
J11, Mode Select (2-pin header): Reserved for LSI Logic internal use.
J12, External SCSI Channel 0 connector (68-pin connector): External very high-density SCSI bus connector. Connection is optional.
J14, External SCSI Channel 1 connector (68-pin connector): External very high-density SCSI bus connector. Connection is optional.
J15, Termination Power (2-pin connector)
J16, Termination Power (2-pin connector)
U8, DIMM Socket: The MegaRAID 320-2E supports the following sizes of SDRAM: 128, 256, and 512 Mbytes.
Figure 3.4 MegaRAID 320-2X Card Layout
(Callouts: headers J1, J2, J5–J14, J16–J19; internal high-density 68-pin SCSI connectors for channels 0 and 1; external very high-density 68-pin SCSI connectors for channels 0 and 1; DIMM socket U6)
Table 3.4 MegaRAID 320-2X Headers and Connectors

J1, Termination Enable Channel 0, and J2, Termination Enable Channel 1 (3-pin headers): Jumper on pins 1-2: Software uses drive detection to control SCSI termination (default: do not change). Jumper on pins 2-3: On-board SCSI termination disabled. No jumper: On-board SCSI termination enabled.
J5, Internal SCSI Channel 0 Connector (68-pin connector): Internal high-density SCSI bus connector.
J6, Internal SCSI Channel 1 Connector (68-pin connector): Internal high-density SCSI bus connector.
J7, External SCSI Channel 0 Connector (68-pin connector): External very high-density SCSI bus connector.
J8, I2C Header (4-pin header): Reserved for LSI Logic internal use.
J9, Serial Debug Interface (3-pin connector): Reserved for LSI Logic internal use.
J10, Mode 0 Initialization Header (2-pin connector): Reserved for LSI Logic internal use.
J11, On-Board Cache LED (2-pin header): LED glows when the on-board cache contains data and a write from the cache to the hard drives is pending.
J12, BBU Daughter Card (40-pin header): Connector for an optional back-up battery pack.
J13, SCSI Activity LED (2-pin header): Connector for enclosure LED to indicate data transfers. Connection is optional.
J14, External SCSI Channel 1 Connector (68-pin connector): External very high-density SCSI bus connector.
J16, EEPROM Access Connector (2-pin connector): Reserved for LSI Logic internal use.
J17, Termination Power Enable Channel 0, and J18, Termination Power Enable Channel 1 (2-pin headers): Jumpered: MegaRAID 320-2X supplies termination power. No jumper: SCSI bus provides termination power.
J19, On-Board BIOS Enable (4-pin header, two rows of two pins each): No jumper: BIOS enabled (default). Jumpers on pins 1/3: NVSRAM clear. Jumper on pins 2/4: BIOS disable.
U6, DIMM Socket: The MegaRAID 320-2X supports up to 512 Mbytes of 333 MHz unbuffered DDR ECC SDRAM, in a x72 configuration.
3.1.3 Quad-Channel Storage Adapter

The MegaRAID 320-4X is a quad-channel Ultra320 SCSI-to-PCI-X Storage Adapter that supports four Ultra320 SCSI channels. Figure 3.5 and Table 3.5 show the connectors and headers on the MegaRAID 320-4X Storage Adapter.

Figure 3.5 MegaRAID 320-4X Card Layout
(Callouts: headers J1–J17, J19–J21, J23, J24; internal high-density 68-pin SCSI connectors for channels 0 and 1; external very high-density 68-pin SCSI connectors for channels 0/1 and 2/3)
Table 3.5 MegaRAID 320-4X Headers and Connectors

J1, SCSI Activity LED (4-pin header): Connector for LED on enclosure to indicate data transfers. Optional.
J2, Internal SCSI Channel 1 Connector (68-pin connector): Internal high-density SCSI bus connector.
J3, Internal SCSI Channel 0 Connector (68-pin connector): Internal high-density SCSI bus connector.
J4, DDR DIMM Socket (184-pin socket): Socket for mounting DDR SDRAM DIMM. The MegaRAID 320-4X supports 256 Mbytes of 333 MHz DDR ECC SDRAM, in a x72 configuration.
J5, External SCSI Channel 0/1 connectors, side-by-side (68-pin connectors): External very high-density SCSI bus connectors.
J21, External SCSI Channel 2/3 connectors, side-by-side (68-pin connectors): External very high-density SCSI bus connectors.
J6, J8, J10, and J13, Termination Enable Channels 1, 2, 3, and 0 (3-pin headers): Jumper on pins 1-2: Software uses drive detection to control SCSI termination (default: do not change). Jumper on pins 2-3: On-board SCSI termination disabled. No jumper: On-board SCSI termination enabled.
J7, J9, J11, and J14, Termination Power Enable Channels 1, 2, 3, and 0 (2-pin headers): Jumper installed enables TermPWR from the SCSI bus to the appropriate SCSI channel.
J12, I2C Connector (4-pin connector): Reserved for LSI Logic internal use.
J15, Battery Connector(1) (3-pin header): Connector for an optional battery pack. Pin-1 -BATT Terminal (black wire); Pin-2 Thermistor (white wire); Pin-3 +BATT Terminal (red wire).
J16, EEPROM Access Connector (2-pin header): Reserved for LSI Logic internal use.
J17, Write Pending Indicator (Dirty Cache LED) (2-pin header): Connector for enclosure LED to indicate when data in the cache has yet to be written to the device. Optional.
J19, Serial Interface for Code Debugging (3-pin connector): Reserved for LSI Logic internal use.
J20, NVRAM Clear (2-pin connector): Used to clear the contents of the nonvolatile random access memory.
J23, 80321 Initialization Mode Select (2-pin connector): Reserved for LSI Logic internal use.
J24, On-Board BIOS Enable (2-pin header): When open, optional system BIOS is enabled; when closed, it is disabled. Status of this jumper can be read through bit 0 at local CPU address 0x9F84.0000.

1. The battery connector is not shipped connected. It is recommended that you connect the cable on the battery pack to J15 before you install the card.
3.2 MegaRAID 320 Storage Adapter Characteristics

Table 3.6 shows the general characteristics for all MegaRAID 320 Storage Adapters.

Table 3.6 Storage Adapter Characteristics

Flash ROM(1)                  Yes
Serial EEPROM(2)              Yes
Signaling                     16-bit SE and LVD/SE
Ultra320 SCSI Data Transfers  Up to 320 Mbytes/s over LVD interfaces, as well as Fast,
                              Ultra, Ultra2, and Ultra160 speeds; synchronous offsets up to 62
SCSI Features                 Plug and Play, Scatter/Gather, Activity LED
SCSI Termination              Active, Single Ended, or LVD

1. For boot code and firmware
2. For BIOS configuration storage

Each MegaRAID 320 Storage Adapter ensures data integrity by intelligently validating the compatibility of the SCSI domain. The Storage Adapters use the Fusion-MPT architecture, which allows for thinner drivers and better performance.

3.3 Technical Specifications

The design and implementation of the MegaRAID 320 Storage Adapters minimizes electromagnetic emissions, susceptibility to radio frequency energy, and the effects of electrostatic discharge. The Storage Adapters carry the CE mark, C-Tick mark, FCC Self-Certification logo, Canadian Compliance Statement, Korean MIC, Taiwan BSMI, and Japan VCCI, and they meet the requirements of CISPR Class B.
3.3.1 Storage Adapter Specifications
Ta bl e 3 .7 lists the specifications for the MegaRAID 320-1, 320-2, 320-2E,
320-2X, and 320-4X Storage Adapters.
Table 3.7 Storage Adapter Specifications
MegaRAID
Specification
Processor (PCI Controller)
Operating Volta ge
Card Size Low-profile,
Array Interface to Host
PCI Bus Data Transfer Rate
Serial Port 3-pin RS232C-
320-1
Intel GC80302 64-bit RISC processor @66MHz
+ 3. 3 V, + 5 V, +12 V, −12 V
Half-length PCI Adapter card size (6.875" x 4.2")
PCI Rev 2.2 PCI Rev 2.2 PCI-Express Rev
Up to 33 Mbytes/s at 64-bit/66 MHz
compatible connector (for manufacturing use only)
MegaRAID 320-2
Intel GC80303 64-bit RISC processor @ 100 MHz
+ 3. 3 V, + 5 V, +12 V, −12 V
Half-length PCI Adapter card size (6.875" x 4.2")
Up to 33 Mbytes/s at 64-bit/66 MHz
3-pin RS232C­compatible connector (for manufacturing use only)
MegaRAID 320-2E
Intel 80332 64-bit RISC processor @500MHz
+3.3 V, +12 V +3.3 V, +5 V,
Half-length PCI Adapter card size (6.875" x 4.2")
1.0a
2 Gbytes/s Up to 1064
3-pin RS232C­compatible connector (for manufacturing use only)
MegaRAID 320-2X
Intel GC80321 64-bit RISC processor @400MHz
+12 V, −12 V
Half-length PCI Adapter card size (6.875" x 4.2")
PCI Rev 2.2, PCI-X Rev 1.0a
Mbytes/s at 64-bit/133 MHz
3-pin RS232C­compatible connector (for manufacturing use only)
MegaRAID 320-4X
Intel GC80321 64-bit RISC processor @400MHz
+3.3 V, +5 V, +12 V, −12 V
Full-length PCI Adapter card size (12.3" x 4.2")
PCI Rev 2.2, PCI-X Rev 1.0a
Up to 1064 Mbytes/s at 64-bit/133 MHz
3-pin RS232C­compatible connector (for manufacturing use only)
SCSI Controller(s)
SCSI Connectors
One LSI53C1020 Single SCSI controller
One 68-pin internal high-density connector for SCSI devices. One very high-density 68-pin external connector for Ultra320 and Wide SCSI.
Technical Specifications 3-13
Copyright © 2003-2008 by LSI Corporation. All rights reserved.
One LSI53C1030 Dual SCSI controller
Two 68-pin internal high-density connectors for SCSI devices. Two ve r y high-density 68-pin external connectors for Ultra320 and Wide SCSI.
One LSI53C1030 Dual SCSI controller
Two 68-pin internal high-density connectors for SCSI devices. Two very high­density 68-pin external connectors for Ultra320 and Wide SCSI.
One LSI53C1030 Dual SCSI controller
Two 68-pin internal high-density connectors for SCSI devices. Two ve r y high-density 68-pin external connectors for Ultra320 and Wide SCSI.
Two LSI53C1030 Dual SCSI controllers
Two 68-pin internal high-density connectors for SCSI devices. Four very high-density 68-pin external connectors for Ultra320 and Wide SCSI.
Page 54
Table 3.7 Storage Adapter Specifications (Cont.)
Specification
SCSI Bus Termination
Termination Disable
Cache Configuration
Double-Sided Dual Inline Memory Modules (DIMMs)
Size of Flash ROM for Firmware
MegaRAID 320-1
Active, single-ended or LVD
Automatic through cable and device detection
Integrated 64 Mbytes 100 MHz ECC SDRAM
No Yes (128 or
1 Mbyte flash ROM
MegaRAID 320-2
Active, single-ended or LVD
Automatic through cable and device detection
Up to 256 Mbytes 100 MHz ECC SDRAM
256 Mbytes)
1 Mbyte flash ROM
MegaRAID 320-2E
Active, single-ended or LVD
Automatic through cable and device detection
Up to 512 Mbytes 167 MHz DDR ECC SDRAM
No Yes (512 Mbytes) Yes (256 Mbytes)
MegaRAID 320-2X
Active, single-ended or LVD
Automatic through cable and device detection
512 Mbytes 333 MHz unbuffered DDR ECC SDRAM
MegaRAID 320-4X
Active, single-ended or LVD
Automatic through cable and device detection
256 Mbytes 333 MHz DDR ECC SDRAM
2 Mbyte flash ROM
1 Mbyte flash ROM
1 Mbyte flash ROM
Nonvolatile Random Access Memory (NVRAM)
32 Kbytes for storing RAID configuration
32 Kbytes for storing RAID configuration
32 Kbytes for storing RAID configuration
32 Kbytes for storing RAID configuration
32 Kbytes for storing RAID configuration
3-14 MegaRAID 320 Storage Adapter Characteristics
Copyright © 2003-2008 by LSI Corporation. All rights reserved.
3.3.2 Array Performance Features
Table 3.8 shows the MegaRAID 320 array performance features.
Table 3.8 Array Performance Features

PCI Host Data Transfer Rate: 533 Mbytes/s (320-1, 320-2); 2 Gbytes/s (320-2E); 1064 Mbytes/s (320-2X, 320-4X)

Drive Data Transfer Rate: 320 Mbytes/s (all adapters)

Maximum Scatter/Gathers: 26 elements (all adapters)

Maximum Size of I/O Requests: 6.4 Mbytes in 64 Kbyte stripes (all adapters)

Maximum Queue Tags per Drive: As many as the drive can accept (all adapters)

Stripe Sizes: 8, 16, 32, 64, or 128 Kbytes (all adapters)

Maximum Number of Concurrent Commands: 255 (all adapters)

Support for Multiple Initiators: Yes (all adapters)
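The maximum I/O request size in Table 3.8 tracks the stripe size: 6.4 Mbytes corresponds to 100 stripes of 64 Kbytes. The short sketch below illustrates that relationship; the 100-stripes-per-request cap and the helper name are inferred from the table for illustration only, not an official firmware formula.

```python
# Hypothetical helper (not part of any LSI tool): derive the maximum I/O
# request size from the stripe size, assuming the firmware caps a request
# at 100 stripes (inferred from "6.4 Mbytes in 64 Kbyte stripes").
def max_io_kbytes(stripe_kb, stripes_per_request=100):
    return stripe_kb * stripes_per_request

assert max_io_kbytes(64) == 6400    # 6.4 Mbytes, matching Table 3.8
assert max_io_kbytes(128) == 12800  # 12.8 Mbytes at 128 Kbyte stripes
```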
3.3.3 Fault Tolerance
Table 3.9 shows the MegaRAID 320 fault tolerance features.
Table 3.9 MegaRAID 320 Fault Tolerance Features

Support for SMART [1]: Yes (all adapters)

Optional Battery Backup for Cache Memory:
- MegaRAID 320-1: 3.6 V/600 mAh battery pack; up to 48 hours of data retention for 64 Mbytes
- MegaRAID 320-2: 3.6 V/600 mAh battery pack; up to 24 hours of data retention for 128 Mbytes
- MegaRAID 320-2E: 4.8 V/880 mAh battery pack; up to 72 hours of data retention for 128 Mbytes
- MegaRAID 320-2X: 3.6 V/650 mAh battery pack; up to 24 hours of data retention for 256 Mbytes
- MegaRAID 320-4X: 3.6 V/650 mAh battery pack; up to 24 hours of data retention for 256 Mbytes

Drive Failure Detection: Automatic (all adapters)

Drive Rebuild Using Hot Spares: Automatic (all adapters)

Parity Generation and Checking: Yes (all adapters)

1. The Self-Monitoring Analysis and Reporting Technology (SMART) detects up to 70 percent of all predictable disk drive failures. In addition, SMART monitors the internal performance of all motors, heads, and drive electronics.
3.3.4 Electrical Characteristics
This subsection provides the power requirements for the MegaRAID 320 Storage Adapters. Table 3.10 lists the maximum power requirements, which include SCSI TERMPWR, under normal operation.
Table 3.10 Maximum Power Requirements

| Storage Adapter | PCI/PCI-X/Express +12 V | PCI/PCI-X +5.0 V | PCI/PCI-X/Express +3.3 V | PCI PRSNT1#/PRSNT2# Power | Operating Range |
| 320-1 | 115 mA; used only if battery is present | 1.5 A (PCI only) | N/A | 15 W | 0 °C to 55 °C |
| 320-2 | N/A | 1.5 A (PCI only) | N/A | 15 W | 0 °C to 55 °C |
| 320-2E | 1.4 A without battery; 1.6 A when battery is charging | N/A | 1.5 A | 25 W | 0 °C to 50 °C |
| 320-2X, 320-4X | 0.0 A | 5 A | 0.0 A | 25 W | 0 °C to 55 °C |
3.3.5 Thermal and Atmospheric Characteristics
For all MegaRAID 320 Storage Adapters, the thermal and atmospheric characteristics are:

- Relative humidity range: 5% to 90%, noncondensing
- Maximum dew point temperature: 32 °C
- Airflow: at least 300 linear feet per minute (LFPM) to keep the LSI53C1020 and LSI53C1030 heat sink temperature below 80 °C

The following parameters define the storage and transit environment for the MegaRAID 320 Storage Adapters:

- Temperature range: –40 °C to +105 °C (dry bulb)
- Relative humidity range: 5% to 90%, noncondensing
3.3.6 Safety Characteristics
All MegaRAID 320 Storage Adapters meet or exceed the requirements of UL flammability rating 94 V0. Each bare board is also marked with the supplier name or trademark, type, and UL flammability rating. For the
boards installed in a PCI bus slot, all voltages are lower than the SELV
42.4 V limit.
Chapter 4 Installing and Configuring Clusters
This chapter explains how clusters work and how to install and configure them. It contains the following sections:

- Section 4.1, “Overview”
- Section 4.2, “Benefits of Clusters”
- Section 4.3, “Installing and Configuring Your System as Part of a Cluster”
- Section 4.4, “Driver Installation Instructions under Microsoft Windows 2000 Advanced Server”
- Section 4.5, “Installing the Peer Processor Device in a Windows Cluster”
- Section 4.6, “Installing SCSI Drives”
- Section 4.7, “Installing Clusters under Windows 2000”
- Section 4.8, “Installing Clusters under Windows Server 2003”

4.1 Overview

A cluster is a grouping of two independent servers that can access the same shared data storage and provide services to a common set of clients: servers connected to common I/O buses and a common network for client access.

Note: The MegaRAID 320-2, -2E, -2X, and -4X Storage Adapters support clustering; the MegaRAID 320-1 does not.
Logically, a cluster is a single management unit. Any server can provide any available service to any authorized client. The servers must have access to the same shared data and must share a common security model. This generally means that the servers in a cluster have the same architecture and run the same version of the operating system.
4.2 Benefits of Clusters
Clusters provide three basic benefits:

- Improved application and data availability
- Scalability of hardware resources
- Simplified management of large or rapidly growing systems
4.3 Installing and Configuring Your System as Part of a Cluster
Perform the following steps to install and configure your system as part of a cluster.

Step 1. Unpack the Storage Adapter, following the instructions in Chapter 2, “Hardware Installation.”

Step 2. Set the hardware termination for the Storage Adapter to “always on.” For termination information, refer to Section 2.3, “Detailed Installation,” page 2-3, Table 3.1 on page 3-3, Table 3.2 on page 3-5, and Section 3.1.3, “Quad-Channel Storage Adapter,” page 3-10.

Step 3. Configure the IDs for the drives in the enclosure.

Step 4. Install one Storage Adapter at a time, starting with Node 1.

Step 5. Press <Ctrl><M> at BIOS initialization to run the BIOS Configuration Utility and configure the options in Steps 6 through 12. Do not attach the disks yet.

Step 6. Set the Storage Adapter to Cluster Mode in the Objects→Adapter→Cluster Mode menu.

Step 7. Disable the BIOS in the Objects→Adapter→Enable/Disable BIOS menu.
Step 8. Change the initiator ID in the Objects→Adapter→Initiator ID menu. For example, you can change the initiator ID to 6. If ID 6 is used by a disk drive, select a different ID.

Step 9. Power down the first server.

Step 10. Attach the Storage Adapter to the shared array.

Step 11. Configure the first Storage Adapter to the arrays using the Configure→New Configuration menu.

Important: Use the entire array size of any created array. Do not create partitions of different sizes on the RAID arrays from the BIOS Configuration Utility (<Ctrl><M>); these cannot be failed over individually when they are assigned drive letters in Windows 2000 or Windows Server 2003.

Step 12. Follow the on-screen instructions to create arrays and save the configuration.

Step 13. Repeat Steps 5 through 8 for the second Storage Adapter.

Note: Changing the initiator ID is optional if you changed the initiator ID for Node 1 to 6. The initiator ID for Node 2 remains 7 when cluster mode is enabled.

Step 14. Power down the second server.

Step 15. Attach the cables for the second Storage Adapter to the shared enclosure, and power up the second server.

Step 16. If a configuration mismatch occurs, press <Ctrl><M> to enter the BIOS Configuration Utility.

Step 17. Go to the Configure→View/Add Configuration→View Disk menu to display the disk configuration.

Step 18. Save the configuration.

Step 19. Proceed to the driver installation for a Microsoft cluster environment.
4.4 Driver Installation Instructions under Microsoft Windows 2000 Advanced Server
After the hardware is set up for the Microsoft cluster configuration, perform the following procedure to configure the driver under Microsoft Windows 2000 Advanced Server. Note that when the Storage Adapter is added after a Windows 2000 Advanced Server installation, the operating system detects it.
Step 1. When the Found New Hardware Wizard screen displays the
detected hardware device, click Next.
Step 2. When the next screen appears, select Search for a Suitable
Driver and click Next.
The Locate Driver Files screen appears.
Step 3. Insert the floppy disk with the appropriate driver for Windows
2000, then select Floppy Disk Drives on the screen and click Next.
The Wizard detects the device driver on the diskette; the “Completing the Upgrade Device Driver” Wizard displays the name of the device.
Step 4. Click Finish to complete the installation.
Step 5. Repeat steps 1 through 4 to install the device driver on the
second system.
4.4.1 Network Requirements
The network requirements for clustering are:

- A unique NetBIOS cluster name
- Five unique, static IP addresses: two for the network adapters on the internal network, two for the network adapters on the external network, and one for the cluster itself
- A domain user account for the Cluster Service (all nodes must be part of the same domain)
- Two network adapters for each node: one for connection to the external network, the other for the node-to-node internal cluster network. If you do not use two network adapters for each node, your configuration is unsupported; HCL certification requires a separate private network adapter.
4.4.2 Shared Disk Requirements
Disks can be shared by the nodes. The requirements for sharing disks are the following:

- All shared disks, including the quorum disk, must be physically attached to the shared bus.
- All disks attached to the shared bus must be visible from all nodes. You can check this at the setup level in the BIOS Configuration Utility, which is accessed by pressing <Ctrl><M> during bootup. Refer to Section 4.6, “Installing SCSI Drives,” page 4-12, for installation information.
- Each SCSI device must have a unique SCSI identification number assigned to it, and each device at the end of the bus must be terminated properly. Refer to the storage enclosure manual for details on installing and terminating SCSI devices.
- Configure all shared disks as basic (not dynamic).
- Format all partitions on the disks as NTFS.

Important: Use fault-tolerant RAID configurations for all disks. This includes RAID levels 1, 5, 10, and 50.
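The shared-disk rules above amount to a checklist that can be audited mechanically. The following sketch is illustrative only (it is not an LSI or Microsoft tool) and assumes a simple dictionary description of each disk.

```python
# Hedged sketch: audit a shared-disk description against the requirements
# above -- basic (not dynamic) disks, NTFS partitions, and a fault-tolerant
# RAID level (1, 5, 10, or 50). The dictionary layout is an assumption.
FAULT_TOLERANT_LEVELS = {1, 5, 10, 50}

def audit_shared_disk(disk):
    errors = []
    if disk.get("type") != "basic":
        errors.append("disk must be basic, not dynamic")
    if any(fs != "NTFS" for fs in disk.get("filesystems", [])):
        errors.append("all partitions must be formatted NTFS")
    if disk.get("raid_level") not in FAULT_TOLERANT_LEVELS:
        errors.append("use a fault-tolerant RAID level (1, 5, 10, or 50)")
    return errors

# A compliant quorum disk passes; a dynamic FAT32 RAID 0 disk fails all checks.
quorum = {"type": "basic", "filesystems": ["NTFS"], "raid_level": 1}
assert audit_shared_disk(quorum) == []
assert len(audit_shared_disk({"type": "dynamic",
                              "filesystems": ["FAT32"],
                              "raid_level": 0})) == 3
```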
4.5 Installing the Peer Processor Device in a Windows Cluster
Use the procedure in this section to install the peer processor device in a Windows cluster.
Note: These steps apply to both Windows 2000 and Windows Server 2003 clusters.

After the shared drives are configured and both nodes are powered up, a prompt appears for another device to be installed. This device is the peer controller’s initiator, and it is installed as a processor device. The peer processor device for the 320-2 controller is detected as LSI SCSI 320-2. The peer processor devices for the 320-2X and 320-4X controllers are detected as 320-2X SCSI Processor Device and 320-4X SCSI Processor Device, respectively.

Perform the following steps to correctly install the driver for this device so that the prompt no longer displays.

Step 1. Using the MegaRAID SCSI 320-2 controller as an example: in Windows Server 2003, when the peer initiator ID is detected, the Found New Hardware Wizard identifies the peer initiator as LSI SCSI 320-2. The peer initiator in this example, LSI SCSI 320-2, is shown in Figure 4.1.
Figure 4.1 Found New Hardware Wizard Dialog Box
Step 2. Select Install From a List or Specific Location and click Next.
The next dialog box, shown in Figure 4.2, contains the search and installation options.
Figure 4.2 Search and Installation Options
Step 3. Select the option Don’t Search. I Will Choose the Driver to Install.
Step 4. Have the driver diskette or CD with the driver ready, then
click Next.
The Hardware Type dialog box displays, as shown in
Figure 4.3.
Figure 4.3 Hardware Type Dialog Box
Step 5. Select the hardware types based on the following options.
a. For Windows 2000, select Other Devices from the list of
hardware types, then click Next.
b. For Windows Server 2003, select System Devices from the Common Hardware Types list and click Next.
The next dialog box, shown in Figure 4.4, is used to select the maker and model of your hardware device and to indicate whether you have a disk with the driver you want to install.
Figure 4.4 Hardware Device Manufacturer and Model
Step 6. Click Have Disk...
Step 7. Specify the location of the driver package when prompted, then
click Next.
The dialog box shown in Figure 4.5 displays the correct device driver.
Figure 4.5 Device Driver Dialog Box
Step 8. Select the appropriate processor device for the controller being
used in the cluster.
For example, if a 320-2X RAID controller is being used in a cluster, select the 320-2X SCSI Processor Device.
Step 9. Click Next and ignore any security warning messages for the
system device.
The final dialog box displays, stating that the software installation for the processor device is complete.
Step 10. Click Finish to complete the SCSI Processor device install.
Step 11. Repeat the steps on the peer cluster node.
4.6 Installing SCSI Drives
This information is provided as a generic instruction set for SCSI drive installations. If the SCSI hard disk vendor’s instructions conflict with the instructions in this section, always use the instructions supplied by the vendor.
The SCSI bus listed in the hardware requirements must be configured prior to installation of Cluster Services. This includes:

- Configuring the SCSI devices.
- Configuring the SCSI Storage Adapters and hard disks to work properly on a shared SCSI bus.
- Properly terminating the bus. The shared SCSI bus must have a terminator at each end of the bus. It is possible to have multiple shared SCSI buses between the nodes of a cluster.
In addition, refer to the documentation from the SCSI device manufacturer or to the SCSI specifications, which can be ordered from the American National Standards Institute (ANSI). The ANSI web site contains a catalog that you can search for the SCSI specifications.
4.6.1 Configuring the SCSI Devices
Each device on the shared SCSI bus must have a unique SCSI ID. Since most SCSI Storage Adapters default to SCSI ID 7, part of configuring the shared SCSI bus is to change the SCSI ID on one Storage Adapter to a different SCSI ID, such as SCSI ID 6. If more than one disk is to be on the shared SCSI bus, each disk must also have a unique SCSI ID.
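The uniqueness rule above can be checked mechanically before powering up the shared bus. The sketch below is illustrative only (the function and device names are hypothetical, not an LSI utility); it flags duplicate or out-of-range IDs on a wide (16-bit) shared SCSI bus.

```python
# Hypothetical sketch: verify that every device on a shared SCSI bus has a
# unique ID in the 0-15 wide-SCSI range. `assignments` maps device names to
# SCSI IDs; a list of conflict messages is returned (empty means OK).
def validate_scsi_ids(assignments):
    problems = []
    seen = {}
    for device, scsi_id in assignments.items():
        if not 0 <= scsi_id <= 15:
            problems.append(f"{device}: ID {scsi_id} outside the 0-15 range")
        elif scsi_id in seen:
            problems.append(f"{device}: ID {scsi_id} already used by {seen[scsi_id]}")
        else:
            seen[scsi_id] = device
    return problems

# Two adapters sharing the bus must not both keep the default ID 7.
bus = {"adapter-node1": 7, "adapter-node2": 6, "disk0": 0, "disk1": 1}
assert validate_scsi_ids(bus) == []
assert validate_scsi_ids({"adapter-node1": 7, "adapter-node2": 7}) != []
```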
4.6.2 Terminating the Shared SCSI Bus
You can connect Y cables to devices if the device is at the end of the SCSI bus. You can then attach a terminator to one branch of the Y cable to terminate the SCSI bus. This method of termination requires either disabling or removing any internal terminators the device has.
Important:
Any devices that are not at the end of the shared bus must have their internal termination disabled.
4.7 Installing Clusters under Windows 2000
During installation, some nodes are shut down, and other nodes are rebooted. This ensures uncorrupted data on disks attached to the shared storage bus. Data corruption can occur when multiple nodes try to write simultaneously to the same disk that is not yet protected by the cluster software.
Table 4.1 shows which nodes and storage devices must be powered on during each step.

Table 4.1 Nodes and Storage Devices

| Step | Node 1 | Node 2 | Storage | Comments |
| Set up Networks | On | On | Off | Ensure that power to all storage devices on the shared bus is turned off. Power on all nodes. |
| Set up Shared Disks | On | Off | On | Power down all nodes. Next, power on the shared storage, then power on the first node. |
| Verify Disk Configuration | Off | On | On | Shut down the first node. Power on the second node. |
| Configure the First Node | On | Off | On | Shut down all nodes. Power on the first node. |
| Configure the Second Node | On | On | On | Power on the second node after the first node is successfully configured. |
| Post-installation | On | On | On | All nodes must be active. |
Before installing the Cluster Service software, perform the following steps. These steps must be completed on every cluster node before you proceed with the installation of the Cluster Service on the first node.

Step 1. Install Windows 2000 Advanced Server or Windows 2000 Datacenter Server on each node.

Step 2. Set up networks.

Step 3. Set up disks.

Important: To configure the Cluster Service on a Windows 2000-based server, you must be able to log on as administrator or have administrative permissions on each node. Each node must be a member server, or must be a domain controller inside the same domain. A mix of domain controllers and member servers in a cluster is not supported.
4.7.1 Installing the Microsoft Windows 2000 Operating System
Install the Microsoft Windows 2000 operating system on each node. Refer to your Windows 2000 manual for information.
Log on as administrator before you install the Cluster Services.
4.7.2 Setting Up Networks
Important: Do not allow both nodes to access the shared storage device before the Cluster Service is installed. To prevent this, power down any shared storage devices, then power up the nodes one at a time. Install the Cluster Service on at least one node, and ensure it is online before you power up the second node.

Install at least two network adapters for each cluster node: one to access the public network, and a second to access the cluster nodes.

The network adapter used to access the cluster nodes establishes the following:

- Node-to-node communications
- Cluster status signals
- Cluster management

Verify that all network connections are correct, with private network adapters connected only to other private network adapters, and public network adapters connected only to the public network. View the Network and Dial-up Connections screen in Figure 4.6 to check the connections.
Figure 4.6 Network and Dial-up Connections Screen
Important:
Use crossover cables for the network card adapters that access the cluster nodes. If you do not use the crossover cables properly, the system does not detect the network card adapter that accesses the cluster nodes. If the network card adapter is not detected, you cannot configure the network adapters during the Cluster Service installation. However, if you install Cluster Service on both nodes, and both nodes are powered on, you can add the adapter as a cluster resource and configure it properly for the cluster node network in the Cluster Administrator application.
4.7.3 Configuring the Cluster Node Network Adapter
Note: The wiring determines which network adapter is private and
which is public. For this chapter, the first network adapter (Local Area Connection) is connected to the public network; the second network adapter (Local Area Connection 2) is connected to the private cluster network. This might not be the case in your network.
Renaming the Local Area Connections – To clarify the network connection, you can change the name of the Local Area Connection (2). Renaming helps you identify the connection and correctly assign it. Perform the following steps to change the name.
Step 1. Right-click on the Local Area Connection 2 icon.
Step 2. Click Rename.
Step 3. In the text box, type:
Private Cluster Connection
and press <Enter>.
Step 4. Repeat steps 1 through 3 to change the name of the public
LAN network adapter to Public Cluster Connection.
Step 5. Close the Networking and Dial-up Connections window.
The new connection names automatically replicate to other cluster servers as the servers are brought online.
4.7.4 Setting Up the First Node in Your Cluster
Perform the following steps to set up the first node in your cluster:
Step 1. Right-click My Network Places, then click Properties.
Step 2. Right-click the Private Connection icon.
Step 3. Click Status.
The Private Connection Status window shows the connection status, as well as the speed of connection.
If the window shows that the network is disconnected, examine cables and connections to resolve the problem before proceeding.
Step 4. Click Close.
Step 5. Right-click Private Connection again.
Step 6. Click Properties.
Step 7. Click Configure.
Step 8. Click Advanced.

The network adapter properties window displays.
Step 9. Set the network adapter speed on the private network to 10 Mbits/s, rather than the default automated speed selection; 10 Mbits/s is the recommended setting.

a. Select the network speed from the drop-down list.

b. Set the network adapter speed by clicking the appropriate option, such as Media Type or Speed.

Important: Do not use “Auto detect” as the setting for speed. Some adapters can drop packets while determining the speed.

Step 10. Configure identically all network adapters in the cluster that are attached to the same network, so they use the same Duplex Mode, Flow Control, Media Type, and so on. These settings should stay the same even if the hardware is different.

Step 11. Click Transmission Control Protocol/Internet Protocol (TCP/IP).

Step 12. Click Properties.

Step 13. Click the radio button for Use the Following IP Address.

Step 14. Enter the IP addresses you want to use for the private network.

Step 15. Type in the subnet mask for the network.

Step 16. Click the Advanced radio button, then select the WINS tab.

Step 17. Select Disable NetBIOS over TCP/IP.

Step 18. Click OK to return to the previous menu. Perform this step for the private network adapter only.
4.7.5 Configuring the Public Network Adapter
Important: It is strongly recommended that you use static IP
addresses for all network adapters in the cluster. This includes both the network adapter used to access the cluster nodes and the network adapter used to access the LAN (Local Area Network). If you use a dynamic IP address through DHCP, access to the cluster could be terminated and become unavailable if the DHCP server goes down or goes offline.
Installing Clusters under Windows 2000 4-17
Copyright © 2003-2008 by LSI Corporation. All rights reserved.
Page 76
Use long lease periods to assure that a dynamically assigned IP address remains valid in the event that the DHCP server is temporarily lost. In all cases, set static IP addresses for the private network connector. Note that Cluster Service recognizes only one network interface per subnet.
4.7.6 Verifying Connectivity and Name Resolution
Perform the following steps to verify that the network adapters are working properly.

Important: Before proceeding, you must know the IP address for each network adapter in the cluster. You can obtain it by using the IPCONFIG command on each node.

Step 1. Click Start.

Step 2. Click Run.

Step 3. Type cmd in the text box.

Step 4. Click OK.

Step 5. Type ipconfig /all and press Enter. IP information displays for all network adapters in the machine.

Step 6. If you do not already have the command prompt on your screen, click Start.

Step 7. Click Run.

Step 8. In the text box, type cmd.

Step 9. Click OK.

Step 10. Type ping ipaddress, where ipaddress is the IP address for the corresponding network adapter in the other node. For example, assume that the IP addresses are set as shown in Table 4.2.
Table 4.2 Example IP Addresses

| Node | Network Name | Network Adapter IP Address |
| 1 | Public Cluster Connection | 192.168.0.171 |
| 1 | Private Cluster Connection | 10.1.1.1 |
| 2 | Public Cluster Connection | 192.168.0.172 |
| 2 | Private Cluster Connection | 10.1.1.2 |

In this example, you would type:

ping 192.168.0.172

and

ping 10.1.1.2

from Node 1. Then you would type:

ping 192.168.0.171

and

ping 10.1.1.1

from Node 2.
To confirm name resolution, ping each node from a client using the node’s machine name instead of its IP number.
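Under the addressing plan in Table 4.2, each node pings the other node’s public and private addresses. The pairing can be sketched as follows; the function name and dictionary layout are hypothetical, for illustration only.

```python
# Hypothetical sketch (not part of Windows or any LSI tool): given the
# cluster IP plan from Table 4.2, list the peer addresses each node should
# ping to verify cross-node connectivity.
def ping_targets(local_node, nodes):
    """Return every other node's public and private addresses."""
    targets = []
    for name, addrs in nodes.items():
        if name != local_node:
            targets.extend([addrs["public"], addrs["private"]])
    return targets

plan = {
    "node1": {"public": "192.168.0.171", "private": "10.1.1.1"},
    "node2": {"public": "192.168.0.172", "private": "10.1.1.2"},
}
assert ping_targets("node1", plan) == ["192.168.0.172", "10.1.1.2"]
assert ping_targets("node2", plan) == ["192.168.0.171", "10.1.1.1"]
```

Each returned address would then be checked with ping from the corresponding node, as in Steps 1 through 10 above.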
4.7.7 Verifying Domain Membership
All nodes in the cluster must be members of the same domain and must be capable of accessing a domain controller and a DNS Server. You can configure them as either member servers or domain controllers. If you configure one node as a domain controller, configure all other nodes as domain controllers in the same domain.
4.7.8 Setting Up a Cluster User Account
The Cluster Service requires a domain user account under which the Cluster Service can run. Create the user account before installing the Cluster Service. Setup requires a user name and password. This user account should not belong to a user on the domain.
Perform the following steps to set up a cluster user account.
Step 1. Click Start.
Step 2. Point to Programs, then point to Administrative Tools.
Step 3. Click Active Directory Users and Computers.
Step 4. Click the plus sign (+) to expand the domain name (if it is not already expanded).
Step 5. Click Users.
Step 6. Right-click Users.
Step 7. Point to New and click User.
Step 8. Type in the cluster name and click Next.
Step 9. Set the password settings to User Cannot Change Password
and Password Never Expires.
Step 10. Click Next, then click Finish to create this user.

Important: If your company’s security policy does not allow the use of passwords that never expire, you must renew the password on each node before password expiration. You must also update the Cluster Service configuration.

Step 11. Right-click Cluster in the left pane of the Active Directory Users and Computers snap-in.

Step 12. Select Properties from the context menu.

Step 13. Click Add Members to a Group.

Step 14. Click Administrators and click OK. This gives the new user account administrative privileges on this computer.

Step 15. Close the Active Directory Users and Computers snap-in.
4.7.9 Setting Up Shared Disks
Caution: Ensure that Windows 2000 Advanced Server or
Windows 2000 Datacenter Server and the Cluster Service are installed and running on one node before you start an operating system on another node. If the operating system is started on other nodes before you install and configure Cluster Service and run it on at least one node, the cluster disks have a high chance of becoming corrupted.
To continue, power off all nodes. Power up the shared storage devices. Once the shared storage device is powered up, power up node one.
Quorum Disk – The quorum disk stores cluster configuration database checkpoints and log files that help manage the cluster. Microsoft makes the following quorum disk recommendations:
- Create a small partition, with a minimum of 50 Mbytes, for the quorum disk. Microsoft generally recommends a quorum disk of 500 Mbytes.
- Dedicate a separate disk for the quorum resource. Because failure of the quorum disk would cause the entire cluster to fail, Microsoft strongly recommends that you use a volume on a RAID disk array.
During the Cluster Service installation, you must provide the drive letter for the quorum disk. For our example, we use the letter E.
4.7.10 Configuring Shared Disks
Perform these steps to configure the shared disks:
Step 1. Right-click My Computer.
Step 2. Click Manage, then click Storage.
Step 3. Double-click Disk Management.
Step 4. Ensure that all shared disks are formatted as NTFS and are
designated as Basic.
If you connect a new drive, the Write Signature and Upgrade Disk Wizard starts automatically. If this occurs:
1. Click Next to go through the wizard.
The wizard sets the disk to dynamic, but you can deselect it at this point to set it to Basic.
2. To reset the disk to Basic, right-click Disk # (where # identifies the disk that you are working with) and click Revert to Basic Disk.
Step 5. Right-click unallocated disk space.
Step 6. Click Create Partition… .
The Create Partition Wizard begins.
Step 7. Click Next twice.
Step 8. Enter the desired partition size in Mbytes or change it if desired,
but each node’s drive letters must match.
Step 9. Click Next.
Step 10. Accept the default drive letter assignment by clicking Next.
Step 11. Click Next to format and create a partition.
4.7.11 Assigning Drive Letters
After you have configured the bus, disks, and partitions, you must assign drive letters to each partition on each clustered disk. Perform the following steps to assign drive letters.
Important: Mountpoints is a file system feature that lets you mount a file system using an existing directory without assigning a drive letter. Mountpoints are not supported on Windows 2000 clusters. Any external disk that is used as a cluster resource must be partitioned with NTFS partitions and must have a drive letter assigned to it.

Step 1. Right-click the desired partition and select Change Drive Letter and Path.

Step 2. Select a new drive letter.
Step 3. Repeat steps 1 and 2 for each shared disk.
Step 4. Close the Computer Management window.
Step 5. Power down node 1 and boot to node 2 to verify the drive letters.
4.7.12 Verifying Disk Access and Functionality
Perform these steps to verify disk access and functionality:
Step 1. Click Start.
Step 2. Click Programs.
Step 3. Click Accessories, then click Notepad.
Step 4. Type some words into Notepad and use the File/Save As
command to save it as a test file called test.txt. Close Notepad.
Step 5. Double-click the My Documents icon.
Step 6. Right-click test.txt and click on Copy.
Step 7. Close the window.
Step 8. Double-click My Computer.
Step 9. Double-click a shared drive partition.
Step 10. Click Edit and click Paste.
A copy of the file should now exist on the shared disk.
Step 11. Double-click test.txt to open it on the shared disk.
Step 12. Close the file.
Step 13. Highlight the file, then press the Del key to delete it from the
clustered disk.
Step 14. Repeat the process for all clustered disks to ensure they can
be accessed from the first node.
After you complete the procedure, shut down the first node, power on the second node, and repeat the procedure above. Repeat again for any additional nodes. After you have verified that all nodes can read and write from the disks, turn off all nodes except the first, and continue with this guide.
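The Notepad test above can also be scripted from a command prompt (cmd.exe) on each node. This is a sketch only; the shared partition is assumed to be mounted as drive F:, so substitute your own drive letter.

```shell
REM Create a small test file, copy it to the shared disk,
REM read it back, and delete both copies (F: is an example drive letter).
echo cluster disk test > %TEMP%\test.txt
copy %TEMP%\test.txt F:\test.txt
type F:\test.txt
del F:\test.txt
del %TEMP%\test.txt
```

Repeat the script with each shared drive letter, then run it again from every other node as described above.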
4.7.13 Installing Cluster Service Software
Important: If drive letters were changed, make sure they correspond
on each node.
Before you begin the Cluster Service Software installation on the first node, ensure that all other nodes are either powered down or stopped and that all shared storage devices are powered on.
To create the cluster, you must provide the cluster information. The Cluster Configuration Wizard lets you input this information. To use the Wizard, perform these steps:
Step 1. Click Start.
Step 2. Click Settings, then click Control Panel.
Step 3. Double-click Add/Remove Programs.
Step 4. Double-click Add/Remove Windows Components.
Step 5. Select Cluster Service, then click Next.
Step 6. Insert the Windows 2000 Advanced Server or Windows 2000 Datacenter Server CD-ROM, which contains the Cluster Service files.
Step 7. Enter x:\i386 (where x is the drive letter of your CD-ROM drive). If you installed Windows 2000 from a network, enter the appropriate network path instead. (If the Windows 2000 Setup splash screen displays, close it.)
Step 8. Click OK.
The Cluster Service Configuration Window displays.
Step 9. Click Next.
The Hardware Configuration Certification window appears.
Step 10. Click I Understand to accept the condition that Cluster Service
is supported only on hardware listed on the Hardware Compatibility List.
This is the first node in the cluster; therefore, you must create the cluster.
Step 11. Select the first node in the cluster in the dialog box shown in Figure 4.7 and click Next.
Figure 4.7 Create or Join a Cluster Dialog Box
A screen used to validate the user name and password displays, as shown in Figure 4.8.
Figure 4.8 User Account and Password Validation
Step 12. Enter a name for the cluster (up to 15 characters) and click on
Next. (In our example, the cluster is named ClusterOne.)
Step 13. Type the user name of the Cluster Service account that you
created during the pre-installation. (In our example, the user name is cluster.)
Step 14. Enter a password for the service account.
Step 15. Type the domain name, then click on Next.
At this point the Cluster Service Configuration Wizard validates the user account and password.
Step 16. Click on Next.
The Add or Remove Managed Disks screen displays, as shown in Figure 4.9.
Figure 4.9 Add or Remove Managed Disks Screen
4.7.14 Configuring Cluster Disks
The Windows 2000 Add or Remove Managed Disks dialog box displays all SCSI disks, as shown in Figure 4.9. It might display SCSI disks that do not reside on the same bus as the system disk. Because of this, a node that has multiple SCSI buses lists SCSI disks that are not intended as shared storage. You must remove any SCSI disks that are internal to the node and are not to be used as shared storage.
The Add or Remove Managed Disks dialog box (Figure 4.9) specifies disks on the shared SCSI bus that are used by Cluster Service.
Perform the following steps to configure the clustered disks:
Step 1. Add or remove disks as necessary, then click Next.
The Configure Cluster Networks dialog box displays, as shown in Figure 4.10.
Figure 4.10 Configure Cluster Networks Dialog Box
Step 2. Click Next in the Configure Cluster Networks dialog box.
The Network Connections dialog box displays, as shown in
Figure 4.11.
Figure 4.11 Network Connections Dialog Box
In production clustering scenarios, you must use more than one private network for cluster communication; this avoids having a single point of failure. Cluster Service can use private networks for cluster status signals and cluster management. This provides more security than using a public network for these roles. In addition, you can use a public network for cluster management, or you can use a mixed network for both private and public communications.
Verify that at least two networks are used for cluster communication. Using a single network for node-to-node communication creates a potential single point of failure. We recommend that you use multiple networks, with at least one network configured as a private link between nodes and other connections through a public network. If you use more than one private network, ensure that each uses a different subnet, as Cluster Service recognizes only one network interface per subnet.
This document assumes that only two networks are in use. It describes how you can configure these networks as one mixed and one private network.
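As a quick sanity check of the subnet rule above, you can inspect each adapter's configuration from a command prompt on every node. This is a sketch; the private address 10.10.10.2 is an example value, not a required one.

```shell
REM Display every adapter's IP address and subnet mask; confirm that
REM each private cluster network uses a different subnet.
ipconfig /all

REM Confirm node-to-node connectivity over the private link by pinging
REM the other node's private address (10.10.10.2 is an example address).
ping -n 4 10.10.10.2
```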
The order in which the Cluster Service Configuration Wizard presents these networks can vary. In this example, the public network is presented first.
Step 3. Verify that the network name and IP address correspond to the
network interface for the public network.
Step 4. Select Enable This Network for Cluster Use.
Step 5. Select the option All Communications (Mixed Network) and
click Next.
The next dialog box configures the private network, as shown in Figure 4.12. Make sure that the network name and IP address correspond to the network interface used for the private network.
Figure 4.12 Network Connections Dialog Box
Step 6. Select Enable This Network For Cluster Use.
Step 7. Select the option Internal Cluster Communications Only (Private Network), then click Next.
In this example, both networks are configured so that they can be used for internal cluster communication. The next dialog box offers an option to modify the order in which the networks are used. Because Private Cluster Connection represents a direct connection between nodes, it remains at the top of the list.
In normal operation, this connection is used for cluster communication. In case of the Private Cluster Connection failure, Cluster Service automatically switches to the next network on the list (in this case, Public Cluster Connection).
The Internal Cluster Communication dialog box displays next, as shown in Figure 4.13.
Figure 4.13 Internal Cluster Communication Dialog Box
Step 8. Verify that the first connection in the list is the Private Cluster Connection, then click Next.
Important: Always set the order of the connections so that the Private Cluster Connection is first in the list.
The Cluster IP Address dialog box displays next, as shown in Figure 4.14.
Figure 4.14 Cluster IP Address Dialog Box
Step 9. Enter the unique cluster IP address and Subnet mask for your
network, then click Next.
The Cluster Service Configuration Wizard automatically associates the cluster IP address with one of the public or mixed networks. It uses the subnet mask to select the correct network.
The final wizard dialog box displays.
Step 10. Click Finish to complete the cluster configuration on the first node.
The Cluster Service Setup Wizard completes the setup process for the first node by copying the files needed to complete the installation of Cluster Service.
After the files are copied, the Cluster Service registry entries are created, the log files on the quorum resource are created, and the Cluster Service is started on the first node.
The dialog box displays, as shown in Figure 4.15.
Figure 4.15 Cluster Service Confirmation
Step 11. Click OK.
Step 12. Close the Add/Remove Programs window.
4.7.15 Validating the Cluster Installation
Use the Cluster Administrator snap-in to validate the Cluster Service installation on the first node.
To validate the cluster installation:
Step 1. Click Start.
Step 2. Click Programs.
Step 3. Click Administrative Tools.
Step 4. Click Cluster Administrator.
The Cluster Administrator screen displays. If your snap-in window is similar to the one shown in the screen, your Cluster Service was successfully installed on the first node. You are now ready to install Cluster Service on the second node.
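The Cluster Service state can also be checked from a command prompt with the cluster.exe utility that installs with Cluster Service. This is a sketch; ClusterOne is the example cluster name used in this chapter.

```shell
REM Confirm the Cluster Service is running on this node.
net start | find "Cluster Service"

REM Query the example cluster and list the status of its nodes.
cluster ClusterOne node /status
```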
4.7.16 Configuring the Second Node
For this procedure, have node one and all shared disks powered on, then power up the second node.
Installation of Cluster Service on the second node takes less time than on the first node. Setup configures the Cluster Service network settings on the second node based on the configuration of the first node.
Installation of Cluster Service on the second node begins the same way as installation on the first node. The first node must be running during installation of the second node.
Follow the same procedures used to install Cluster Service on the first node, with the following differences:
Step 1. In the Create or Join a Cluster dialog box, select The Second
or Next Node in the Cluster, then click Next.
Step 2. Enter the cluster name that was previously created (in this
example, ClusterOne) and click Next.
Step 3. Leave Connect to Cluster unselected.
The Cluster Service Configuration Wizard automatically supplies the name of the user account selected when you installed the first node. Always use the same account you used when you set up the first cluster node.
Step 4. Enter the password for the account (if there is one), then
click Next.
Step 5. At the next dialog box, click Finish to complete configuration.
The Cluster Service starts.
Step 6. Click OK.
Step 7. Close Add/Remove Programs.
Step 8. If you install additional nodes, repeat the preceding steps to
install Cluster Service on all other nodes.
4.7.17 Verifying Installation
There are several ways to verify that Cluster Service was successfully installed. Here is a simple one:
Step 1. Select Start —> Programs —> Administrative Tools —> Cluster Administrator.
The Cluster Administrator screen displays, as shown in Figure 4.16. The presence of two nodes shows that a cluster exists and is in operation.
Figure 4.16 Cluster Administrator Screen
Step 2. Right-click the group Disk Group 1 and select the option Move.
This option moves the group and all its resources to another node. Disks F: and G: are brought online on the second node. Watch the screen to see this change.
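The same failover test can be driven from the command line with cluster.exe. This is a sketch; the cluster name ClusterOne follows the example in this chapter, and NODE2 is an example node name, so substitute your second node's actual name.

```shell
REM Move Disk Group 1 to the second node (NODE2 is an example name),
REM then confirm the group's new owner and online state.
cluster ClusterOne group "Disk Group 1" /moveto:NODE2
cluster ClusterOne group "Disk Group 1" /status
```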
Step 3. Close the Cluster Administrator snap-in.
This completes Cluster Service installation on all nodes. The server cluster is fully operational. Now you can install cluster resources such as file shares and print spoolers, cluster-aware services such as IIS, Message Queuing, Distributed Transaction Coordinator, DHCP, and WINS, or cluster-aware applications such as Exchange or SQL Server.
4.8 Installing Clusters under Windows Server 2003
The preparation for the Windows Server 2003 Cluster Service follows the same guidelines as that of the Windows 2000 Cluster Service. The following is assumed to have already been done:
The controller has been installed and configured for cluster operation. Refer to the procedure to install and configure your system as part of a cluster earlier in this chapter.
The Windows Server 2003 driver for the RAID controller has been installed. The procedures are similar to those in Section 4.4, "Driver Installation Instructions under Microsoft Windows 2000 Advanced Server," in this chapter.
Network requirements have been met.
Shared disk requirements have been met.
4.8.1 Cluster Service Software Installation
Before you begin the Cluster Service Software installation on the first node, make sure that all other nodes are either powered down or stopped and all shared storage devices are powered on.
4.8.2 Installation Checklist
This checklist helps you prepare for installation. Step-by-step instructions begin after the checklist.
Software Requirements – The following are required for software installation:
Microsoft Windows Server 2003 Enterprise Edition or Windows Server
2003 Datacenter Edition installed on all computers in the cluster
A name resolution method such as Domain Name System (DNS),
DNS dynamic update protocol, Windows Internet Name Service (WINS), HOSTS, and so on
An existing domain model
All nodes must be members of the same domain
A domain-level account that is a member of the local administrators
group on each node. A dedicated account is recommended.
Network Requirements –
A unique NetBIOS name
Static IP addresses for all network interfaces on each node
Access to a domain controller. If the Cluster Service is unable to authenticate the user account used to start the service, the cluster could fail. It is recommended that you have a domain controller on the same local area network (LAN) as the cluster to ensure availability.
Each node must have at least two network adapters – one for connection to the client public network and the other for the node-to-node private cluster network. A dedicated private network adapter is required for HCL certification.
All nodes must have two physically independent LANs or virtual LANs for public and private communication.
If you are using fault-tolerant network cards or network adapter teaming, verify that you are using the most recent firmware and drivers. Check with your network adapter manufacturer for cluster compatibility.
Note: Server Clustering does not support the use of IP addresses assigned from Dynamic Host Configuration Protocol (DHCP) servers.
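One way to confirm that every cluster adapter uses a static address rather than DHCP is to filter the ipconfig output from a command prompt on each node (a sketch):

```shell
REM Every cluster network adapter should report "DHCP Enabled. . . : No".
REM Any "Yes" line indicates an adapter still using a DHCP-assigned address.
ipconfig /all | find "DHCP Enabled"
```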
4.8.3 Shared Disk Requirements
An HCL-approved external disk storage unit connected to all
computers. This is used as the clustered shared disk.
All shared disks, including the quorum disk, must be physically
attached to a shared bus.
Shared disks must be on a different controller than the one used by the system drive.
Creating multiple logical drives at the hardware level in the RAID
configuration is recommended rather than using a single logical disk that is then divided into multiple partitions at the operating system level. This is different from the configuration commonly used for stand-alone servers. However, it enables you to have multiple disk resources and to do Active/Active configurations and manual load balancing across the nodes in the cluster.
A dedicated disk with a minimum size of 50 megabytes (MB) to use
as the quorum device. A partition of at least 500 MB is recommended for optimal NTFS file system performance.
Verify that disks attached to the shared bus can be seen from all
nodes. This can be checked at the host adapter setup level.
SCSI devices must be assigned unique SCSI identification numbers
and properly terminated.
All shared disks must be configured as basic disks.
Software fault tolerance is not natively supported on cluster
shared disks.
All shared disks must be configured as master boot record (MBR)
disks on systems running the 64-bit versions of Windows Server 2003.
All partitions on the clustered disks must be formatted as NTFS.
Hardware fault-tolerant RAID configurations are recommended for
all disks.
A minimum of two logical shared drives is recommended.
4.8.4 Steps for Configuring the Shared Disks under Windows Server 2003
Windows Server 2003 disk management is similar to that of Windows 2000 Advanced Server; however, you must take care to ensure that the partitions are created correctly for cluster installation and that drive letters are assigned correctly.
Perform the following steps to configure the shared disks under Windows Server 2003. Start on node 1 first and load disk management. Node 2 is powered off at this point.
Step 1. Start Computer Management, shown in Figure 4.17, then select Disk Management.
Figure 4.17 Computer Management Screen
After selecting Disk Management, if there are any unconfigured disks, the Initialize and Convert Disk Wizard appears.
Step 2. At the first Wizard screen, click Next.
The screen in Figure 4.18 displays.
Figure 4.18 Initialize and Convert Disk Wizard
Step 3. Select the disks to initialize on the Select Disks to Initialize
screen, then click Next.
The Select Disks to Convert screen displays next. Do not select any disks to convert; only basic disks are used for the cluster service.
Step 4. On the Select Disks to Convert screen, click Next.
The Disk Management screen displays, as shown in Figure 4.19. After the shared disks have been initialized in the operating system, they appear as unallocated space, which you can then use to create new partitions.
Figure 4.19 Disk Management Screen
Step 5. Right-click the unallocated space on the first shared disk, then select New Partition.
The Select Partition Type screen displays, as shown in
Figure 4.20.
Figure 4.20 Select Partition Type Screen
Step 6. Select Primary Partition, then click Next.
The Specify Partition Size screen displays.
Step 7. On the Specify Partition Size screen, accept the full partition size or change it if desired, then click Next.
The next screen that displays is used to assign the drive letter or path.
Step 8. Assign a drive letter and click Next.
The Format Partition screen displays next.
Step 9. On the Format Partition screen, select to format the partition,
set the volume label, and click Next.
The final Wizard screen displays, as shown in Figure 4.21. This screen displays the settings that you selected.
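On Windows Server 2003, the same partitioning and formatting can be scripted with the diskpart.exe and format.com utilities. This is a sketch only: disk number 1, drive letter F:, and the volume label are example values, so verify the correct disk number with diskpart's list disk command before running it.

```shell
REM Save the following three lines (without REM) as newpart.txt,
REM then run: diskpart /s newpart.txt
REM   select disk 1
REM   create partition primary
REM   assign letter=F
REM After diskpart completes, format the new basic partition as NTFS:
format F: /FS:NTFS /V:ClusterDisk /Q
```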