LSI products are not intended for use in life-support appliances, devices, or
systems. Use of any LSI product in such applications without written consent of
the appropriate LSI officer is prohibited.
Purchase of I²C components of LSI Corporation, or one of its sublicensed
Associated Companies, conveys a license under the Philips I²C Patent Rights to
use these components in an I²C system, provided that the system conforms to
the I²C Standard Specification as defined by Philips.
Document 80-00143-01 Rev. B, March 2008. This document describes the
current version of the LSI Corporation MegaRAID 320 Storage Adapters and will
remain the official reference source for all revisions/releases of these products
until rescinded by an update.
LSI Corporation reserves the right to make changes to any products herein at
any time without notice. LSI does not assume any responsibility or liability arising
out of the application or use of any product described herein, except as expressly
agreed to in writing by LSI; nor does the purchase or use of a product from LSI
convey a license under any patent rights, copyrights, trademark rights, or any
other of the intellectual property rights of LSI or third parties.
TRADEMARK ACKNOWLEDGMENT
LSI, the LSI logo design, Fusion-MPT, and MegaRAID are trademarks or
registered trademarks of LSI Corporation. Microsoft, Windows, and Windows NT
are registered trademarks of Microsoft Corporation. Novell and NetWare are
registered trademarks of Novell Corporation. UNIX and UnixWare are registered
trademarks of The Open Group. SCO is a registered trademark of Caldera
International, Inc. Linux is a registered trademark of Linus Torvalds. PCI-X is a
registered trademark of PCI SIG. All other brand and product names may be
trademarks of their respective companies.
To receive product literature, visit us at http://www.lsi.com.
For a current list of our distributors, sales offices, and design resource
centers, view our web page located at http://www.lsi.com.
This book is the primary reference and user's guide for the LSI
MegaRAID® 320 Storage Adapters. It contains complete installation
instructions for these adapters and includes specifications for them.
The MegaRAID 320 Storage Adapter family consists of the following:
•MegaRAID 320-1 PCI SCSI Disk Array Controller
•MegaRAID 320-2 PCI SCSI Disk Array Controller
•MegaRAID 320-2E PCI Express SCSI Disk Array Controller
•MegaRAID 320-2X PCI-X SCSI Disk Array Controller
•MegaRAID 320-4X PCI-X SCSI Disk Array Controller
For details on how to configure the Storage Adapters, and for an
overview of the software drivers, see the MegaRAID Configuration Software User's Guide.
Audience
This document assumes that you have some familiarity with RAID
controllers and related support devices. The people who benefit from this
book are:
•Engineers who are designing a MegaRAID 320 Storage Adapter into
a system
•Anyone installing a MegaRAID 320 Storage Adapter in a RAID system
Organization
This document has the following chapters and appendix:
•Chapter 1, Overview, provides a general overview of the MegaRAID 320
series of PCI-to-SCSI Storage Adapters with RAID control capabilities.
•Chapter 2 describes the procedures to install the MegaRAID 320
Storage Adapters.
•Chapter 3, MegaRAID 320 Storage Adapter Characteristics,
provides the characteristics and technical specifications for the
MegaRAID 320-1, 320-2, 320-2E, 320-2X, and 320-4X Storage
Adapters.
•Chapter 4, Installing and Configuring Clusters, explains how to
implement clustering to enable two independent servers to access
the same shared data storage.
•Appendix A, Glossary of Terms and Abbreviations, lists and
explains the terms and abbreviations used in this manual.
Related Publications
•MegaRAID Configuration Software User's Guide, Document No.
DB15-000269-01 (on the MegaRAID Universal Software Suite CD
included with the MegaRAID 320 Storage Adapter)
•MegaRAID Device Driver Installation User's Guide, Document No.
DB11-000018-02 (on the MegaRAID Universal Software Suite CD
included with the MegaRAID 320 Storage Adapter)
Use the following safety guidelines to help protect your computer system
from potential damage and to ensure your own personal safety.
When Using Your Computer System – As you use your computer
system, observe the following safety guidelines:
Caution:
Do not operate your computer system with any cover(s)
(such as computer covers, bezels, filler brackets, and
front-panel inserts) removed.
•To help avoid damaging your computer, be sure the voltage selection
switch on the power supply is set to match the alternating current
(AC) power available at your location:
–115 volts (V)/60 hertz (Hz) in most of North and South America
and some Far Eastern countries such as Japan, South Korea,
and Taiwan
–230 V/50 Hz in most of Europe, the Middle East, and the Far East.
Also be sure your monitor and attached peripherals are electrically
rated to operate with the AC power available in your location.
•To help avoid possible damage to the system board, wait 5 seconds
after turning off the system before removing a component from the
system board or disconnecting a peripheral device from the computer.
•To help prevent electric shock, plug the computer and peripheral
power cables into properly grounded power sources. These cables
are equipped with 3-prong plugs to ensure proper grounding. Do not
use adapter plugs or remove the grounding prong from a cable. If
you must use an extension cable, use a 3-wire cable with properly
grounded plugs.
•To help protect your computer system from sudden, transient
increases and decreases in electrical power, use a surge suppressor,
line conditioner, or uninterruptible power supply.
•Be sure nothing rests on your computer system’s cables and that the
cables are not located where they can be stepped on or tripped over.
•Do not spill food or liquids on your computer. If the computer gets
wet, see the documentation that came with it.
•Do not push any objects into the openings of your computer. Doing so
can cause fire or electric shock by shorting out interior components.
•Keep your computer away from radiators and heat sources. Also, do
not block cooling vents. Avoid placing loose papers underneath your
computer; do not place your computer in a closed-in wall unit or on
a rug.
When Working Inside Your Computer –
Notice:
Do not attempt to service the computer system yourself,
except as explained in this guide and elsewhere in
LSI Logic documentation. Always follow installation and
service instructions closely.
Before you work inside your computer, perform the following steps:
1. Turn off your computer and any peripherals.
2. Disconnect your computer and peripherals from their power sources.
Also disconnect any telephone or telecommunications lines from
the computer.
Doing so reduces the potential for personal injury or shock.
Also note these safety guidelines:
•When you disconnect a cable, pull on its connector or on its
strain-relief loop, not on the cable itself. Some cables have a
connector with locking tabs; if you are disconnecting this type of
cable, press in on the locking tabs before disconnecting the cable. As
you pull connectors apart, keep them evenly aligned to avoid bending
any connector pins. Also, before you connect a cable, make sure
both connectors are correctly oriented and aligned.
•Handle components and cards with care. Do not touch the
components or contacts on a card. Hold a card by its edges or by its
metal mounting bracket. Hold a component such as a
microprocessor chip by its edges, not by its pins.
Protecting Against Electrostatic Discharge – Static electricity can
harm delicate components inside your computer. To prevent static
damage, discharge static electricity from your body before you touch any
of your computer’s electronic components, such as the microprocessor.
You can do so by touching an unpainted metal surface, such as the metal
around the card-slot openings at the back of the computer.
As you continue to work inside the computer, periodically touch an
unpainted metal surface to remove any static charge your body may have
accumulated. In addition to the preceding precautions, you can also take
the following steps to prevent damage from electrostatic discharge (ESD):
•When unpacking a static-sensitive component from its shipping
carton, do not remove the component from the antistatic packing
material until you are ready to install the component in your
computer. Just before unwrapping the antistatic packaging, be sure
to discharge static electricity from your body.
•When transporting a sensitive component, first place it in an
antistatic container or packaging.
•Handle all sensitive components in a static-safe area. If possible, use
antistatic floor pads and workbench pads.
This section provides a general overview of the MegaRAID 320 series of
PCI-to-SCSI Storage Adapters with RAID control capabilities. It consists
of the following sections.
•Section 1.1, “Overview”
•Section 1.2, “Features”
•Section 1.3, “Hardware”
1.1 Overview
The MegaRAID 320 Storage Adapters are high-performance intelligent
PCI-to-SCSI host adapters with RAID control capabilities. MegaRAID
320 Storage Adapters provide reliability, high performance, and
fault-tolerant disk subsystem management. They are an ideal RAID solution
for the internal storage of workgroup, departmental, and enterprise
systems. MegaRAID 320 Storage Adapters offer a cost-effective way to
implement RAID in a server.
MegaRAID 320 Storage Adapters are available with one, two, or four
SCSI channels. There are two versions of the MegaRAID 320-1 Storage
Adapter. The following are descriptions of the adapters:
•The MegaRAID 320-1 Storage Adapter (single-channel) has one
LSI53C1020 controller chip that controls one SCSI channel. The
Storage Adapter has one very high-density cable interconnect
(VHDCI) 68-pin external SCSI connector and one high-density cable
interconnect (HDCI) 68-pin internal SCSI connector.
•The MegaRAID 320-2 Storage Adapter (dual-channel) has one
LSI53C1030 controller chip that controls two SCSI channels. The
Storage Adapter has two VHDCI 68-pin external SCSI connectors
and two HDCI 68-pin internal SCSI connectors.
•The MegaRAID 320-2E (Express) Storage Adapter has one 80332
processor that controls two SCSI channels. The Storage Adapter has
two VHDCI 68-pin external SCSI connectors and two HDCI 68-pin
internal SCSI connectors. Note that the MegaRAID 320-2E is a
PCI-Express controller.
•The MegaRAID 320-2X Storage Adapter (dual-channel) has one
LSI53C1030 controller chip that controls two SCSI channels. The
Storage Adapter has two VHDCI 68-pin external SCSI connectors
and two HDCI 68-pin internal SCSI connectors. Note that the
MegaRAID 320-2X is a PCI-X controller.
•The MegaRAID 320-4X Storage Adapter (quad-channel) has two
LSI53C1030 controller chips that control the four SCSI channels. The
Storage Adapter has four VHDCI 68-pin external SCSI connectors
and two HDCI 68-pin internal SCSI connectors. Note that the
MegaRAID 320-4X is a PCI-X controller.
The MegaRAID 320 Storage Adapters support a low-voltage differential
(LVD) or a single-ended (SE) SCSI bus. With LVD, you can use cables
up to 12 meters long. Throughput on each SCSI channel can be as high
as 320 Mbytes/s.
PCI, PCI-X, and PCI-Express are I/O architectures designed to increase
data transfers without slowing down the central processing unit (CPU).
You can install the MegaRAID 320 PCI and PCI-X Storage Adapters in
PCI-X computer systems with a standard bracket type. With these
adapters in your system, you can connect SCSI devices over a SCSI bus.
PCI-Express goes beyond the PCI specification in that it is intended as
a unifying I/O architecture for various systems: desktops, workstations,
mobile, server, communications, and embedded devices.
Note:
For Ultra320 SCSI performance, you must connect only
LVD devices to the bus. Do not connect a high voltage
differential (HVD) device to the controller. Do not mix SE
with LVD devices, or the bus speed will be limited to the
slower SE (Ultra SCSI) SCSI data transfer rates.
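The effect of mixing device types can be sketched in a few lines of code. The following Python fragment is an illustration only, not LSI software; the 40 Mbytes/s figure for wide Ultra (SE) operation is an assumption drawn from the SCSI standards, not from this manual.

    # Conceptual sketch: effective bus speed when mixing SCSI device types.
    # Assumed speeds: wide Ultra320 LVD = 320 Mbytes/s, wide Ultra SE = 40 Mbytes/s.
    ULTRA320_LVD_SPEED = 320  # Mbytes/s, all-LVD bus
    ULTRA_SE_SPEED = 40       # Mbytes/s, bus drops to SE timing

    def effective_bus_speed(device_signaling):
        """Return the usable bus speed; one SE device slows the whole bus."""
        if any(sig == "SE" for sig in device_signaling):
            return ULTRA_SE_SPEED
        return ULTRA320_LVD_SPEED

    print(effective_bus_speed(["LVD", "LVD"]))        # 320
    print(effective_bus_speed(["LVD", "SE", "LVD"]))  # 40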
1.1.1 Operating System Support
The MegaRAID 320 Storage Adapters support major operating systems,
such as Windows® (2000, Server 2003, and XP), Red Hat® Linux,
SuSE® Linux, Novell® NetWare®, SCO OpenServer®, and UnixWare®.
Other software support ensures data integrity by intelligently testing the
network before completing negotiation.
Note:
The MegaRAID 320 Storage Adapters do not support the
Windows NT® operating system.
The MegaRAID 320 Storage Adapters use Fusion-MPT™ architecture for
all major operating systems for thinner drivers and better performance.
Refer to the MegaRAID Device Driver Installation User's Guide for driver
installation instructions.
1.1.2 Technical Support
For assistance installing, configuring, or running a MegaRAID 320 RAID
controller or obtaining a driver for an operating system other than the
ones already listed in Section 1.1.1, “Operating System Support,” contact
LSI Technical Support:
Phone Support:
1.2 Features
•Advanced array configuration and management utilities
•Online RAID level migration
•No reboot necessary after expansion
•Support for hard drives with capacities greater than 8 Gbytes
•More than 200 Qtags per array
•User-specified rebuild rate
•Hardware clustering support on the board
Note:
The MegaRAID 320-2, -2E, -2X, and -4X Storage Adapters
support clustering; the MegaRAID 320-1 Storage Adapter
does not. See Chapter 4, “Installing and Configuring
Clusters” for more information about clustering.
•Wide Ultra320 LVD SCSI performance up to 320 Mbytes/s
•Support for up to 14 SCSI drives per channel on storage systems with
SAF-TE enclosures (SCSI accessed fault-tolerant enclosures), and
15 SCSI drives per channel for other configurations
•32 Kbyte NVRAM for storing RAID system configuration information;
the MegaRAID 320 firmware is stored in Flash ROM for easy upgrade
•Battery backup for MegaRAID 320-2, -2E, -2X, and -4X
Note:
Battery backup is available for the MegaRAID 320-1, 320-2,
320-2E, 320-2X, and 320-4X controllers, either through an
onboard battery or daughter card. You can purchase the
controller with the battery backup unit (BBU) or purchase
the BBU separately.
1.2.4 Drive Roaming
Drive roaming occurs when the hard drives are moved within a
configuration on the controller. When the drives are moved, the controller
detects the RAID configuration from the configuration information on the
drives. Configuration information is saved in both nonvolatile random
access memory (NVRAM) on the MegaRAID controller and on the hard
drives attached to the controller. This maintains the integrity of the data
on each drive, even if the drives have changed their target ID.
Before performing drive roaming, make sure to power off
both your platform and your drive enclosure.
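The drive-roaming idea can be illustrated with a short sketch. The following Python fragment is a conceptual model only, not LSI firmware; the record layout and field names are hypothetical, chosen to show how arrays can be reassembled from metadata saved on the drives regardless of current target IDs.

    def recover_configuration(drives):
        """Rebuild array definitions from metadata saved on each drive.

        Each drive record uses hypothetical fields:
          'target_id'  - current SCSI target ID (may have changed)
          'array_id'   - identifier of the array the drive belongs to
          'member_idx' - the drive's saved position within that array
        """
        arrays = {}
        for drive in drives:
            arrays.setdefault(drive['array_id'], {})[drive['member_idx']] = drive['target_id']
        # Order members by their saved index, not by current target ID.
        return {aid: [members[i] for i in sorted(members)]
                for aid, members in arrays.items()}

    # The same three drives are recognized even after re-cabling:
    drives = [
        {'target_id': 4, 'array_id': 'A0', 'member_idx': 1},
        {'target_id': 0, 'array_id': 'A0', 'member_idx': 0},
        {'target_id': 5, 'array_id': 'A0', 'member_idx': 2},
    ]
    print(recover_configuration(drives))  # {'A0': [0, 4, 5]}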
1.2.5 Drive Migration
Drive migration is the transfer of a set of hard drives in an existing
configuration from one controller to a blank controller. The drives must
be reinstalled in the same order as in the original configuration.
Important:
Do not perform drive roaming and drive migration at the
same time.
1.3 Hardware
You can install the MegaRAID 320-1 and 320-2 boards in a computer
with a mainboard that has 5 V or 3.3 V, 32- or 64-bit PCI slots, the
MegaRAID 320-2X and -4X in 3.3 V, 64-bit PCI or PCI-X slots, and the
MegaRAID 320-2E in 3.3 V PCI-Express slots.
The following subsection describes the hardware configuration features
for the MegaRAID 320 Storage Adapters.
Storage Adapter Features – Table 1.1 compares the configurations
for the MegaRAID 320-1, 320-2, 320-2E, 320-2X, and 320-4X Storage
Adapters.
This chapter describes the procedures to install the MegaRAID 320-1,
320-2, 320-2E, 320-2X, and 320-4X Storage Adapters. It contains
the following sections:
•Section 2.1, “Requirements”
•Section 2.2, “Quick Installation”
•Section 2.3, “Detailed Installation”
•Section 2.4, “SCSI Device Cables”
•Section 2.5, “Replacing a Failed Controller with Data in the TBBU”
•Section 2.6, “After Installing the Storage Adapter”
2.1 Requirements
The following items are required to install a MegaRAID 320 Storage
Adapter:
•A MegaRAID 320-1, 320-2, 320-2E, 320-2X, or 320-4X
Storage Adapter
•A host computer with an available 32- or 64-bit, 3.3 V PCI or PCI-X
expansion slot or a PCI-Express slot
•The MegaRAID Universal Software Suite CD, which contains drivers
and documentation
2.2 Quick Installation
The following steps are for quick Storage Adapter installation. These
steps are for experienced computer users/installers. Section 2.3,
“Detailed Installation,” contains the steps for all others to follow.
Step 1. Turn power off to the server and all hard disk drives,
enclosures, and system components and remove the PC power
cord.
Step 2. Open the cabinet of the host system by following the
instructions in the host system technical documentation.
Step 3. Determine the SCSI ID and SCSI termination requirements.
Step 4. Install the MegaRAID 320 Storage Adapter in the server,
connect SCSI devices to it, and set termination correctly on the
SCSI channel(s). Ensure that the SCSI cables you use conform
to all SCSI specifications.
Step 5. Perform a safety check.
– Ensure that all cables are properly attached.
– Ensure that the MegaRAID 320 Storage Adapter is
properly installed.
– Close the cabinet of the host system.
Step 6. Turn power on after completing the safety check.
2.3 Detailed Installation
This section provides detailed instructions for installing a MegaRAID 320
Storage Adapter.
Step 1. Unpack the Storage Adapter
Unpack and remove the Storage Adapter. Inspect it for
damage. If it appears damaged, or if any items listed below are
missing, contact your LSI support representative. The
MegaRAID 320 Storage Adapter is shipped with
–the MegaRAID Universal Software Suite CD, which
contains MegaRAID drivers for supported operating
systems, an electronic version of this User’s Guide, and
other related documentation
–a license agreement
Step 2. Power Down the System
Turn off the computer and remove the AC power cord. Remove
the system’s cover. Refer to the system documentation
for instructions.
Step 3. Check the Jumpers
Ensure that the jumper settings on your Storage Adapter
are correct. Refer to Chapter 3, “MegaRAID 320 Storage
Adapter Characteristics” for diagrams of the Storage Adapters
with their jumpers and connectors.
Step 4. Install the MegaRAID 320 Storage Adapter
Select a PCI, PCI-X, or PCI-Express slot, and align the Storage
Adapter PCI bus connector to the slot. Press down gently but
firmly to ensure that the card is properly seated in the slot, as
shown in Figure 2.1. Figure 2.2 shows installation of the
PCI-Express controller. Then screw the bracket into the
computer chassis.
Figure 2.2 Inserting the MegaRAID 320-2E Card in a PCI-Express Slot
Step 5. Set the Target IDs
Set target identifiers (TIDs) on the SCSI devices. Each device
in a channel must have a unique TID. Provide unique TIDs for
non-disk devices (CD-ROM or tapes), regardless of the channel
where they are connected. The MegaRAID 320 Storage
Adapter automatically occupies TID 7, which is the highest
priority. The arbitration priority for a SCSI device depends on its
TID.
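On a wide (16-bit) bus, arbitration priority is not a simple numeric ordering. As a hedged illustration based on the SCSI standards rather than on this manual, the Python sketch below ranks TIDs in the conventional order 7 (highest) down to 0, followed by 15 down to 8 (lowest).

    def arbitration_rank(tid):
        """Return 0 for the highest-priority TID (7), 15 for the lowest (8).

        Wide SCSI priority order: 7, 6, ..., 0, 15, 14, ..., 8.
        """
        if not 0 <= tid <= 15:
            raise ValueError("TID must be 0-15")
        return (7 - tid) if tid <= 7 else (23 - tid)

    # The adapter at TID 7 always wins arbitration:
    print(sorted(range(16), key=arbitration_rank))
    # [7, 6, 5, 4, 3, 2, 1, 0, 15, 14, 13, 12, 11, 10, 9, 8]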
Step 6. Connect the SCSI Cables
System throughput problems can occur if SCSI cables are not
the correct type. To minimize the potential for problems (the
sketch after this list summarizes these checks),
–use cables no longer than 12 meters for Ultra160 and
Ultra320 devices (it is better to use shorter cables,
if possible)
–use the shortest SCSI cables for SE SCSI devices
(no longer than 3 meters for Fast SCSI, no longer than
1.5 meters for an 8-drive Ultra SCSI system, and no
longer than 3 meters for a 6-drive Ultra SCSI system)
–use active termination
–avoid clustering the cable nodes
–note that the cable stub length must be no greater than
0.1 meters (4 inches)
–use high impedance cables
–route SCSI cables carefully.
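These cabling guidelines can be condensed into a simple checklist. The Python sketch below is an illustrative validator, not an LSI tool; the thresholds are taken directly from the list above, and the Ultra SCSI drive-count rules are omitted for brevity.

    def check_scsi_cabling(bus_type, cable_m, stub_m, active_termination):
        """Return a list of warnings for a proposed SCSI cable layout."""
        warnings = []
        if bus_type in ("Ultra160", "Ultra320") and cable_m > 12:
            warnings.append("LVD cable longer than 12 meters")
        if bus_type == "Fast SCSI" and cable_m > 3:
            warnings.append("Fast SCSI cable longer than 3 meters")
        if stub_m > 0.1:
            warnings.append("cable stub longer than 0.1 meters (4 inches)")
        if not active_termination:
            warnings.append("active termination not used")
        return warnings

    print(check_scsi_cabling("Ultra320", 10.0, 0.05, True))   # []
    print(check_scsi_cabling("Ultra320", 15.0, 0.2, False))   # three warnings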
Step 7. Set SCSI Termination
The SCSI bus is an electrical transmission line and must be
terminated properly to minimize reflections and losses. Set
termination at each end of the SCSI cable(s).
For a disk array, set SCSI bus termination so that removing or
adding a SCSI device does not disturb termination. An easy
way to do this is to connect the Storage Adapter to one end of
the SCSI cable and to connect an external terminator module
at the other end of the cable. You can then connect SCSI disk
drives to the connectors between the two ends of the cable. If
necessary, disable termination on the SCSI devices. (This is not
necessary for Ultra320 and Ultra160 SCSI drives.)
Set the termination so that SCSI termination and TermPWR are
intact when any disk drive is removed from a SCSI channel, as
shown in Figure 2.3.
Step 8. Power Up the System
Replace the computer cover, and reconnect the AC power
cords. Turn power on to the host computer. Ensure that the
SCSI devices are powered up at the same time as, or before,
the host computer. If the computer is powered up before a SCSI
device, the device might not be recognized.
During boot, a BIOS message appears.
The firmware takes several seconds to initialize. During this
time, the Storage Adapter scans the SCSI channel(s).
The MegaRAID 320 BIOS Configuration utility prompt times out after
several seconds. The second portion of the BIOS message displays the
MegaRAID 320 Storage Adapter number, firmware version, and cache
SDRAM size. The numbering of the controllers follows the PCI slot
scanning order used by the host mainboard.
If you want to run the MegaRAID Configuration utility or the WebBIOS
utility at this point, press the appropriate keys when this message appears:
Press <CTRL><M> to run MegaRAID Configuration Utility, or
Press <CTRL><H> for WebBIOS
2.4 SCSI Device Cables
For reliable Ultra320 operation, be sure to use an Ultra320-rated SCSI
cable. The internal Ultra320 SCSI cable has built-in low voltage
differential (LVD) and single-ended termination. This built-in feature is
included because most LVD SCSI hard disk drives are not made with
on-board low voltage differential termination.
2.4.1 Internal SCSI Cables
You can connect all internal SCSI devices to the Storage Adapter with
an unshielded, twisted-pair, 68-pin ribbon cable. Some 68-pin internal
cables come with a low voltage differential and single-ended terminator
on one end, which must be farthest from the host adapter. Figure 2.4 and
Figure 2.5 show internal cables with and without a terminator.
Figure 2.4 SCSI Cable – 68-Pin High Density with Terminator
Figure 2.5 SCSI Cable – 68-Pin High Density without Terminator
2.4.2 External SCSI Cables
You must connect all external SCSI devices to the Storage Adapter with
shielded cables. Figures 2.6 through 2.8 are examples of external SCSI
cables. Select the correct 68-pin cable needed to connect your devices.
Figure 2.6 SCSI Cable – 68-Pin VHDCI to 68-Pin VHDCI
2.4.3 Connecting Internal SCSI Devices
This subsection provides step-by-step instructions for connecting internal
SCSI devices. The figures show the MegaRAID 320-2 Storage Adapter,
which has two internal connectors and two external connectors. Refer to
Section 2.4.1, “Internal SCSI Cables,” for examples of internal cables.
Perform the following steps to connect devices.
Step 1. Plug the 68-pin connector on the end of the SCSI ribbon cable
into the internal connector on the host adapter. Figure 2.9
shows how to do this.
Figure 2.9 Connecting an Internal SCSI Cable to the Host Adapter
Step 2. Plug the 68-pin connector on the other end of the cable into
the SCSI connector on the internal device to attach the
SCSI ribbon cable to it. Figure 2.10 shows how to do this. You
can connect other devices if the cable has more connectors.
The Ultra320 SCSI host adapters support up to 15 SCSI
devices connected to each SCSI channel.
Step 3. Ensure that the terminator or terminated device is at the end
of the cable that is farthest from the SCSI host adapter. Refer
to Section 2.3, "Detailed Installation," page 2-3, for details on
SCSI bus termination.
2.4.4 Connecting External SCSI Devices
This subsection provides step-by-step instructions for connecting
external SCSI devices. Refer to Section 2.4.2, “External SCSI Cables,”
for examples of external cables.
Step 1. Plug the 68-pin connector on one end of a shielded external
SCSI cable into the external SCSI connector on the host adapter.
This connector is exposed on the back panel of your computer.
Step 2. Plug the 68-pin connector on the other end of the shielded
external SCSI cable into the SCSI connector on the first
external SCSI device.
Figure 2.11 shows how to connect one external SCSI device. If
you have the correct cable, it matches the external connector.
Figure 2.11 Connecting One External SCSI Device
Step 3. Connect any additional SCSI devices to one another with
shielded external SCSI cables.
2.5 Replacing a Failed Controller with Data in the TBBU
The MegaRAID Transportable Battery Backup Module (TBBU) is a cache
memory module with an integrated battery pack. The battery pack
provides an uninterrupted power source to the cache memory if power is
unexpectedly interrupted while cached data is still present. If the power
failure is the result of the MegaRAID controller itself failing, then the
TBBU can be moved to a new controller and the data recovered. The
replacement controller must have a cleared configuration.
Perform the following steps to replace a failed controller with data in the
transportable battery backup unit.
Step 1. Power down the system and drives.
Step 2. Remove the failed controller from the system.
Step 3. Remove the TBBU from the failed controller.
Step 4. Insert the TBBU into the replacement controller.
Step 5. Insert the replacement controller into the system.
Step 6. Power on the system.
The controller then reads the disk configuration into NVRAM
and flushes cache data to the logical drives.
Resolving a Configuration Mismatch – If the replacement controller
has a previous configuration, a message displays during the power-on
self-test (POST) stating that there is a configuration mismatch. A
configuration mismatch occurs when the configuration data in the
NVRAM and the hard disk drives are different. You need to update the
configuration data in the NVRAM with the data from the hard disk drive.
Perform the following steps to resolve the mismatch.
Step 1. Press <Ctrl> <M> when prompted during bootup to access the
BIOS Configuration Utility.
Step 2. Select Configure→View/Add Configuration.
This gives you the option to view the configuration on both the
NVRAM and the hard disk drive.
Step 3. Select the configuration on disk.
Step 4. Press <Esc> and select YES to update the NVRAM.
Step 5. Exit and reboot.
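Conceptually, resolving the mismatch means preferring the configuration stored on the drives over the one in controller NVRAM, since the drives describe the arrays that actually exist. The Python sketch below models that decision; it is illustrative only, not controller firmware, and the dictionary fields are hypothetical.

    def resolve_mismatch(nvram_config, disk_config):
        """Pick the configuration to keep after a controller replacement.

        The View/Add Configuration path in the BIOS Configuration Utility
        makes this choice interactively; here the on-disk configuration
        simply wins and is written back to NVRAM.
        """
        if nvram_config != disk_config:
            return dict(disk_config)  # update NVRAM from the drives
        return nvram_config

    nvram = {'arrays': 0}
    disks = {'arrays': 2, 'raid_level': 5}
    print(resolve_mismatch(nvram, disks))  # {'arrays': 2, 'raid_level': 5}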
2.6 After Installing the Storage Adapter
After Storage Adapter installation, you must configure the Storage
Adapter and install the operating system driver. The MegaRAID Configuration Software User’s Guide instructs you on the configuration
options and how to set them on your Storage Adapter. The MegaRAID Device Driver Installation User’s Guide provides detailed installation
instructions for operating system drivers.
PCI is a high-speed standard local bus for interfacing I/O components to
the processor and memory subsystems in a high-end PC. The
component height on the top and bottom of the Ultra320 SCSI host
adapters follows the PCI Local Bus Specification, Revision 2.2, and
PCI-X Addendum to the PCI Local Bus Specification, Revision 1.0a. The
MegaRAID 320 Storage Adapters are used in PCI-X and PCI computer
systems with PCI standard and PCI low-profile bracket types.
The MegaRAID 320-2E controller is used in a system with a PCI-Express
slot. PCI-Express goes beyond the PCI specification in that it is intended
as a unifying I/O architecture for various systems: desktops,
workstations, mobile, server, communications, and embedded devices.
Table 3.7 lists the features for the MegaRAID 320 Storage Adapters.
The MegaRAID 320-1 is a single-channel Ultra320 SCSI-to-PCI Storage
Adapter that supports one Ultra320 SCSI channel. The MegaRAID 320-1
SCSI channel interface is made through connectors J1 and J7.
Figure 3.1 and Table 3.1 show the connectors and headers on the
MegaRAID 320-1 Storage Adapter. They include the following:
•BBU daughter card header: Connector for optional backup battery
unit (BBU) located on a daughter card (default - do not change).
See note 1.
•J10 SCSI Bus Termination Enable (3-pin header):
Jumper on pins 1-2: Software uses drive detection to control SCSI
termination (default - do not change).
Jumper on pins 2-3: On-board SCSI termination disabled.
No jumper: On-board SCSI termination enabled.
1. The MegaRAID 320-1 does not have an alarm integrated onto the board. For an alarm, the
controller requires a daughter card with integrated alarm. If you order the daughter card for battery
backup, it should have the alarm on it.
The MegaRAID 320-2 is a dual-channel Ultra320 SCSI-to-PCI Storage
Adapter that supports two Ultra320 SCSI channels. The MegaRAID
320-2X is a dual-channel Ultra320 SCSI-to-PCI-X Storage Adapter that
supports two Ultra320 SCSI channels. The MegaRAID 320-2E is a
dual-channel Ultra320 SCSI-to-PCI-Express Storage Adapter that
supports two Ultra320 SCSI channels.
Figure 3.2 and Table 3.2 show the connectors and headers on the
MegaRAID 320-2 Storage Adapter. Figure 3.3 and Table 3.3 show the
connectors and headers on the MegaRAID 320-2E Storage Adapter.
Figure 3.4 and Table 3.4 show the connectors and headers on the
MegaRAID 320-2X Storage Adapter. The MegaRAID 320-2X headers
and connectors include the following:
•Termination Enable headers (3-pin): Jumper on pins 1-2: Software
uses drive detection to control SCSI termination (default: do not
change). Jumper on pins 2-3: On-board SCSI termination disabled.
No jumper: On-board SCSI termination enabled.
•Internal high-density SCSI bus connector
•External very high-density SCSI bus connector
•Headers reserved for LSI Logic internal use
•Dirty cache LED header: LED glows when the on-board cache
contains data and a write from the cache to the hard drives is
pending.
•J12 BBU Daughter Card (40-pin header): Connector for an optional
back-up battery pack
•J13 SCSI Activity LED (2-pin header): Connector for enclosure LED
to indicate data transfers. Connection is optional.
•J14 External SCSI Channel 1 Connector (68-pin connector): External
very high-density SCSI bus connector
•J16 EEPROM Access Connector (2-pin connector): Reserved for
LSI Logic internal use
•J17 Termination Power Enable Channel 0 (2-pin header): Jumpered:
MegaRAID 320-2X supplies termination power. No jumper: SCSI bus
provides termination power.
•J18 Termination Power Enable Channel 1 (2-pin header): Jumpered:
MegaRAID 320-2X supplies termination power. No jumper: SCSI bus
provides termination power.
•J19 On-Board BIOS Enable (4-pin header, two rows of two pins
each): No jumper: BIOS enabled (default). Jumpers on pins 1/3:
NVSRAM clear. Jumper on pins 2/4: BIOS disable.
•U6 DIMM Socket (DIMM socket): The MegaRAID 320-2X supports
up to 512 Mbytes of 333 MHz unbuffered DDR ECC SDRAM, in a
x72 configuration.
The MegaRAID 320-4X is a quad-channel Ultra320 SCSI-to-PCI-X
Storage Adapter that supports four Ultra320 SCSI channels. Figure 3.5
and Table 3.5 show the connectors and headers on the MegaRAID
320-4X Storage Adapter. They also include a connector for an LED on
the enclosure to indicate data transfers (optional), internal high-density
SCSI bus connectors, and a socket for mounting the DDR SDRAM
DIMM; the MegaRAID 320-4X supports 256 Mbytes of 333 MHz
DDR ECC SDRAM, in a x72 configuration.
Table 3.5 MegaRAID 320-4X Headers and Connectors (Cont.)
Connector | Description | Type | Comments
J5 | External SCSI Channel 0/1 connectors (side-by-side) | 68-pin connector | External very high-density SCSI bus connectors
J21 | External SCSI Channel 2/3 connectors (side-by-side) | 68-pin connector | External very high-density SCSI bus connectors
J6 | Termination Enable Channel 1 | 3-pin header | Jumper on pins 1-2: Software uses drive detection to control SCSI termination (default: do not change). Jumper on pins 2-3: On-board SCSI termination disabled. No jumper: On-board SCSI termination enabled.
J8 | Termination Enable Channel 2 | 3-pin header | Same as J6
J10 | Termination Enable Channel 3 | 3-pin header | Same as J6
J13 | Termination Enable Channel 0 | 3-pin header | Same as J6
J7 | Termination Power Enable Channel 1 | 2-pin header | Jumper installed enables TermPWR from the SCSI bus to the appropriate SCSI channel.
J9 | Termination Power Enable Channel 2 | 2-pin header | Same as J7
J11 | Termination Power Enable Channel 3 | 2-pin header | Same as J7
J14 | Termination Power Enable Channel 0 | 2-pin header | Same as J7
J12 | I2C Connector | 4-pin connector | Reserved for LSI Logic internal use
J15 | Battery Connector (see note 1) | 3-pin header | Connector for an optional battery pack. Pin-1: -BATT terminal (black wire). Pin-2: thermistor (white wire). Pin-3: +BATT terminal (red wire).
J16 | EEPROM Access Connector | 2-pin header | Reserved for LSI Logic internal use
J17 | Write Pending Indicator (Dirty Cache LED) | 2-pin header | Connector for enclosure LED to indicate when data in the cache has yet to be written to the device. Optional.
J19 | Serial Interface for Code Debugging | 3-pin header | Reserved for LSI Logic internal use
J20 | NVRAM Clear | 2-pin connector | Used to clear the contents of the nonvolatile random access memory
J23 | 80321 Initialization Mode Select | 2-pin connector | Reserved for LSI Logic internal use
J24 | On-Board BIOS Enable | 2-pin header | When open, optional system BIOS is enabled; when closed, it is disabled. Status of this jumper can be read through bit 0 at local CPU address 0x9F84.0000.
1. The battery connector is not shipped connected. It is recommended that you connect the cable on
the battery pack to J15 before you install the card.
Table 3.6 shows the general characteristics for all MegaRAID 320
Storage Adapters.
Table 3.6 Storage Adapter Characteristics
Flash ROM (note 1) | Serial EEPROM (note 2) | LVD/SE Signaling | Ultra320 SCSI Data Transfers | SCSI Features
Yes | Yes | LVD/SE | Up to 320 Mbytes/s as well as Fast, Ultra, Ultra2, and Ultra160 speeds; synchronous offsets up to 62 | 16-bit SE or LVD interfaces
1. For boot code and firmware
2. For BIOS configuration storage
Each MegaRAID 320 Storage Adapter ensures data integrity by
intelligently validating the compatibility of the SCSI domain. The Storage
Adapters use Fusion-MPT architecture that allows for thinner drivers and
better performance.
3.3 Technical Specifications
The design and implementation of the MegaRAID 320 Storage Adapters
minimize electromagnetic emissions, susceptibility to radio frequency
energy, and the effects of electrostatic discharge. The Storage Adapters
carry the CE mark, C-Tick mark, FCC Self-Certification logo, Canadian
Compliance Statement, Korean MIC, Taiwan BSMI, and Japan VCCI, and
they meet the requirements of CISPR Class B.
This subsection provides the power requirements for the MegaRAID 320
Storage Adapters. Table 3.10 lists the maximum power requirements,
which include SCSI TERMPWR, under normal operation.
Table 3.10 Maximum Power Requirements
Storage Adapter | PCI/PCI-X/Express +12 V | PCI/PCI-X +5.0 V | PCI/PCI-X/Express +3.3 V | PCI PRSNT1#/PRSNT2# Power | Operating Range
320-1 | 115 mA; used only if battery is present | 1.5 A (PCI only) | N/A | 15 W | 0 °C to 55 °C
320-2 | N/A | 1.5 A (PCI only) | N/A | 15 W | 0 °C to 55 °C
320-2E | 1.4 A without battery; 1.6 A when battery is charging | N/A | 1.5 A | 25 W | 0 °C to 50 °C
320-2X, 320-4X | 0.0 A | 5 A | 0.0 A | 25 W | 0 °C to 55 °C
3.3.5 Thermal and Atmospheric Characteristics
For all MegaRAID 320 Storage Adapters, the thermal and atmospheric
characteristics are:
•relative humidity range: 5% to 90% noncondensing
•maximum dew point temperature: 32 °C
•airflow must be at least 300 linear feet per minute (LFPM) to keep
the LSI53C1020 and LSI53C1030 heat sink temperature below 80 °C
The following parameters define the storage and transit environment for
the MegaRAID 320 Storage Adapters:
•temperature range: −40 °C to +105 °C (dry bulb)
•relative humidity range: 5% to 90% noncondensing
3.3.6 Safety Characteristics
All MegaRAID 320 Storage Adapters meet or exceed the requirements
of UL flammability rating 94 V0. Each bare board is also marked with the
supplier name or trademark, type, and UL flammability rating. For the
This chapter explains how clusters work and how to install and configure
them. It contains the following sections:
•Section 4.1, “Overview”
•Section 4.2, “Benefits of Clusters”
•Section 4.3, “Installing and Configuring Your System as Part of a
Cluster”
•Section 4.4, “Driver Installation Instructions under Microsoft Windows
•Section 4.5, “Installing the Peer Processor Device in a Windows Cluster”
•Section 4.6, “Installing SCSI Drives”
•Section 4.7, “Installing Clusters under Windows 2000”
•Section 4.8, “Installing Clusters under Windows Server 2003”
4.1Overview
A cluster is a grouping of two independent servers that can access the
same shared data storage and provide services to a common set of
clients (servers connected to common I/O buses and a common network
for client access).
2000 Advanced Server”
Note:
The MegaRAID 320-2, -2E, -2X, and -4X Storage Adapters
support clustering; the MegaRAID 320-1 does not.
Logically, a cluster is a single management unit. Any server can provide
any available service to any authorized client. The servers must have
access to the same shared data and must share a common security
model. This generally means that the servers in a cluster have the same
architecture and run the same version of the operating system.
Step 8. Change the initiator ID in the Objects→Adapter→Initiator ID
menu.
For example, you can change the initiator ID to 6. If ID 6 is used
by a disk drive, select a different ID.
Step 9. Power down the first server.
Step 10. Attach the Storage Adapter to the shared array.
Step 11. Configure the first Storage Adapter to the arrays using the
Configure→New Configuration menu.
Important:
Use the entire array size of any created array. Do not create
partitions of different sizes on the RAID arrays from the
BIOS Configuration Utility (<Ctrl><M>); these cannot be
failed over individually when they are assigned drive letters
in Windows 2000 or Windows Server 2003.
Step 12. Follow the on-screen instructions to create arrays and save the
configuration.
Step 13. Repeat steps 5 through 8 for the second Storage Adapter.
Note:
Changing the initiator ID is optional if you had changed the
initiator for Node 1 to 6. The initiator ID for Node 2 remains
7 when the cluster mode is enabled.
Step 14. Power down the second server.
Step 15. Attach the cables for the second Storage Adapter to the shared
enclosure, and power up the second server.
Step 16. If a configuration mismatch occurs, press <Ctrl> <M> to enter
the BIOS Configuration Utility.
Step 17. Go to the Configure→View/Add Configuration menu and view
the disk configuration.
Step 18. Save the configuration.
Step 19. Proceed to the driver installation for a Microsoft cluster.
4.4 Driver Installation Instructions under Microsoft Windows 2000
Advanced Server
After the hardware is set up for the MS cluster configuration, perform the
following procedure to configure the driver under Microsoft Windows
2000 Advanced Server. Note that when the Storage Adapter is added
after a Windows 2000 Advanced Server installation, the operating system
detects it.
Step 1. When the Found New Hardware Wizard screen displays the
detected hardware device, click Next.
Step 2. When the next screen appears, select Search for a Suitable
Driver and click Next.
The Locate Driver Files screen appears.
Step 3. Insert the floppy disk with the appropriate driver for Windows
2000, then select Floppy Disk Drives on the screen and
click Next.
The Wizard detects the device driver on the diskette; the
"Completing the Upgrade Device Driver" Wizard displays the
name of the device.
Step 4. Click Finish to complete the installation.
Step 5. Repeat steps 1 through 4 to install the device driver on the
second system.
4.4.1 Network Requirements
The network requirements for clustering are:
•A unique NetBIOS cluster name
•Five unique, static IP addresses:
–Two addresses are for the network adapters on the
internal network.
–Two addresses are for the network adapters on the
external network.
–One address is for the cluster itself.
•A domain user account for Cluster Service (all nodes must be part
of the same domain)
•Two network adapters for each node – one for connection to the
external network, the other for the node-to-node internal cluster
network. If you do not use two network adapters for each node, your
configuration is unsupported. HCL certification requires a separate
private network adapter.
4.4.2 Shared Disk Requirements
Disks can be shared by the nodes. The requirements for sharing disks
are the following:
•All shared disks, including the quorum disk, must be physically
attached to the shared bus.
•All disks attached to the shared bus must be visible from all nodes.
You can check this at the setup level in the BIOS Configuration Utility,
which is accessed by pressing <Ctrl> <M> during bootup. Refer to
Section 4.6, “Installing SCSI Drives,” page 4-12, for installation
information.
•Each SCSI device must have a unique SCSI identification number
assigned to it, and each device at the end of the bus must be
terminated properly. Refer to the storage enclosure manual for
details on installing and terminating SCSI devices.
•Configure all shared disks as basic (not dynamic).
•Format all partitions on the disks as NTFS.
Important:
Use fault-tolerant RAID configurations for all disks. This
includes RAID levels 1, 5, 10, and 50.
4.5 Installing the Peer Processor Device in a Windows Cluster
Use the procedure in this section to install the peer processor device in
a Windows cluster.
Note:
These steps apply to both Windows 2000 and
Windows Server 2003 clusters.
After the shared drives are configured and both nodes are powered up,
a prompt for another device to be installed appears. This is the peer
controller's initiator ID, and it is installed as the processor device. The
peer processor device for the 320-2 controller is detected as LSI SCSI
320-2. The peer processor devices for the 320-2X and 320-4X
controllers are detected as 320-2X SCSI Processor Device and 320-4X
SCSI Processor Device. Perform the following steps to correctly install
the driver for this device so that the prompt does not display anymore.
Step 1. Using the MegaRAID SCSI 320-2 controller as an example, in
Windows Server 2003, when the peer initiator ID is detected,
the New Hardware Wizard detects the peer initiator as
LSI SCSI 320-2.
The peer initiator in this example, LSI SCSI 320-2, is shown in
Step 5. Select the hardware types based on the following options.
a. For Windows 2000, select Other Devices from the list of
hardware types, then click Next.
b. For Windows Server 2003, select System Devices from the
Common Hardware Types list and click Next.
The next dialog box, shown in Figure 4.4, is used to select the
maker and model of your hardware device and to indicate
whether you have a disk with the driver you want to install.
4.6 Installing SCSI Drives
This information is provided as a generic instruction set for SCSI drive
installations. If the SCSI hard disk vendor’s instructions conflict with the
instructions in this section, always use the instructions supplied by
the vendor.
The SCSI bus listed in the hardware requirements must be configured
prior to installation of Cluster Services. This includes
•Configuring the SCSI devices.
•Configuring the SCSI Storage Adapters and hard disks to work
properly on a shared SCSI bus.
•Properly terminating the bus. The shared SCSI bus must have a
terminator at each end of the bus. It is possible to have multiple
shared SCSI buses between the nodes of a cluster.
In addition to the information on the next page, refer to the
documentation from the SCSI device manufacturer or the SCSI
specifications, which can be ordered from the American National
Standards Institute (ANSI). The ANSI web site contains a catalog that
you can search for the SCSI specifications.
4.6.1 Configuring the SCSI Devices
Each device on the shared SCSI bus must have a unique SCSI ID. Since
most SCSI Storage Adapters default to SCSI ID 7, part of configuring the
shared SCSI bus is to change the SCSI ID on one Storage Adapter to a
different SCSI ID, such as SCSI ID 6. If more than one disk is to be on
the shared SCSI bus, each disk must also have a unique SCSI ID.
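Shared-bus ID assignment is easy to check programmatically. The Python sketch below is illustrative, not LSI software; it assumes the two cluster adapters use IDs 7 and 6 as described above.

    def validate_shared_bus(adapter_ids, disk_ids):
        """Flag SCSI ID conflicts on a shared cluster bus."""
        problems = []
        all_ids = adapter_ids + disk_ids
        if len(all_ids) != len(set(all_ids)):
            problems.append("duplicate SCSI IDs on the shared bus")
        if len(adapter_ids) != len(set(adapter_ids)):
            problems.append("both adapters use the same initiator ID")
        return problems

    # Adapter on node 1 left at ID 7, node 2 changed to ID 6, disks at 0-3:
    print(validate_shared_bus([7, 6], [0, 1, 2, 3]))  # []
    print(validate_shared_bus([7, 7], [0, 1, 2, 3]))  # two problems reported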
4.6.2 Terminating the Shared SCSI Bus
You can connect Y cables to devices if the device is at the end of the
SCSI bus. You can then attach a terminator to one branch of the Y cable
to terminate the SCSI bus. This method of termination requires either
disabling or removing any internal terminators the device has.
Important:
Any devices that are not at the end of the shared bus must
have their internal termination disabled.
4.7 Installing Clusters under Windows 2000
During installation, some nodes are shut down, and other nodes are
rebooted. This ensures uncorrupted data on disks attached to the shared
storage bus. Data corruption can occur when multiple nodes try to write
simultaneously to the same disk that is not yet protected by the
cluster software.
Table 4.1 shows which nodes and storage devices must be powered on
during each step.
Table 4.1 Nodes and Storage Devices
Step | Node 1 | Node 2 | Storage | Comments
Set up Networks | On | On | Off | Ensure that power to all storage devices on the shared bus is turned off. Power on all nodes.
Set up Shared Disks | On | Off | On | Power down all nodes. Next, power on the shared storage, then power on the first node.
Verify Disk Configuration | Off | On | On | Shut down the first node. Power on the second node.
Configure the First Node | On | Off | On | Shut down all nodes. Power on the first node.
Configure the Second Node | On | On | On | Power on the second node after the first node was successfully configured.
Post-installation | On | On | On | All nodes must be active.
Before installing the Cluster Service software, perform the following steps.
Step 1. Install Windows 2000 Advanced Server or Windows 2000
Datacenter Server on each node.
Step 2. Set up networks.
Step 3. Set up disks.
Important:
These steps must be completed on every cluster node
before proceeding with the installation of Cluster Service on
the first node.
To configure the Cluster Service on a Windows 2000-based server, you
must be able to log on as administrator or have administrative
permissions on each node. Each node must be a member server, or
must be a domain controller inside the same domain. A mix of domain
controllers and member servers in a cluster is not supported.
4.7.1 Installing the Microsoft Windows 2000 Operating System
Install the Microsoft Windows 2000 operating system on each node.
Refer to your Windows 2000 manual for information.
Log on as administrator before you install the Cluster Services.
4.7.2 Setting Up Networks
Important: Do not allow both nodes to access the shared storage
device before the Cluster Service is installed. To prevent
this, power down any shared storage devices, then power
up nodes one at a time. Install the Clustering Service on at
least one node, and ensure it is online before you power up
the second node.
Install at least two network card adapters for each cluster node. One
network card adapter is used to access the public network. The
second network card adapter is used to access the cluster nodes.
The network card adapter used to access the cluster nodes establishes
the following:
•Node-to-node communications
•Cluster status signals
•Cluster Management
Ensure that all the network connections are correct. Network cards that
access the public network must be connected to the public network.
Network cards that access the cluster nodes must connect to each other.
Verify that all network connections are correct, with private network
adapters connected only to other private network adapters, and public
network adapters connected only to the public network. View the
Network and Dial-up Connections screen in Figure 4.6 to check the
connections.
Use crossover cables for the network card adapters that
access the cluster nodes. If you do not use the crossover
cables properly, the system does not detect the network
card adapter that accesses the cluster nodes. If the
network card adapter is not detected, you cannot configure
the network adapters during the Cluster Service
installation. However, if you install Cluster Service on both
nodes, and both nodes are powered on, you can add the
adapter as a cluster resource and configure it properly for
the cluster node network in the Cluster Administrator
application.
4.7.3 Configuring the Cluster Node Network Adapter
Note: The wiring determines which network adapter is private and
which is public. For this chapter, the first network adapter
(Local Area Connection) is connected to the public network;
the second network adapter (Local Area Connection 2) is
connected to the private cluster network. This might not be
the case in your network.
Renaming the Local Area Connections – To clarify the network
connection, you can change the name of the Local Area Connection (2).
Renaming helps you identify the connection and correctly assign it.
Perform the following steps to change the name.
Step 1.Right-click on the Local Area Connection 2 icon.
Step 2.Click Rename.
Step 3.In the text box, type:
Private Cluster Connection
and press <Enter>.
Step 4.Repeat steps 1 through 3 to change the name of the public
LAN network adapter to Public Cluster Connection.
The renamed icons look like those in the picture above.
Step 5.Close the Networking and Dial-up Connections window.
The new connection names automatically replicate to other
cluster servers as the servers are brought online.
4.7.4 Setting Up the First Node in Your Cluster
Perform the following steps to set up the first node in your cluster:
Step 1. Right-click My Network Places, then click Properties.
Step 2. Right-click the Private Connection icon.
Step 3. Click Status.
The Private Connection Status window shows the connection
status, as well as the speed of connection.
If the window shows that the network is disconnected, examine
cables and connections to resolve the problem before proceeding.
The network card adapter properties window displays.
Step 9. Set the network adapter speed on the private network to
10 Mbps, rather than the default automated speed selection.
10 Mbps is the recommended setting.
a. Select the network speed from the drop-down list.
Important:
Do not use "Auto detect" as the setting for speed. Some
adapters can drop packets while determining the speed.
b. Set the network adapter speed by clicking the appropriate option,
such as Media Type or Speed.
Step 10. Configure identically all network adapters in the cluster that are
attached to the same network, so they use the same
Duplex Mode, Flow Control, Media Type, and so on.
These settings should stay the same even if the hardware
is different.
Step 11. Click Transmission Control Protocol/Internet Protocol (TCP/IP).
Step 12. Click Properties.
Step 13. Click the radio button for Use the Following IP Address.
Step 14. Enter the IP addresses you want to use for the private network.
Step 15. Type in the subnet mask for the network.
Step 16. Click the Advanced radio button, then select the WINS tab.
Step 17. Select Disable NetBIOS over TCP/IP.
Step 18. Click OK to return to the previous menu. Perform this step for
the private network adapter only.
4.7.5 Configuring the Public Network Adapter
Important: It is strongly recommended that you use static IP
addresses for all network adapters in the cluster. This
includes both the network adapter used to access the
cluster nodes and the network adapter used to access the
LAN (Local Area Network). If you use a dynamic IP address
through DHCP, access to the cluster could be terminated
and become unavailable if the DHCP server goes down or
goes offline.
Use long lease periods to assure that a dynamically assigned IP address
remains valid in the event that the DHCP server is temporarily lost. In all
cases, set static IP addresses for the private network connector. Note
that Cluster Service recognizes only one network interface per subnet.
4.7.6 Verifying Connectivity and Name Resolution
Perform the following steps to verify that the network adapters are
working properly.
Important:
Before proceeding, you must know the IP address for each
network card adapter in the cluster. You can obtain it by
using the IPCONFIG command on each node.
Step 1. Click Start.
Step 2. Click Run.
Step 3. Type:
cmd
in the text box.
Step 4. Click OK.
Step 5. Type:
ipconfig /all
and press Enter.
IP information displays for all network adapters in the machine.
Step 6. If you do not already have the command prompt on your
screen, repeat steps 1 through 4.
Step 7. Type:
ping ipaddress
where ipaddress is the IP address for the corresponding
network adapter in the other node. For example, assume that
the IP addresses are set as shown in Table 4.2:
Table 4.2 Example IP Addresses
Node | Network Name | Network Adapter IP Address
1 | Public Cluster Connection | 192.168.0.171
1 | Private Cluster Connection | 10.1.1.1
2 | Public Cluster Connection | 192.168.0.172
2 | Private Cluster Connection | 10.1.1.2
In this example, you would type:
Ping 192.168.0.172
and
Ping 10.1.1.2
from Node 1.
Then you would type:
Ping 192.168.0.171
and
Ping 10.1.1.1
from Node 2.
To confirm name resolution, ping each node from a client using
the node’s machine name instead of its IP number.
4.7.7 Verifying Domain Membership
All nodes in the cluster must be members of the same domain and must
be capable of accessing a domain controller and a DNS Server. You can
configure them as either member servers or domain controllers. If you
configure one node as a domain controller, configure all other nodes as
domain controllers in the same domain.
The Cluster Service requires a domain user account under which the
Cluster Service can run. Create the user account before installing the
Cluster Service. Setup requires a user name and password. This user
account should not belong to an individual user on the domain.
Perform the following steps to set up a cluster user account.
Step 1. Click Start.
Step 2. Point to Programs, then point to Administrative Tools.
Step 3. Click Active Directory Users and Computers.
Step 4. Click the plus sign (+) to expand the domain name (if it is not
already expanded).
Step 5. Click Users.
Step 6. Right-click Users.
Step 7. Point to New and click User.
Step 8. Type in the cluster name and click Next.
Step 9. Set the password settings to User Cannot Change Password
and Password Never Expires.
Step 10. Click Next, then click Finish to create this user.
Important:
If your company's security policy does not allow the use of
passwords that never expire, you must renew the password
on each node before password expiration. You must also
update the Cluster Service configuration.
Step 11. Right-click Cluster in the left pane of the Active Directory Users
and Computers snap-in.
Step 12. Select Properties from the context menu.
Step 13. Click Add Members to a Group.
Step 14. Click Administrators and click OK. This gives the new user
account administrative privileges on this computer.
Step 15. Close the Active Directory Users and Computers snap-in.
Caution: Ensure that Windows 2000 Advanced Server or
Windows 2000 Datacenter Server and the Cluster Service
are installed and running on one node before you start an
operating system on another node. If the operating system
is started on other nodes before you install and configure
Cluster Service and run it on at least one node, the cluster
disks have a high chance of becoming corrupted.
To continue, power off all nodes. Power up the shared storage devices.
Once the shared storage device is powered up, power up node one.
Quorum Disk – The quorum disk stores cluster configuration database
checkpoints and log files that help manage the cluster. Microsoft makes
the following quorum disk recommendations:
•Create a small partition. Use a minimum of 50 Mbytes as a quorum
disk. Microsoft generally recommends that a quorum disk be
500 Mbytes.
•Dedicate a separate disk for a quorum resource. The failure of the
quorum disk would cause the entire cluster to fail; therefore,
Microsoft strongly recommends that you use a volume on a RAID
disk array.
During the Cluster Service installation, you must provide the drive letter
for the quorum disk. For our example, we use the letter E.
4.7.10 Configuring Shared Disks
Perform these steps to configure the shared disks:
Step 4. Ensure that all shared disks are formatted as NTFS and are
designated as Basic.
If you connect a new drive, the Write Signature and
Upgrade Disk Wizard starts automatically. If this occurs:
1. Click Next to go through the wizard.
The wizard sets the disk to dynamic, but you can deselect
it at this point to set it to Basic.
2. To reset the disk to Basic, right-click Disk # (where #
identifies the disk that you are working with) and click
Revert to Basic Disk.
Step 5. Right-click unallocated disk space.
Step 6. Click Create Partition….
The Create Partition Wizard begins.
Step 7. Click Next twice.
Step 8. Enter the desired partition size in Mbytes or change it if desired,
but each node's drive letters must match.
Step 9. Click Next.
Step 10. Accept the default drive letter assignment by clicking Next.
Step 11. Click Next to format and create a partition.
4.7.11 Assigning Drive Letters
After you have configured the bus, disks, and partitions, you must assign
drive letters to each partition on each clustered disk. Perform the
following steps to assign drive letters.
Important:
Mountpoints is a feature of the file system that lets you
mount a file system using an existing directory without
assigning a drive letter. Mountpoints is not supported on
Windows 2000 clusters. Any external disk that is used as a
cluster resource must be partitioned using NTFS partitions
and must have a drive letter assigned to it.
Step 1. Right-click the desired partition and select Change Drive Letter
and Path.
Step 5. Power down node 1 and boot to node 2 to verify the drive letters.
4.7.12 Verifying Disk Access and Functionality
Perform these steps to verify disk access and functionality:
Step 1. Click Start.
Step 2. Click Programs.
Step 3. Click Accessories, then click Notepad.
Step 4. Type some words into Notepad and use the File/Save As command to save the file as test.txt. Close Notepad.
Step 5. Double-click the My Documents icon.
Step 6. Right-click test.txt and click Copy.
Step 7. Close the window.
Step 8. Double-click My Computer.
Step 9. Double-click a shared drive partition.
Step 10. Click Edit and click Paste.
A copy of the file should now exist on the shared disk.
Step 11. Double-click test.txt to open it on the shared disk.
Step 12. Close the file.
Step 13. Highlight the file, then press the Del key to delete it from the clustered disk.
Step 14. Repeat the process for all clustered disks to ensure they can
be accessed from the first node.
After you complete the procedure, shut down the first node, power on the
second node, and repeat the procedure above. Repeat again for any
additional nodes. After you have verified that all nodes can read and
write from the disks, turn off all nodes except the first, and continue with
this guide.
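The same write/read/delete test can also be scripted rather than performed through Notepad. The Python sketch below is illustrative only; the SHARED_DRIVES list is an assumption and must be replaced with the drive letters actually assigned to your clustered disks.

    # Sketch only: write, read back, and delete a small file on each shared
    # drive to confirm the node can access the clustered disks.
    import os

    SHARED_DRIVES = ["E:\\", "F:\\"]   # hypothetical shared drive letters

    for drive in SHARED_DRIVES:
        path = os.path.join(drive, "test.txt")
        with open(path, "w") as f:
            f.write("cluster disk access test\n")
        with open(path) as f:
            assert f.read() == "cluster disk access test\n"
        os.remove(path)
        print(f"{drive} is readable and writable")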
Important: If drive letters were changed, make sure they correspond
on each node.
Before you begin the Cluster Service Software installation on the first
node, ensure that all other nodes are either powered down or stopped
and that all shared storage devices are powered on.
To create the cluster, you must provide the cluster information. The
Cluster Configuration Wizard lets you input this information. To use the
Wizard, perform these steps:
Step 1. Click Start.
Step 2. Click Settings, then click Control Panel.
Step 3. Double-click Add/Remove Programs.
Step 4. Double-click Add/Remove Windows Components.
Step 5. Select Cluster Service, then click Next.
Step 6. The Cluster Service files are located on the Windows 2000 Advanced Server or Windows 2000 Datacenter Server CD-ROM.
Step 7. Enter x:\i386 (where x is the drive letter of your CD-ROM drive). If you installed Windows 2000 from a network, enter the appropriate network path instead. (If the Windows 2000 Setup splash screen displays, close it.)
Step 8. Click OK.
The Cluster Service Configuration window displays.
Step 9. Click Next.
The Hardware Configuration Certification window appears.
Step 10. Click I Understand to accept the condition that Cluster Service
is supported only on hardware listed on the Hardware
Compatibility List.
This is the first node in the cluster; therefore, you must create
the cluster.
Step 11. Select the first node in the cluster in the dialog box that displays.
The Windows 2000 Managed Disks dialog box displays all SCSI disks, as shown in
Figure 4.9. It might display SCSI disks that do not reside on the same
bus as the system disk. Because of this, a node that has multiple SCSI
buses lists SCSI disks that are not to be used as shared storage. You
must remove any SCSI disks that are internal to the node and not to be
shared storage.
The Add or Remove Managed Disks dialog box (Figure 4.9) specifies
disks on the shared SCSI bus that are used by Cluster Service.
Perform the following steps to configure the clustered disks:
Step 1. Add or remove disks as necessary, then click Next.
The Configure Cluster Networks dialog box displays, as shown
in Figure 4.10.
In production clustering scenarios, you must use more than one
private network for cluster communication; this avoids having a
single point of failure. Cluster Service can use private networks
for cluster status signals and cluster management. This
provides more security than using a public network for these
roles. In addition, you can use a public network for cluster
management, or you can use a mixed network for both private
and public communications.
Verify that at least two networks are used for cluster
communication. Using a single network for node-to-node
communication creates a potential single point of failure. We
recommend that you use multiple networks, with at least one
network configured as a private link between nodes and other
connections through a public network. If you use more than one
private network, ensure that each uses a different subnet, as
Cluster Service recognizes only one network interface per subnet.
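As a quick illustration of the one-interface-per-subnet rule, the following Python sketch checks that a list of private interface addresses falls on distinct subnets. The addresses shown are hypothetical examples.

    # Sketch only: confirm that each private cluster interface sits on a
    # distinct subnet, since Cluster Service recognizes only one network
    # interface per subnet.
    import ipaddress

    private_interfaces = ["10.1.1.1/24", "10.1.2.1/24"]   # assumed addresses

    subnets = [ipaddress.ip_interface(a).network for a in private_interfaces]
    if len(set(subnets)) != len(subnets):
        print("Two private interfaces share a subnet; reassign addresses.")
    else:
        print("All private interfaces are on distinct subnets.")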
This document assumes that only two networks are in use. It
describes how you can configure these networks as one mixed
and one private network.
The order in which the Cluster Service Configuration Wizard
presents these networks can vary. In this example, the public
network is presented first.
Step 3. Verify that the network name and IP address correspond to the network interface for the public network.
Step 4. Select Enable This Network for Cluster Use.
Step 5. Select the option All Communications (Mixed Network) and click Next.
The next dialog box configures the private network, as shown
in Figure 4.12. Make sure that the network name and IP
address correspond to the network interface used for the
private network.
Figure 4.12 Network Connections Dialog Box
Step 6. Select Enable This Network For Cluster Use.
Step 7. Select the option Internal Cluster Communications Only.
In this example, both networks are configured so that they can
be used for internal cluster communication. The next dialog
window offers an option to modify the order in which the
networks are used. Because Private Cluster Connection
represents a direct connection between nodes, it remains at the
top of the list.
In normal operation, this connection is used for cluster
communication. In case of the Private Cluster Connection
failure, Cluster Service automatically switches to the next
network on the list (in this case, Public Cluster Connection).
The Internal Cluster Communication dialog box displays next,
as shown in Figure 4.13.
Figure 4.13 Internal Cluster Communication Dialog Box
Step 8. Verify that the first connection in the list is the Private Cluster Connection. Always set the order of the connections so that the Private Cluster Connection is first in the list.
The Cluster IP Address dialog box displays next, as shown in
Figure 4.14.
Figure 4.14 Cluster IP Address Dialog Box
Step 9. Enter the unique cluster IP address and subnet mask for your
network, then click Next.
The Cluster Service Configuration Wizard automatically
associates the cluster IP address with one of the public or mixed
networks. It uses the subnet mask to select the correct network.
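This selection rule can be illustrated with the standard Python ipaddress module: the cluster IP address is associated with whichever configured network contains it under its subnet mask. All addresses below are hypothetical.

    # Sketch only: the cluster IP belongs to whichever network contains it,
    # as determined by the subnet mask.
    import ipaddress

    cluster_ip = ipaddress.ip_address("192.168.1.10")   # assumed cluster IP
    networks = {
        "Public Cluster Connection":
            ipaddress.ip_network("192.168.1.0/255.255.255.0"),
        "Private Cluster Connection":
            ipaddress.ip_network("10.1.1.0/255.255.255.0"),
    }

    for name, net in networks.items():
        if cluster_ip in net:
            print(f"Cluster IP {cluster_ip} is associated with {name}")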
The final wizard dialog box displays.
Step 10. Click Finish to complete the cluster configuration on the first node.
The Cluster Service Setup Wizard completes the setup process
for the first node by copying the files needed to complete the
installation of Cluster Service.
After the files are copied, the Cluster Service registry entries
are created, the log files on the quorum resource are created,
and the Cluster Service is started on the first node.
Use the Cluster Administrator snap-in to validate the Cluster Service
installation on the first node.
To validate the cluster installation:
Step 1. Click Start.
Step 2. Click Programs.
Step 3. Click Administrative Tools.
Step 4. Click Cluster Administrator.
The Cluster Administrator screen displays. If the new cluster and its resources appear in the snap-in window, Cluster Service was successfully installed on the first node. You are now ready to install Cluster Service on the second node.
For this procedure, have node one and all shared disks powered on, then
power up the second node.
Installation of Cluster Service on the second node takes less time than
on the first node. Setup configures the Cluster Service network settings
on the second node based on the configuration of the first node.
Installation of Cluster Service on the second node begins the same way
as installation on the first node. The first node must be running during
installation of the second node.
Follow the same procedures used to install Cluster Service on the first
node, with the following differences:
Step 1. In the Create or Join a Cluster dialog box, select The Second or Next Node in the Cluster, then click Next.
Step 2. Enter the cluster name that was previously created (in this example, ClusterOne) and click Next.
Step 3. Leave Connect to Cluster unselected.
The Cluster Service Configuration Wizard automatically
supplies the name of the user account selected when you
installed the first node. Always use the same account you used
when you set up the first cluster node.
Step 4. Enter the password for the account (if there is one), then click Next.
Step 5. At the next dialog box, click Finish to complete the configuration.
The Cluster Service starts.
Step 6. Click OK.
Step 7. Close Add/Remove Programs.
Step 8. If you install additional nodes, repeat the preceding steps to install Cluster Service on each of them.
This option moves the group and all its resources to another
node. Disks F: and G: are brought online on the second node.
Watch the screen to see this change.
Step 3. Close the Cluster Administrator snap-in.
This completes Cluster Service installation on all nodes. The server cluster is fully operational. Now you can install cluster resources such as file shares and printer spoolers, cluster-aware services like IIS, Message Queuing, Distributed Transaction Coordinator, DHCP, and WINS, or cluster-aware applications like Exchange or SQL Server.
4.8 Installing Clusters under Windows Server 2003
The preparation for the Windows Server 2003 Cluster Service follows the same guidelines as that of the Windows 2000 Cluster Service. The following is assumed to have already been done:
•Installation of the controller and configuration of the controller for cluster operation. Refer to the procedure to install and configure your system as part of a cluster earlier in this chapter.
•The Windows Server 2003 driver for the RAID controller has been
installed. The procedures are similar to those in Section 4.4, “Driver
Installation Instructions under Microsoft Windows 2000 Advanced
Server” in this chapter.
•Network requirements have been met.
•Shared disk requirements have been met.
4.8.1 Cluster Service Software Installation
Before you begin the Cluster Service Software installation on the first
node, make sure that all other nodes are either powered down or
stopped and all shared storage devices are powered on.
4.8.2 Installation Checklist
This checklist helps you prepare for installation. Step-by-step instructions
begin after the checklist.
Software Requirements – The following are required for software
installation:
•Microsoft Windows Server 2003 Enterprise Edition or Windows Server
2003 Datacenter Edition installed on all computers in the cluster
•A name resolution method such as Domain Name System (DNS),
DNS dynamic update protocol, Windows Internet Name Service
(WINS), HOSTS, and so on
•An existing domain model
•All nodes must be members of the same domain
•A domain-level account that is a member of the local administrators
group on each node. A dedicated account is recommended.
Network Requirements –
•A unique NetBIOS name
•Static IP addresses for all network interfaces on each node
Note: Server Clustering does not support the use of IP addresses assigned from Dynamic Host Configuration Protocol (DHCP) servers.
•Access to a domain controller. If the cluster service is unable to authenticate the user account used to start the service, the cluster could fail. It is recommended that you have a domain controller on the same local area network (LAN) as the cluster to ensure availability.
•Each node must have at least two network adapters – one for connection to the client public network and the other for the node-to-node private cluster network. A dedicated private network adapter is required for HCL certification.
•All nodes must have two physically independent LANs or virtual LANs for public and private communication.
•If you are using fault-tolerant network cards or network adapter teaming, verify that you are using the most recent firmware and drivers. Check with your network adapter manufacturer for cluster compatibility.
Shared Disk Requirements –
•An HCL-approved external disk storage unit connected to all
computers. This is used as the clustered shared disk.
•All shared disks, including the quorum disk, must be physically
attached to a shared bus.
•Shared disks must be on a different controller than the one used by the system drive.
•Creating multiple logical drives at the hardware level in the RAID
configuration is recommended rather than using a single logical disk
that is then divided into multiple partitions at the operating system
level. This is different from the configuration commonly used for
stand-alone servers. However, it enables you to have multiple disk
resources and to do Active/Active configurations and manual load
balancing across the nodes in the cluster.
•A dedicated disk with a minimum size of 50 megabytes (MB) to use
as the quorum device. A partition of at least 500 MB is
recommended for optimal NTFS file system performance.
•Verify that disks attached to the shared bus can be seen from all
nodes. This can be checked at the host adapter setup level.
•SCSI devices must be assigned unique SCSI identification numbers and must be properly terminated. (A minimal ID-uniqueness check appears in the sketch after this list.)
•All shared disks must be configured as basic disks.
•Software fault tolerance is not natively supported on cluster
shared disks.
•All shared disks must be configured as master boot record (MBR)
disks on systems running the 64-bit versions of Windows Server 2003.
•All partitions on the clustered disks must be formatted as NTFS.
•Hardware fault-tolerant RAID configurations are recommended for
all disks.
•A minimum of two logical shared drives is recommended.
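As referenced in the checklist above, the following Python sketch is a minimal illustration of the SCSI ID requirement: every device on the shared bus, including the host adapters, must hold a unique ID within the 0-15 range of a wide SCSI bus. The ID assignments shown are hypothetical.

    # Sketch only: sanity-check that planned SCSI IDs on the shared bus are
    # unique and within the valid 0-15 range for a wide SCSI bus.
    assigned_ids = {
        "node 1 adapter": 7,   # hypothetical assignments
        "node 2 adapter": 6,
        "shared disk 0": 0,
        "shared disk 1": 1,
    }

    ids = list(assigned_ids.values())
    assert all(0 <= i <= 15 for i in ids), "SCSI IDs must be 0-15"
    assert len(set(ids)) == len(ids), "SCSI IDs on a shared bus must be unique"
    print("SCSI ID assignments are unique and in range.")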
4.8.4 Steps for Configuring the Shared Disks under Windows Server 2003
Windows Server 2003 disk management is similar to that of Windows 2000 Advanced Server; however, take care to ensure that the partitions are correctly created for cluster installation and that drive letters are assigned correctly. Perform the following steps to configure the shared disks under Windows Server 2003. Start on node 1 and open Disk Management; node 2 is powered off at this point.
Step 1. Start Computer Management, shown in Figure 4.17, then select Disk Management.
Figure 4.17 Computer Management Screen
After selecting Disk Management, if there are any unconfigured disks,
the Initialize and Convert Disk Wizard appears.
Step 3. Select the disks to initialize on the Select Disks to Initialize screen, then click Next.
The Select Disks to Convert screen displays next. Do not select any disks to convert, because only basic disks are used by the cluster service.
Step 4. On the Select Disks to Convert screen, click Next.
The Disk Management screen displays, as shown in Figure 4.19. After the shared disks have been initialized in the operating system, they appear at the Disk Management screen as unallocated space, on which you can then create new partitions.