warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP
shall not be liable for technical or editorial errors or omissions contained herein.
Microsoft, Windows, and Windows NT are U.S. registered trademarks of Microsoft Corporation. Windows Server 2003 is a trademark of
Microsoft Corporation. Intel, Pentium, and Itanium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United
States and other countries. UNIX is a registered trademark of The Open Group.
September 2006 (Second Edition)
Part Number 364780-002
Audience assumptions
This document is for the person who installs, administers, and troubleshoots servers and storage systems.
HP assumes you are qualified in the servicing of computer equipment and trained in recognizing hazards
in products with hazardous energy levels.
Contents
HP ProLiant Cluster F500 DT for Enterprise Virtual Array overview...................................................... 5
HP ProLiant Cluster F500 for EVA introduction .............................................................................................. 5
Disaster tolerance for EVA..........................................................................................................................5
HP ProLiant Cluster F500 for EVA overview ..................................................................................................5
EVA basic DT configuration ........................................................................................................................ 6
EVA bidirectional DT configuration .............................................................................................................. 7
EVA maximum DT configuration .................................................................................................................. 8
Setting up the ProLiant Cluster F500 for Enterprise Virtual Array......................................................... 9
Installing the hardware............................................................................................................................... 9
Preparing the F500 for continuous access EVA hardware installation ...................................................10
Setting up the servers ..................................................................................................................... 10
Setting up the storage subsystem .....................................................................................................10
Setting up the Fibre Channel adapters.............................................................................................. 10
Setting up the Fibre Channel switches at both locations, if applicable................................................... 11
Connecting the controllers to the switches ......................................................................................... 11
Connecting the host to the switches.................................................................................................. 13
Data Replication Manager........................................................................................................................ 24
Installing the hardware............................................................................................................................. 24
Setting up the servers for MA8000 .................................................................................................. 25
Setting up the storage subsystem .....................................................................................................25
Setting up the host bus adapters ...................................................................................................... 25
Designating the server as a maintenance terminal.............................................................................. 25
Setting up the Fibre Channel switches at both locations ...................................................................... 25
HP ProLiant Cluster F500 DT for Enterprise Virtual Array overview
In this section
HP ProLiant Cluster F500 for EVA introduction ............................................................................................. 5
Disaster tolerance for EVA......................................................................................................................... 5
HP ProLiant Cluster F500 for EVA overview................................................................................................. 5
EVA basic DT configuration....................................................................................................................... 6
EVA bidirectional DT configuration............................................................................................................. 7
EVA maximum DT configuration................................................................................................................. 8
HP ProLiant Cluster F500 for EVA introduction
This guide provides supplemental information for setting up an HP ProLiant Cluster F500 Disaster Tolerant
for EVA configuration using HP StorageWorks Continuous Access EVA software. This guide serves as a
link between the various clustering guides needed to complete a DT cluster installation. Other guides
include:
• HP ProLiant Cluster F500 Installation Guide
• Best Practices Guide — ProLiant Cluster HA/F500 for Enterprise Virtual Array (HSV100/HSV110) Using Microsoft Windows 2000 Advanced Server and Microsoft Windows Server 2003, Enterprise Edition
• HP StorageWorks Continuous Access EVA Design Reference Guide
For the latest version of the reference guide and other Continuous Access EVA documentation, access the
HP storage website (http://h18006.www1.hp.com/storage/index.html).
Disaster tolerance for EVA
Disaster-tolerant solutions provide high levels of availability with rapid data access recovery, no single
point of failure, and continued data processing after the loss of one or more system components in a
cluster configuration. Data is simultaneously written to both local and remote sites during normal
operation. The local site is known as the source site because it is in control of the operation. The remote
site is known as the destination site because it is where the information is copied.
Copied data resides at both the source and destination sites. However, in base DT cluster configurations
under normal conditions, host data access occurs only through the source site. Processing will migrate to
the destination site and continue normal operation if a component failure or a catastrophe occurs at the
source site.
HP ProLiant Cluster F500 for EVA overview
The HP ProLiant Cluster F500 DT for EVA configuration is a two-node cluster for Microsoft® Windows®
2000 Advanced Server or a two-to-eight-node cluster for Microsoft® Windows® Server 2003, Enterprise
Edition. The DT configurations use HP ProLiant servers, HP StorageWorks storage subsystems, HP
StorageWorks Continuous Access software, HP OpenView software, and HP StorageWorks Secure Path
software.
This solution combines the failover functionality of Microsoft® Cluster Server with the remote data
mirroring functionality of Continuous Access. This solution also allows for a distance of up to 100 km
between a primary (local) external storage subsystem and a mirrored (remote) external storage subsystem.
The server-to-storage connection is based on Fibre Channel switches using shortwave connections; server-to-server communication uses Ethernet over FDDI or FCIP connections. The extended Continuous
Access EVA-over-IP configuration is similar to the simple Continuous Access EVA configuration except for
the use of Fibre Channel-to-IP gateways. Two gateways are required at each site, one per fabric, for a
total of four per solution, dedicated to that solution. When multiport gateways become available, each
port must be dedicated to another single port.
The ProLiant server nodes in the cluster are connected or stretched over a distance. Up to two storage
subsystems for FCIP connections and four storage subsystems for non-FCIP connections can be used at one
site. These storage subsystems act as the source sites for the Continuous Access software and process disk
subsystem requests for all nodes in the cluster. The storage subsystems are connected to the server nodes
by means of redundant Fibre Channel connections that are managed by Secure Path. Additionally, the
source storage subsystems are connected by means of redundant longwave fiber/FCIP connections to the
destination site. As with standard ProLiant clusters using Microsoft® operating systems, MSCS manages
failovers at the server and application levels.
The Continuous Access software functions at the redundant storage controller level in each of the storage
subsystems and performs synchronous mirroring from the source site to the destination site, creating an
exact copy of the cluster-shared disk. A manual recovery process performed at the destination site enables
access to the mirrored data in the event of a disaster at the source site. The cluster is functional again
within minutes, and application processing can continue.
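As a sketch of the synchronous mirroring behavior described above (the site objects and write path are illustrative, not Continuous Access internals):

```python
def synchronous_write(block: bytes, source: list, destination: list) -> str:
    """Sketch of synchronous mirroring: the host write is acknowledged
    only after both the source and destination copies are committed."""
    source.append(block)       # local commit at the source site
    destination.append(block)  # remote commit before the host sees an ack
    return "ack"

source_site, destination_site = [], []
synchronous_write(b"cluster-shared-disk-block", source_site, destination_site)
# Both sites now hold an exact copy of the written data.
print(source_site == destination_site)  # True
```

This is what allows the manual recovery process at the destination site to resume processing against an exact copy of the cluster-shared disk.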
The F500 DT for EVA cluster requires two types of links: network and storage. The first requirement is at
least two network links between the servers. MSCS uses the first network link as a dedicated private
connection to pass heartbeat and cluster configuration information between the servers. The second
network link is a public network connection that clients use to communicate with the cluster nodes. The DT
cluster configuration can use any network card that is supported by Microsoft® operating systems.
However, MSCS requires that the dedicated private link and public link between the servers be located
on different TCP/IP subnets.
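The different-subnet requirement for the private and public links can be checked mechanically. The addresses below are purely illustrative:

```python
import ipaddress

def on_different_subnets(ip_a: str, ip_b: str) -> bool:
    """Return True if two interface addresses (CIDR notation)
    fall on different IP networks, as MSCS requires for the
    private heartbeat link and the public client link."""
    net_a = ipaddress.ip_interface(ip_a).network
    net_b = ipaddress.ip_interface(ip_b).network
    return net_a != net_b

# Hypothetical layout: heartbeat NIC vs. public NIC.
print(on_different_subnets("10.0.0.1/24", "192.168.1.1/24"))    # True: valid
print(on_different_subnets("192.168.1.1/24", "192.168.1.2/24")) # False: same subnet
```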
Because typical network topologies, such as 100-Mb Ethernet, cannot normally meet this criterion over the
longer distances used in a DT cluster, another topology, such as FDDI or FCIP, must be used. FDDI
network cards can be used in each server in place of the standard Ethernet NICs, or standard Ethernet
NICs can be used to connect to an FDDI concentrator/FCIP switch that will connect the two sites.
The second requirement is the storage link between the storage subsystems. The servers are connected to
the local storage systems, using multimode fiber optic cable. Each storage subsystem is connected to two
Fibre Channel switches at each site in a redundant path configuration using multimode fiber optic cable.
The switches at one site are connected to the switches at the other site by means of single-mode fiber optic
or FCIP connections.
EVA basic DT configuration
The basic DT configuration includes a second destination storage subsystem that mirrors the data on the
source storage subsystem. The basic DT configuration consists of:
• Two ProLiant servers as cluster nodes
• Two storage subsystems
• Four HP StorageWorks Fibre Channel SAN switches (for a current list of supported switches, refer to the High Availability website (http://www.hp.com/servers/proliant/highavailability))
• Two FCAs in each server
• Two storage management appliances, at least one for each site
A basic DT cluster configuration consists of two separated nodes. The two nodes, plus the source storage
subsystem, form an MSCS cluster.
EVA bidirectional DT configuration
The bidirectional DT configuration enables a source subsystem to also be configured as a destination
subsystem. The bidirectional DT configuration consists of:
• Two ProLiant servers as cluster nodes
• Two storage subsystems
• Four HP StorageWorks Fibre Channel switches
• Two FCAs in each server
• Two SMAs, at least one for each site
NOTE: Refer to the HP StorageWorks Continuous Access EVA Design Reference Guide for complete lists of
supported equipment.
A bidirectional DT configuration consists of two separated nodes. As in the basic DT configuration, data
at the first site is mirrored on a second storage subsystem at the second site. Two storage systems can
communicate bidirectionally, meaning that a storage system can be used as the primary source for data
and as a destination for data replication. By providing redundant systems and software, as well as
alternate paths for data flow, high availability and disaster tolerance are achieved with no single point of failure.
EVA maximum DT configuration
Refer to the High Availability website (http://www.hp.com/servers/proliant/highavailability) or HP
StorageWorks Continuous Access EVA Design Reference Guide for maximum configuration information.
Setting up the ProLiant Cluster F500 for Enterprise Virtual Array
Installing the hardware ............................................................................................................................. 9
Configuring the software......................................................................................................................... 13
Pre-presenting the destination VDs to cluster nodes ..................................................................................... 19
Required materials
To configure an F500 DT cluster for Continuous Access, you will need any applicable documents listed in
the "Related Documents" section of the HP Continuous Access EVA Getting Started Guide. Many of the
referenced documents will be online.
The following is a short list of essential documents.
• HP Continuous Access EVA Getting Started Guide
• HP ProLiant Cluster F500 Installation Guide
• HP StorageWorks Continuous Access EVA Design Reference Guide
• HP StorageWorks Continuous Access User Interface Installation Guide
• HP StorageWorks Continuous Access User Interface Release Notes
• HP StorageWorks Command View EVA documentation, including online help
• HP StorageWorks Continuous Access Management User Interface online help
• HP StorageWorks Secure Path for Windows installation guide
• Fibre Channel SAN Switch installation and hardware guide
Continuous Access
Refer to the HP StorageWorks Continuous Access EVA Operations Guide for detailed information on
Continuous Access, including any restrictions.
Installing the hardware
Depending on the size of your SAN and the considerations used in designing it, many different hardware
configurations are possible. Refer to the HP StorageWorks Continuous Access Enterprise Virtual Array
Design Reference Guide for a detailed description of various hardware configurations.
Set up the cluster using the following procedures:
1. "Preparing the F500 for continuous access EVA hardware installation (on page 10)"
2. "Setting up the servers (on page 10)"
3. "Setting up the storage subsystem (on page 10)"
4. "Setting up the Fibre Channel adapters (on page 10)"
5. "Setting up the Fibre Channel switches at both locations, if applicable (on page 11)"
6. "Connecting the controllers to the switches (on page 11)"
7. "Connecting the host to the switches (on page 13)"
8. "Zoning recommendations (on page 13)"
9. "Setting Up a Bidirectional Solution (on page 13)"
Preparing the F500 for continuous access EVA hardware installation
Task: Set up EVA storage system hardware.
Reference: HP StorageWorks Enterprise Virtual Array User Guide

Task: Make fabric connections:
• Connect GBICs or small form factor pluggables (SFPs) to switch ports.
• Connect the HSV controller pair, SMA, and hosts to the Fibre Channel fabrics.
Reference: HP StorageWorks Continuous Access EVA Operations Guide; Compaq SANworks Management Appliance Getting Started Guide

Task: Install intersite links and connect to switches with GBICs or SFPs. Test intersite links.
Reference: HP StorageWorks SAN Extension Using FCIP Configuration Guide

Task: Install host kits and device drivers, if required. Upgrade drivers if necessary.
Reference: OS-specific kit version 3.x for EVA installation and configuration guide

Task: Install Fibre Channel adapters.
Reference: FCA documentation

Task: Create separate management zones for any HSG80 systems in your SAN.
Reference: HP StorageWorks SAN Design Reference Guide

Task: Plan and populate the layout of one or more physical disk groups.
Reference: HP StorageWorks Continuous Access EVA Operations Guide

Task: Power up the storage systems and SMAs.
Reference: HP ProLiant Cluster F500 Installation Guide; Compaq SANworks Management Appliance Getting Started Guide
Setting up the servers
To prepare the Continuous Access solution for setup, refer to the ProLiant cluster documentation. Follow
the installation instructions in the ProLiant server documentation.
Setting up the storage subsystem
Refer to the documentation that was shipped with the storage subsystem for detailed installation
instructions.
Setting up the Fibre Channel adapters
Two FCAs must be installed on each host. For detailed installation instructions, refer to the documentation
that comes in the host kit or with your adapter.
Locate and record the World Wide Name (WWN) of each FCA on the zoning worksheet. Keep a copy of the worksheets at all your sites. In addition, record the WWNs for the EVA and the SMAs at each site.
NOTE: The WWN can be found on the bottom of the adapter board. Look for a small bar code label with
an IEEE precursor. A WWN example is 1000-0000-C920-A5BA.
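When transcribing WWNs onto the zoning worksheet, a quick format check can catch typos. The four-groups-of-four-hex-digits pattern below is assumed from the label example above:

```python
import re

# Uppercase hex, four groups of four separated by hyphens,
# e.g. 1000-0000-C920-A5BA (pattern assumed from the label example).
WWN_LABEL = re.compile(r"^[0-9A-F]{4}(-[0-9A-F]{4}){3}$")

def looks_like_wwn(text: str) -> bool:
    """Return True if the string matches the labeled WWN format."""
    return WWN_LABEL.match(text) is not None

print(looks_like_wwn("1000-0000-C920-A5BA"))  # True
print(looks_like_wwn("1000-0000-C920"))       # False: a group is missing
```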
Setting up the Fibre Channel switches at both locations, if applicable
NOTE: Both Fibre Channel switches can be configured from the same site.
Your Fibre Channel switches must be installed and configured with two working redundant fabrics before
you connect the remaining Continuous Access EVA components to your fabrics. For information on the
specific switches used and GBICs needed, refer to the HP website (http://h18006.www1.hp.com/storage/saninfrastructure.html).
Connecting the controllers to the switches
Before connecting fiber optic cables between storage components, HP recommends that you tag each end
to identify switch names, port numbers, controller names, and so on.
Four fiber optic cable connections are required for each controller pair. The only supported connection scheme is shown below. Connect the fiber optic cables such that port 1 of controller A and port 1 of controller B go to different fabrics. Likewise, connect port 2 of each controller to the fabric opposite the one used by port 1 on that controller.
The basic rule is that the first or left-hand port on the top controller is cabled to the first fabric and the
other port of the same controller is cabled to the other fabric. The other (bottom) controller is cabled such
that the left-hand port is attached to the second fabric, while the second port is cabled to the first fabric,
the opposite of the first (top) controller. Even though it does not matter which switch ports are used,
symmetry is recommended.
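The cabling rule above can be checked mechanically. The fabric numbering and dictionary layout are illustrative:

```python
def cabling_supported(cabling: dict) -> bool:
    """Check the controller-to-fabric rule: each controller's two ports
    attach to different fabrics, and the two controllers' port 1s
    attach to different fabrics (so the port 2s differ as well)."""
    a1, a2 = cabling["A"]  # (fabric of port 1, fabric of port 2) on controller A
    b1, b2 = cabling["B"]
    return a1 != a2 and b1 != b2 and a1 != b1

# Supported: top controller 1->fabric1, 2->fabric2; bottom controller mirrored.
print(cabling_supported({"A": (1, 2), "B": (2, 1)}))  # True
# Not supported: both port 1s share a fabric.
print(cabling_supported({"A": (1, 2), "B": (1, 2)}))  # False
# Not supported: both ports of a controller on the same fabric.
print(cabling_supported({"A": (1, 1), "B": (2, 2)}))  # False
```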
Either controller can be controller A or controller B. In a storage system that has not been configured, the
first controller that powers up and passes a self-test becomes controller A. Also, under certain conditions,
controller A and controller B can have their designations reversed.
Any other controller-to-fabric cabling scheme is not supported. The cabling in the following example is not
supported because both port 1s share the same fabric and both port 2s share the same fabric.
The cabling in the following example is not supported because both ports of a controller are on the same
fabric, limiting failover options when there is a fabric issue.
You can control the power sequence of the controllers in your storage system, thereby forcing a controller
designation. To guarantee that a particular controller is designated controller A:
1. Uninitialize the storage system. If it is a new storage system that has never been initialized, this step is not necessary.
2. Power off both controllers.
3. Pull out and reseat the cache batteries on the controller you want to designate controller A. This procedure clears the cache.
4. Power up that controller.
5. After controller A passes the self-test, power up the other controller.
Connecting the host to the switches
Tag each end of your fiber optic cable to identify switch names, port numbers, host names, and so on.
Two fiber optic connections are required for each host. Connect the fiber optic cable such that
connections to the two FCAs go to two separate switches (fabrics).
Zoning recommendations
Both fabrics must be in place and operational before you begin any other equipment installation. For information on fabric topologies, fabric design rules, and switch zoning, refer to the HP StorageWorks SAN Design Reference Guide.
For Continuous Access-specific guidelines, refer to the HP StorageWorks Continuous Access EVA Operations Guide.
Zoning is a logical grouping of end-to-end Fibre Channel connections, implemented by switches, to create
a barrier between different environments and allow for finer segmentation of the fabric. Switch ports that
are members of a zone can communicate with each other but are isolated from ports in other zones.
Because the SMA and hosts in a Continuous Access EVA environment can conflict with each other, they
must reside in separate zones.
NOTE: If any HSG80 controllers with DRM reside within the SAN, they must be zoned out of an EVA
fabric.
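The zone isolation described above can be sketched as follows. The zone names and port members are illustrative, not taken from the reference guides:

```python
# Switch ports in the same zone can communicate; ports that share no
# zone are isolated, which is why the SMA and hosts get separate zones.
zones = {
    "sma_zone":  {"sw1_p0", "sw1_p1"},           # SMA plus storage ports
    "host_zone": {"sw1_p4", "sw1_p5", "sw1_p1"}, # host FCAs plus storage
}

def can_communicate(port_a: str, port_b: str) -> bool:
    """Two ports can talk only if some zone contains both of them."""
    return any(port_a in members and port_b in members
               for members in zones.values())

print(can_communicate("sw1_p0", "sw1_p1"))  # True: same zone
print(can_communicate("sw1_p0", "sw1_p4"))  # False: SMA isolated from host
```

Note that the storage port sw1_p1 appears in both zones, so the SMA and the hosts each reach storage while remaining isolated from each other.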
Setting up a bidirectional solution
You can configure data replication groups to replicate data from storage system A to storage system B
and other unrelated data replication groups to replicate data from storage system B back to storage
system A. This feature, called bidirectional replication, enables a storage system to have both source and
destination virtual disks, where these Vdisks belong to separate Data Replication groups. This setup has
no effect on normal operation or failover policy and has the advantage of allowing for the destination
storage system to be actively used while also providing a disaster-tolerant copy of the other site's data. If
the business needs require bidirectional data transfers, you must determine the effect on the links.
Refer to the Continuous Access EVA Design Reference Guide for more information.
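Bidirectional replication can be pictured as directional DR groups: a storage system acts as the source in one group and as the destination in another, unrelated group. The group and system names below are illustrative:

```python
# Each DR group replicates in one direction only; bidirectionality
# comes from defining separate groups in opposite directions.
dr_groups = [
    {"name": "dr_group_1", "source": "EVA_A", "destination": "EVA_B"},
    {"name": "dr_group_2", "source": "EVA_B", "destination": "EVA_A"},
]

def roles(system: str) -> dict:
    """List the DR groups in which a storage system is source or destination."""
    return {
        "source_in": [g["name"] for g in dr_groups if g["source"] == system],
        "destination_in": [g["name"] for g in dr_groups if g["destination"] == system],
    }

print(roles("EVA_A"))
# EVA_A is a source in dr_group_1 and a destination in dr_group_2,
# so it is actively used while also holding the other site's DT copy.
```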
Configuring the software
The storage system must be initialized before it can be used. This process binds the controllers together as
an operational pair and establishes preliminary data structures on the disk array.
Initialization is performed through the use of the Command View EVA. This procedure is documented in
the HP StorageWorks Command View EVA Getting Started Guide. HP StorageWorks Command View
EVA maintains a database that resides on the management appliance and is accessible only by the
element manager for each storage system.
The Command View EVA software can be installed on more than one management appliance in a fabric.
Each installation of the Command View EVA software is a management agent. The client for the agent is
a standard browser.
To begin the configuration process, create, or initialize, the storage system. When you first view the EVA
from the Command View EVA software, the storage pool is presented as "uninitialized storage."
Before the host servers can use the virtual disks, you must:
• Initialize the storage systems at both the source and destination. The storage pool is "uninitialized storage" at the outset.
• Add hosts to the storage system.
• Create and present virtual disks to hosts.
Configure the software using the following procedures:
1. "Preparing the F500 for Continuous Access EVA Software Installation (on page 14)"
2. "Logging On to the SAN Management Appliance (on page 15)"
3. "Entering a License Key (on page 15)"
4. "Initializing the Source Site (on page 15)"
5. "Naming the Site (on page 15)"
6. "Creating the VD Folders (on page 16)"
7. "Creating the VDs (on page 16)"
8. "Creating the Host Folder (on page 17)"
9. "Adding a Host (on page 17)"
10. "Presenting the VDs to the Host (on page 17)"
11. "Discovering the Devices (on page 18)"
12. "Creating the DR Groups (on page 18)"
13. "Creating the Copy Sets (on page 18)"
14. "Creating the Managed Sets (on page 19)"
15. "Pre-presenting the Destination VDs to Cluster Nodes (on page 19)"
Preparing the F500 for continuous access EVA software installation
Task: Install the latest version of system software:
• HP OpenView Storage Management Appliance Software
• HP StorageWorks Virtual Controller Software
Reference: HP OpenView Storage Management Appliance Software User Guide; EVA Read Me First; EVA Release Notes; HP StorageWorks system software for EVA installation card; HP StorageWorks upgrade instructions for EVA

Task: Install HP StorageWorks Secure Path for Windows® for your platform.
Reference: HP StorageWorks Secure Path for Windows® platform-specific installation guide

Task: Install HP StorageWorks Continuous Access User Interface.
Reference: HP StorageWorks Continuous Access User Interface Installation Guide

Task: Enter the storage system WWN into the controller using the Operator Control Panel.
Reference: HP ProLiant Cluster F500 DT Installation Guide

Task: Configure one switch zone encompassing the active SMA and all storage systems.
Reference: HP StorageWorks SAN Design Reference Guide, 4th Edition; Compaq StorageWorks Switch Zoning Reference Guide

Task: Configure one switch zone for each operating system/host and the storage systems.
Reference: HP StorageWorks SAN Design Reference Guide, 4th Edition; Compaq StorageWorks Switch Zoning Reference Guide

Task: Install Command View EVA and set up agents and user options.
Reference: HP StorageWorks Command View EVA Getting Started Guide
Logging on to the SAN management appliance
1. Log on to the management appliance by opening a browser and entering its IP address (or its network name, if a DNS is configured) as the URL. The logon screen opens.
2. Click anonymous.
3. Log in as administrator.
4. Enter the password for the account.
5. Click OK. The hp openview storage management appliance window displays.
Entering a license key
You must enter a license key encoded with the WWN of the storage system before you initialize the
storage system.
Follow the "Obtaining a License Key" instructions in the HP StorageWorks Enterprise Virtual Array Licensing Guide.
Each license key belongs to a specific WWN, so enter a license key that matches the WWN of the
storage system. You can enter the license keys for all storage systems that this management agent will
control at the same time.
NOTE: The WWN must be entered exactly as it appears on the label. This field is case-sensitive. Use an ASCII text editor when handling license keys to preserve their format.
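Two of the checks implied by the note above can be sketched in code. The key format and the WWN-embedding check are purely illustrative; the real key encoding is not documented here:

```python
def license_key_ok(key: str, expected_wwn: str) -> bool:
    """Illustrative sanity check on a pasted license key: it must be
    plain ASCII (no word-processor smart characters) and must have
    been issued for this storage system's WWN. The hypothetical key
    format simply embeds the WWN's hex digits."""
    return key.isascii() and expected_wwn.replace("-", "") in key

# Hypothetical key issued for WWN 5000-1FE1-0000-ABCD.
print(license_key_ok("KEY-50001FE10000ABCD-77AF", "5000-1FE1-0000-ABCD"))  # True
print(license_key_ok("KEY-00000000-77AF", "5000-1FE1-0000-ABCD"))          # False
```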
To enter a license key:
1. Click Agent Options in the Session pane. The Management Agent Options window displays.
2. Click Licensing Options. The Licensing Options window displays.
3. Click Enter new license key. The Add a License window displays.
4. Enter the license key. You must enter the license key exactly as it appears in the e-mail you received from the license key fulfillment website. If possible, copy the license key from the e-mail and paste it into the text field.
5. Click Add license. The license key is added.
6. To enter additional license keys, repeat steps 4 and 5.
Initializing the source site
The procedures to initialize the source site are as follows:
Using the Command View EVA:
• Naming the site
• Creating the VD folder
• Creating the Host folder
• Creating the Disk group folder
Using the Continuous Access GUI:
• Creating the DR Group folder
Naming the site
In the hp openview storage management appliance window:
1. Click Devices. The Devices window displays.
2. Click command view eva. The HSV Storage Network Properties window displays. You can now browse the EVAs in the Uninitialized Storage System in the navigation panel.
3. Determine which site is to be designated Site A and which is to be designated Site B by selecting Hardware>Controller Enclosure. The Initialize an HSV Storage System window displays.
4. Enter any requested license key information. Refer to "Entering a license key (on page 15)."
5. In the Step 1: Enter a Name field, enter the site name.
6. In the Step 2: Enter the number of disks field, enter the maximum number of disks (minimum of eight in a disk group) or the number of disks you will use in the default disk group.
NOTE: You must determine whether you will configure your storage in a single disk group or in multiple disk groups.
CAUTION: Do not use the browser Back button; doing so undoes the previous operation.
7. Select Advanced Options to set up the clock.
IMPORTANT: Set up the clock on both EVAs to pull time from the same source.
8. Click Next. Request a disk failure protection level.
NOTE: HP recommends selecting double for disk failure protection.
9. Click Finish, and then click OK (if the operation was successful).
NOTE: If the operation is not successful, the cause is typically a communication problem. Verify the SAN connection, fix the problem, and begin again at step 1.
Creating the VD folders
1. In the Command View EVA navigation pane, click Virtual Disks. The Create a Folder window displays.
2. In the Step 1: Enter a Name field, enter the folder name (use the cluster name).
3. In the Step 2: Enter comments field, enter any additional information.
4. Select Finish>OK.
Creating the VDs
You are given the opportunity to select a preferred path during the creation of a Vdisk. This means that
host I/O to a Vdisk will go to the controller you designate as preferred, as long as the paths to that
controller are available. There are five possible preferred path settings. However, the Windows®
environment enables only those shown in the bulleted list, as Secure Path is responsible for supporting
failback capability.
NOTE: For path A and B, all members of a DR group must have the same preferred path.
• None (not recommended)
• Path A—Failover only
• Path B—Failover only
1. In the Command View EVA navigation pane, click the new VD folder. The Create a Vdisk Family window displays.
2. In the Vdisk name: field, enter the VD name.
3. In the Size: field, enter the size in gigabytes.
4. In the Preferred path/mode: dropdown menu, make a selection (for load balancing).
NOTE: All members of a DR group must have the same setting.
5. Click Create More, and repeat steps 2 through 4 for each VD you create.
6. Select Finish>OK.
NOTE: The Continuous Access software will create the Site B VDs.
Creating the host folder
Create a host folder for each cluster to enable ease of administration.
1. Click Create Folder. The Create a Folder window displays.
2. In the Step 1: Enter a Name field, enter SiteA or any name up to 32 characters long.
3. In the Step 2: Enter comments field, enter any additional information, up to 64 characters long.
4. Select Finish>OK.
5. Repeat steps 1 through 4 for all remaining clusters.
Adding a host
NOTE: If the SAN appliance cannot see the host WWNs, perform steps 1 and 2. Otherwise, begin at step 3.
1. Reboot the SAN appliance.
2. Access the Command View EVA application.
3. Click the desired host in the navigation pane. The Add a Host window displays.
4. In the Host name: field, enter the host name.
5. In the Host IP address: dropdown menu, select the appropriate scheme, or enter the IP address if it is static.
6. In the Port WW Name: dropdown menu, select a port WWN for the first FCA.
7. Click Add Host, and then click OK. The Add a Host Port window displays.
8. For each remaining FCA:
   a. In the Click to select from list dropdown menu, select the appropriate FCA.
   b. Click Add port.
9. Select the Ports tab (which displays only after a port has been added) and verify that the ports are correctly assigned.
10. Repeat the procedure for Site B.
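The verification in step 9 amounts to checking that every FCA has a port WWN and that no WWN was assigned twice. The host record and WWN values below are hypothetical; this is a sketch of the check, not a management API.

```python
# Hypothetical host record: one port WWN per FCA, added via the
# Add a Host Port window above.
def ports_correctly_assigned(fca_wwns):
    """fca_wwns maps FCA name -> port WWN. Every FCA needs a WWN,
    and no WWN may be assigned to two FCAs."""
    wwns = list(fca_wwns.values())
    return all(wwns) and len(wwns) == len(set(wwns))

host = {"FCA1": "5000-1FE1-0000-0001", "FCA2": "5000-1FE1-0000-0002"}
print(ports_correctly_assigned(host))  # True
```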
Presenting the VDs to the host
CAUTION: Shut down all the nodes. Only one node should see the drives at one time.
1. In the Command View EVA navigation pane, click the first new VD. The Vdisk Active Member Properties window displays.
2. Click Presentation. The Vdisk Active Member Properties window displays.
3. Click Present. The Present Vdisk window displays.
4. Select both hosts, and then click Present Vdisk.
5. Click OK. You are returned to the Vdisk Active Member Properties window.
6. Select the Presentation tab to verify that both hosts are on the same LUN. The Vdisk Active Member Properties window displays.
7. Repeat steps 1 through 6 for each VD.
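The check in step 6 can be stated as: for a given VD, every host must see it at the same LUN. The mapping below is hypothetical and only illustrates the condition.

```python
# "presentations" is a hypothetical mapping of host -> LUN for one VD.
def same_lun_for_all_hosts(presentations):
    """True if every host sees the VD at the same LUN (step 6 above)."""
    return len(set(presentations.values())) == 1

print(same_lun_for_all_hosts({"Node1": 1, "Node2": 1}))  # True
print(same_lun_for_all_hosts({"Node1": 1, "Node2": 2}))  # False
```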
8. Power on Node 1.
9. Log on to the domain.
10. Wait until all the VDs are discovered.
11. Open the operating system Device Manager and verify that the disk drives appear.
12. Go to the operating system Disk Manager and select Initialize; do not upgrade to dynamic disk.
13. Format the disks and label the volumes.
14. Install MSCS on Node 1. Refer to the appropriate documentation.
15. Repeat steps 8 through 14 for Node 2.
16. Join Node 2 to the cluster.
Discovering the devices
You will create the copy sets and DR groups in the same sequence.
1. In the HP OpenView Storage Management Appliance window, click Tools.
2. Click Continuous Access. The Continuous Access Status window displays. The window is empty.
NOTE: You are now working in the Continuous Access user interface, not Command View EVA.
3. Click Refresh>Discover. A pop-up window informs you that the discovery process can be lengthy.
After the system has discovered the devices, you will create the DR groups and copy sets.
NOTE: You must plan how to separate managed sets and copy sets. Refer to the HP StorageWorks
Continuous Access EVA Operations Guide.
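Because the pop-up warns that discovery can take a while, a script driving a management interface would typically poll for completion with a timeout rather than wait a fixed time. The probe function below is a stand-in, not a real Continuous Access call; the timeout values are arbitrary.

```python
import time

# wait_for_discovery polls a caller-supplied probe until it reports
# completion or the timeout expires. "discovery_done" is hypothetical.
def wait_for_discovery(discovery_done, timeout_s=600, poll_s=5, sleep=time.sleep):
    waited = 0
    while waited < timeout_s:
        if discovery_done():
            return True
        sleep(poll_s)
        waited += poll_s
    return False

# Example with a fake probe that succeeds on the third poll:
calls = {"n": 0}
def fake_done():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_for_discovery(fake_done, timeout_s=60, poll_s=1, sleep=lambda s: None))  # True
```

Injecting the `sleep` function keeps the sketch testable without real delays.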
Creating the DR groups
You can create the DR groups first or create the initial copy set, which forces the DR group creation
process.
The following procedure is for creating the DR groups before the copy sets.
1. On the Continuous Access window, select the site from the navigation pane.
2. Click Create>DR Group. The Create a new DR Group window opens.
3. In the DR Group: field, enter the name.
4. In the Destination Storage System: dropdown list, select the destination site.
5. In the Comments: field, enter any comments.
6. Click Next.
7. Select Finish>OK.
8. Repeat the procedure for each DR group.
Creating the copy sets
NOTE: Entering the first copy set will force the DR group creation sequence if no DR Group has yet been
created.
1. On the Continuous Access window, select the site from the navigation pane.
2. Click Create>Copy Set. The Create a new Copy Set window opens.
3. In the DR Group: dropdown list, select the DR group to which the copy set will belong.
4. In the Copy Set: field, enter the copy set name.
5. Select the source VD from the Source Virtual Disk: dropdown list.
6. Enter the DR Group.
7. Select the destination from the Destination Storage System: dropdown list (Site B, if you have followed the suggested naming conventions).
8. Click Finish.
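The note above says that entering the first copy set forces DR group creation if the group does not exist yet. That behavior can be sketched as follows; the classes and names are illustrative, not Continuous Access APIs.

```python
# Sketch of the forcing behavior described in the note above.
class Site:
    def __init__(self):
        self.dr_groups = {}

    def create_copy_set(self, dr_group, copy_set):
        # Force DR group creation on first use, as the note describes.
        members = self.dr_groups.setdefault(dr_group, [])
        members.append(copy_set)

site_a = Site()
site_a.create_copy_set("DRG1", "CS1")   # forces creation of DRG1
site_a.create_copy_set("DRG1", "CS2")   # DRG1 already exists; just adds
print(site_a.dr_groups)  # {'DRG1': ['CS1', 'CS2']}
```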
Creating the managed sets
A managed set is a folder created to hold DR groups. One or more DR groups can be combined to create
a managed set.
1. Choose Create>Managed Sets. The Edit or create a Managed Set window displays.
2. In the Managed Set Name: field, enter the name.
3. Click Finish.
4. Repeat the procedure for each managed set you want to create.
5. In the navigation pane, select the first DR group to be part of a managed set.
6. In the Configuration dropdown menu, select Edit. The Edit an existing DR Group window displays.
7. Select a managed set from the Managed Set list, and then click Finish.
8. Repeat steps 5 through 7 for each DR group you want to add.
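Since a managed set is just a named folder holding one or more DR groups, the assignment in steps 5 through 7 is a grouping operation. This sketch uses hypothetical names and is not a Continuous Access API.

```python
# Build managed-set folders from DR group assignments, mirroring
# the steps above. All names here are hypothetical examples.
def build_managed_sets(assignments):
    """assignments maps DR group -> managed set; returns
    managed set -> sorted list of member DR groups."""
    sets = {}
    for dr_group, managed_set in assignments.items():
        sets.setdefault(managed_set, []).append(dr_group)
    return {name: sorted(groups) for name, groups in sets.items()}

print(build_managed_sets({"DRG1": "MS-SiteA", "DRG2": "MS-SiteA"}))
# {'MS-SiteA': ['DRG1', 'DRG2']}
```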
Pre-presenting the destination VDs to cluster nodes
1. In the HP OpenView Storage Management Appliance window, select Devices. The Devices window displays.
2. Click Command View EVA. The Command View EVA Properties window opens.
3. In the navigation pane, select the destination subsystem, and then click Virtual Disks.
4. Select the virtual disk to present on the destination subsystem.
5. Select Active. The Vdisk Active Member Properties window displays.
6. Select the Presentation tab, and then click Present. The Present Vdisk window opens.
7. Select the VDs, and then click Present Vdisk.
8. Click OK.
9. Repeat for each VD to present.
10. Verify the disks are properly presented:
   a. In the navigation pane, select the host to verify.
   b. Select the Presentation tab. The Host Properties window displays.
   c. Verify that each VD is presented to a unique LUN.
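The verification in step 10c means no two VDs on a host may share a LUN. This sketch flags any conflicts; the mapping of VD name to LUN is hypothetical.

```python
# "host_luns" is a hypothetical mapping of VD name -> LUN on one host.
def find_lun_conflicts(host_luns):
    """Return LUNs assigned to more than one VD (empty dict if OK)."""
    seen = {}
    for vd, lun in host_luns.items():
        seen.setdefault(lun, []).append(vd)
    return {lun: vds for lun, vds in seen.items() if len(vds) > 1}

print(find_lun_conflicts({"VD1": 1, "VD2": 2}))  # {} -> presentation OK
print(find_lun_conflicts({"VD1": 1, "VD2": 1}))  # {1: ['VD1', 'VD2']}
```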
The configuration is complete.