Oracle Solaris Cluster 3.3 with StorageTek RAID Arrays User Manual

Oracle® Solaris Cluster 3.3 with StorageTek RAID Arrays Manual
Part No: 821–1558–10 September 2010, Revision A
Copyright © 2000, 2010, Oracle and/or its affiliates. All rights reserved.
This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.
If this is software or related software documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:
U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are “commercial computer software” or “commercial technical data” pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.
This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications which may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. UNIX is a registered trademark licensed through X/Open Company, Ltd.
This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.
Contents
Preface .....................................................................................................................................................5
1 Restrictions and Requirements .........................................................................................................11
Requirements ....................................................................................................................................... 11
Restrictions ........................................................................................................................................... 11
2 Installing and Configuring a StorageTek Array .............................................................................. 13
Installing Storage Arrays .................................................................................................................... 13
Storage Array Cabling Configurations ...................................................................................... 13
How to Install Storage Arrays in a New Cluster ....................................................................... 16
How to Add Storage Arrays to an Existing Cluster .................................................................. 17
Conguring Storage Arrays ............................................................................................................... 18
How to Create a Logical Volume ................................................................................................ 19
How to Remove a Logical Volume ............................................................................................. 21
3 Maintaining a StorageTek Array .......................................................................................................25
FRUs That Do Not Require Oracle Solaris Cluster Maintenance Procedures .............................. 25
Maintaining Storage Arrays ............................................................................................................... 26
How to Upgrade Storage Array Firmware ................................................................................ 26
How to Remove a Storage Array ................................................................................................ 27
Index ......................................................................................................................................................29
Preface
The Oracle Solaris Cluster 3.3 with StorageTek RAID Arrays Manual provides procedures that are specific to StorageTek RAID arrays that are placed in an Oracle Solaris Cluster environment.
Use this manual with any version of Oracle Solaris Cluster 3.3 software on SPARC based clusters or x86 based clusters. See “Revision History” on page 6 for a list of changes to this manual.
Note – This Oracle Solaris Cluster release supports systems that use the SPARC and x86 families of processor architectures: UltraSPARC, SPARC64, AMD64, and Intel 64. In this document, x86 refers to the larger family of 64-bit x86 compatible products. Information in this document pertains to all platforms unless otherwise specified.
This book assumes that you are performing one or more of the following tasks:
- You want to replace an array component to prevent a failure.
- You want to replace an array component because you have an existing failure.
- You want to add (to an established cluster) or install (to a new cluster) a storage array.
Who Should Use This Book
This book is for Oracle representatives who are performing the initial installation of an Oracle Solaris Cluster configuration and for system administrators who are responsible for maintaining the system.
This document is intended for experienced system administrators with extensive knowledge of Oracle software and hardware. Do not use this document as a planning or presales guide. You should have already determined your system requirements and purchased the appropriate equipment and software before reading this document.
How This Book Is Organized
This book contains the following chapters:
- Chapter 1, “Restrictions and Requirements,” lists limitations on your use of StorageTek storage arrays in an Oracle Solaris Cluster environment.
- Chapter 2, “Installing and Configuring a StorageTek Array,” discusses how to install StorageTek storage arrays and how to configure logical units on them.
- Chapter 3, “Maintaining a StorageTek Array,” describes how to maintain StorageTek storage arrays in a running cluster.
Revision History
The following table lists the information that has been revised or added since the initial release of this documentation. The table also lists the revision date for these changes.
TABLE P–1 Oracle Solaris Cluster 3.3 with StorageTek RAID Arrays Manual

Revision Date    New Information
May 2008         General edits to make the guide generic to all types of RAID arrays.
March 2008       Replaced outdated information about SunSolve with information about Sun Connection Update Manager.
September 2010   Updated book with new product name and removed old CLI commands.
Related Documentation
The following books provide conceptual information or procedures to administer hardware and applications. If you plan to use this documentation in a hardcopy format, ensure that you have these books available for your reference.
The following books support the Oracle Solaris Cluster 3.3 release. You can also access the documentation for the Sun Cluster 3.1 and 3.2 releases. All Sun Cluster and Oracle Solaris Cluster documentation is available at http://docs.sun.com. Documentation that is not available at http://docs.sun.com is listed with the appropriate URL.
Refer to your storage array's documentation on docs.sun.com for detailed information on each storage array. For the StorageTek array, refer to the online product documentation.
TABLE P–2 Oracle Solaris Cluster and Sun Cluster Documentation

Documentation
- Oracle Solaris Cluster 3.3
- Sun Cluster 3.2
- Sun Cluster 3.1
Using UNIX Commands
This document contains information about commands that are used to install, configure, or upgrade an Oracle Solaris Cluster configuration. This document might not contain complete information about basic UNIX commands and procedures such as shutting down the system, booting the system, and configuring devices.
See one or more of the following sources for this information:
- Online documentation for the Oracle Solaris Operating System (Oracle Solaris OS)
- Other software documentation that you received with your system
- Oracle Solaris Operating System man pages
Getting Help
If you have problems installing or using Oracle Solaris Cluster, contact your service provider and provide the following information:
- Your name and email address (if available)
- Your company name, address, and phone number
- The model number and serial number of your systems
- The release number of the operating environment (for example, Oracle Solaris 10)
- The release number of Oracle Solaris Cluster (for example, Oracle Solaris Cluster 3.3)
Use the following commands to gather information about your system for your service provider.
Command                              Function
prtconf -v                           Displays the size of the system memory and reports information about peripheral devices
psrinfo -v                           Displays information about processors
showrev -p                           Reports which patches are installed
prtdiag -v                           Displays system diagnostic information
/usr/cluster/bin/clnode show-rev     Displays Oracle Solaris Cluster release and package version information for each node

Also have available the contents of the /var/adm/messages file.
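As an illustrative sketch only, the commands above can be run in sequence and their output saved in one place before you contact your service provider. The directory name below is arbitrary, and showrev -p applies to Solaris releases that deliver updates as patches.

# mkdir /var/tmp/clusterinfo
# prtconf -v > /var/tmp/clusterinfo/prtconf.out
# psrinfo -v > /var/tmp/clusterinfo/psrinfo.out
# showrev -p > /var/tmp/clusterinfo/showrev.out
# prtdiag -v > /var/tmp/clusterinfo/prtdiag.out
# /usr/cluster/bin/clnode show-rev -v > /var/tmp/clusterinfo/clnode.out
# cp /var/adm/messages /var/tmp/clusterinfo/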
Documentation, Support, and Training
See the following web sites for additional resources:
- Documentation (http://docs.sun.com)
- Support (http://www.oracle.com/us/support/systems/index.html)
- Training (http://education.oracle.com) – Click the Sun link in the left navigation bar.
Oracle Welcomes Your Comments
Oracle welcomes your comments and suggestions on the quality and usefulness of its documentation. If you find any errors or have any other suggestions for improvement, go to http://docs.sun.com and click Feedback. Indicate the title and part number of the documentation along with the chapter, section, and page number, if available. Please let us know if you want a reply.
Oracle Technology Network (http://www.oracle.com/technetwork/index.html) offers a range of resources related to Oracle software:
- Discuss technical problems and solutions on the Discussion Forums (http://forums.oracle.com).
- Get hands-on step-by-step tutorials with Oracle By Example (http://www.oracle.com/technology/obe/start/index.html).
- Download Sample Code (http://www.oracle.com/technology/sample_code/index.html).
Typographic Conventions
The following table describes the typographic conventions that are used in this book.
TABLE P–3 Typographic Conventions
Typeface    Meaning                                                   Example
AaBbCc123   The names of commands, files, and directories,            Edit your .login file.
            and onscreen computer output                              Use ls -a to list all files.
                                                                      machine_name% you have mail.
AaBbCc123   What you type, contrasted with onscreen                   machine_name% su
            computer output                                           Password:
aabbcc123   Placeholder: replace with a real name or value            The command to remove a file is rm filename.
AaBbCc123   Book titles, new terms, and terms to be                   Read Chapter 6 in the User's Guide.
            emphasized                                                A cache is a copy that is stored locally.
                                                                      Do not save the file.
Note: Some emphasized items appear bold online.
Shell Prompts in Command Examples
The following table shows the default UNIX system prompt and superuser prompt for shells that are included in the Oracle Solaris OS. Note that the default system prompt that is displayed in command examples varies, depending on the Oracle Solaris release.
TABLE P–4 Shell Prompts
Shell Prompt
Bash shell, Korn shell, and Bourne shell $
Bash shell, Korn shell, and Bourne shell for superuser #
C shell machine_name%
C shell for superuser machine_name#
Restrictions and Requirements
This chapter includes only restrictions and requirements that have a direct impact on the procedures in this book. For general support information, contact your Oracle service provider.
Requirements
If you are replacing a Host Bus Adapter (HBA), you must re-mask the HBA's World Wide Name (WWN) numbers to the respective Logical Unit Numbers (LUNs) on the array.
Restrictions
When using arrays that support storage-based replication (for example, Oracle's StorageTek 9900 array), do not configure a replicated volume as a quorum device. Locate any quorum devices on an unreplicated volume. See “Using Storage-Based Data Replication Within a Cluster” in Oracle Solaris Cluster System Administration Guide for more information on storage-based replication.
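To check an existing configuration against this restriction, you can list the cluster's configured quorum devices and then confirm with your array management software that the underlying volumes are not replicated. The commands below are standard Oracle Solaris Cluster CLI; the device ID d4 is a hypothetical example of an ID that clquorum show might report.

# clquorum show
# cldevice show d4

Determining whether the volume behind a given device ID is replicated is an array-side check; see your storage array documentation.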
Installing and Conguring a StorageTek Array
This chapter contains the procedures about how to install and configure Oracle's StorageTek RAID arrays. These procedures are specific to an Oracle Solaris Cluster environment.
This chapter contains the following main topics:
- “Installing Storage Arrays” on page 13
- “Configuring Storage Arrays” on page 18
For detailed information about storage array architecture, features, configuration utilities, and installation, see “Related Documentation” on page 6.
Installing Storage Arrays
This section contains the procedures listed in Table 2–1.

TABLE 2–1 Task Map: Installing Storage Arrays

Task                                                                    Information
Install a storage array in a new cluster, before the OS and Oracle      “How to Install Storage Arrays in a New Cluster” on page 16
Solaris Cluster software are installed.
Add a storage array to an existing cluster.                             “How to Add Storage Arrays to an Existing Cluster” on page 17
Storage Array Cabling Configurations

You can install your storage array in several different configurations; see Figure 2–1 through Figure 2–4 for examples.
Oracle's StorageTek 6140 array houses two controllers; each controller has four host ports. The cabling approach is the same as shown in Figure 2–1, but it can support up to four nodes in a direct-attach configuration.

Figure 2–2 shows a switched storage array configuration for two nodes.
FIGURE 2–1 StorageTek Array Direct-Connect Configuration (Node 1 and Node 2 each connect directly to the controller module)

FIGURE 2–2 StorageTek Array Switched Configuration (Node 1 and Node 2 connect through switches to the controller module and expansion tray)
You can connect one or more hosts to a storage array. Figure 2–3 shows an example of a direct host connection from each data host with dual HBAs.
Note – For maximum hardware redundancy, you should install a minimum of two HBAs in each host and distribute I/O paths between these HBAs. A single, dual-port HBA can provide both data paths to the storage array but does not ensure redundancy if the HBA fails.
Figure 2–4 shows that three hosts can be connected directly or through a switch.
FIGURE 2–3 Direct Connections from Three Data Hosts with Dual HBAs

FIGURE 2–4 Mixed Topology – Three Hosts Connected Through a Switch or Connected Directly
How to Install Storage Arrays in a New Cluster

Use this procedure to install a storage array in a new cluster. To add a storage array to an existing cluster, use the procedure in “How to Add Storage Arrays to an Existing Cluster” on page 17.

This procedure relies on the following assumptions:
- You have not installed the Oracle Solaris Operating System.
- You have not installed the Oracle Solaris Cluster software.
- You have enough host adapters to connect the nodes and the storage array.
Install and Cable the Hardware

1. Unpack, place, and level the storage array.
   For instructions, see the StorageTek online documentation.
2. If necessary, install the Fibre Channel (FC) switch for the storage array.
   For the procedure about how to install an FC switch, see the documentation that shipped with your FC switch hardware.
3. Connect the nodes to the storage array, using one of the following configurations:
   - SAN configuration – Connect the FC switches to the storage array.
   - Direct-attached configuration – Connect each node directly to the storage array.
   - SAS direct-attached configuration
   - iSCSI direct-attached configuration
   - iSCSI switched configuration
   For instructions, see your storage array documentation and the “Related Documentation” on page 6 section.
4. Hook up the cards for the storage array.
   For instructions, see your storage array documentation.
5. Power on the storage array and the nodes.
   For instructions, see your storage array documentation.
Congure the storagearray,if needed.
For instructions, see “Conguring Storage Arrays” on page 18 and consult your storage array documentation.
Install the Oracle Solaris OS
On all nodes, install the Oracle Solarisoperating system and any required patches for Oracle Solaris Cluster software and storage array support.
For the procedure about how to install the Oracle Solaris operating environment, see
“How to
Install Solaris Software” in Oracle Solaris Cluster Software Installation Guide
.
Oracle Solaris 10 automatically installs Solaris I/O multipathing. Verify that the paths to the storage device are functioning.
To create a logical volume, see “How to Create a Logical Volume” on page 19.
To continue with Oracle Solaris Cluster software installation tasks, see your Oracle Solaris Cluster software installation documentation.
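One way to verify that Solaris I/O multipathing sees the storage device on each node of an Oracle Solaris 10 system is sketched below. This is an illustrative check, not a required step, and the exact output varies with your HBA and array model.

# mpathadm list lu
# luxadm probe

mpathadm list lu shows each multipathed logical unit together with its operational path count; luxadm probe lists the attached Fibre Channel devices.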
How to Add Storage Arrays to an Existing Cluster

Use this procedure to add a new storage array to a running cluster. To install a new storage array in an Oracle Solaris Cluster configuration that is not running (the nodes are in noncluster mode), use the procedure in “How to Install Storage Arrays in a New Cluster” on page 16.

Before You Begin – This procedure relies on the following assumptions:
- (Veritas Volume Manager only) You have a version of Veritas Volume Manager that includes the Array Support Library (ASL).
- You have enough host adapters to connect the nodes and the storage array. If you need to install host adapters, see “How to Replace a Host Adapter” in Oracle Solaris Cluster 3.3 With Sun StorEdge A3500FC System Manual. When this procedure asks you to replace the failed host adapter, install the new host adapter instead.
- All cluster nodes have joined the cluster. If you need to add a node to your cluster, see your Oracle Solaris Cluster system administration documentation.

Ensure that you install the required Solaris patches for storage array support.

1. Unpack, place, and level the storage array.
   For instructions, see the StorageTek online documentation.
2. If necessary, install the Fibre Channel (FC) switch for the storage array.
   For the procedure about how to install an FC switch, see the documentation that shipped with your FC switch hardware.
3. Connect the nodes to the storage array, using one of the following configurations:
   - SAN configuration – Connect the FC switches to the storage array.
   - Direct-attached configuration – Connect each node directly to the storage array.
   - SAS direct-attached configuration
   - iSCSI direct-attached configuration
   - iSCSI switched configuration
   For instructions, see your storage array documentation and the “Related Documentation” on page 6 section.
4. Hook up the cards for the storage array.
   For instructions, see your storage array documentation.
5. Power on the storage array and the nodes.
   For instructions, see your storage array documentation.
6. Configure the storage array, if needed.
   For instructions, see “Configuring Storage Arrays” on page 18 and consult your storage array documentation.

See Also – To create a logical volume, see “How to Create a Logical Volume” on page 19.
Conguring Storage Arrays
This section contains the procedures to configure a storage array in a running cluster. Table 2–2 lists these procedures.
TABLE 2–2 Task Map: Conguringa Storage Array
Task Information
Create a logical volume “How to Create a Logical Volume” on
page 19
Remove a logical volume   “How to Remove a Logical Volume” on page 21
The following is a list of administrative tasks that do not require cluster-specific procedures. See the storage array's documentation, listed in “Related Documentation” on page 6, for the following procedures:
- Creating a storage pool
- Removing a storage pool
- Creating a volume group
- Removing a volume group
- Creating an initiator group
- Adding an initiator group
- Removing an initiator group
How to Create a Logical Volume
Use this procedure to create a logical volume from unassigned storage capacity.
Note – Oracle's Sun storage documentation uses the following terms:
- Logical volume
- Logical device
- Logical unit number (LUN)
This manual uses logical volume to refer to all such logical constructs.
Before You Begin – This procedure relies on the following prerequisites and assumptions:
- All nodes are booted in cluster mode and attached to the storage device.
- The storage device is installed and configured. If you are using multipathing, the storage device is configured as described in the installation procedure.
- If you are using Solaris I/O multipathing (MPxIO) for the Oracle Solaris 10 OS, previously called Sun StorEdge Traffic Manager in the Solaris 9 OS, verify that the paths to the storage device are functioning. To configure multipathing, see the Solaris Fibre Channel Storage Configuration and Multipathing Support Guide.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
1. Become superuser or assume a role that provides solaris.cluster.modify role-based access control (RBAC) authorization.
2. Follow the instructions in your storage device's documentation to create and map the logical volume. For a URL to this storage documentation, see “Related Documentation” on page 6.
   - Completely set up the logical volume. When you are finished, the volume must be created, mapped, mounted, and initialized.
   - If necessary, partition the volume.
   - To allow multiple clusters and nonclustered nodes to access the storage device, create initiator groups by using LUN masking.
3. If you are not using multipathing, skip to Step 5.
4. If you are using multipathing, and if any devices that are associated with the volume you created are at an unconfigured state, configure the multipathing paths on each node that is connected to the storage device.
   To determine whether any devices that are associated with the volume you created are at an unconfigured state, use the following command.
   # cfgadm -al | grep disk
   Note – To configure the Oracle Solaris I/O multipathing paths on each node that is connected to the storage device, use the following command.
   # cfgadm -o force_update -c configure controllerinstance
   To configure multipathing, see the Solaris Fibre Channel Storage Configuration and Multipathing Support Guide.
5. On one node that is connected to the storage device, use the format command to label the new logical volume.
6. From any node in the cluster, update the global device namespace.
   # cldevice populate
   Note – You might have a volume management daemon such as vold running on your node, and have a DVD drive connected to the node. Under these conditions, a device busy error might be returned even if no disk is inserted in the drive. This error is expected behavior. You can safely ignore this error message.
7. To manage this volume with volume management software, use Solaris Volume Manager or Veritas Volume Manager commands to update the list of devices on all nodes that are attached to the new volume that you created.
   For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

See Also – To configure a logical volume as a quorum device, see Chapter 6, “Administering Quorum,” in Oracle Solaris Cluster System Administration Guide. To create a new resource or configure a running resource to use the new logical volume, see Chapter 2, “Administering Data Service Resources,” in Oracle Solaris Cluster Data Services Planning and Administration Guide.
How to Remove a Logical Volume
Use this procedure to remove a logical volume. This procedure defines Node A as the node with which you begin working.
Note – Sun storage documentation uses the following terms:
- Logical volume
- Logical device
- Logical unit number (LUN)
This manual uses logical volume to refer to all such logical constructs.
Before You Begin – This procedure relies on the following prerequisites and assumptions:
- All nodes are booted in cluster mode and attached to the storage device.
- The logical volume and the path between the nodes and the storage device are both operational.
This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical.
1. Become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.
2. Identify the logical volume that you are removing.
   Refer to your Solaris Volume Manager or Veritas Volume Manager documentation for more information.
3. (Optional) Migrate all data off the logical volume that you are removing. Alternatively, back up that data.
4. If the LUN that you are removing is configured as a quorum device, choose and configure another device as the quorum device. Then remove the old quorum device.
   To determine whether the LUN is configured as a quorum device, use the following command.
   # clquorum show
   For procedures about how to add and remove quorum devices, see Chapter 6, “Administering Quorum,” in Oracle Solaris Cluster System Administration Guide.
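As an example of replacing a quorum device before its LUN is removed, the following sketch assumes d1 is the quorum device on the LUN being removed and d2 is another available shared device; both device IDs are hypothetical, so substitute the IDs that clquorum show reports in your cluster.

# clquorum add d2
# clquorum remove d1
# clquorum status

Adding the replacement before removing the old device keeps the cluster's quorum vote count intact throughout the change.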
5. If you are using volume management software, use that software to update the list of devices on all nodes that are attached to the logical volume that you are removing.
   For instructions about how to update the list of devices, see your Solaris Volume Manager or Veritas Volume Manager documentation.
6. If you are using volume management software, run the appropriate Solaris Volume Manager or Veritas Volume Manager commands to remove the logical volume from any diskset or disk group.
   For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.
   Note – Volumes that were managed by Veritas Volume Manager must be completely removed from Veritas Volume Manager control before you can delete them from the Oracle Solaris Cluster environment. After you delete the volume from any disk group, use the following commands on both nodes to remove the volume from Veritas Volume Manager control.
   # vxdisk offline Accessname
   # vxdisk rm Accessname
   Accessname    Disk access name
7. If you are using multipathing, unconfigure the volume in Solaris I/O multipathing.
   # cfgadm -o force_update -c unconfigure Logical_Volume
8. Access the storage device and remove the logical volume.
   To remove the volume, see your storage documentation. For a list of storage documentation, see “Related Documentation” on page 6.
9. Determine the resource groups and device groups that are running on all nodes.
   Record this information because you use it in Step 14 and Step 15 of this procedure to return resource groups and device groups to these nodes.
Use the following commands:
# clresourcegroup status +
# cldevicegroup status +
10. Move all resource groups and device groups off Node A.
    # clnode evacuate nodename
11. Shut down and reboot Node A.
    To shut down and boot a node, see Chapter 3, “Shutting Down and Booting a Cluster,” in Oracle Solaris Cluster System Administration Guide.
12. On Node A, remove the paths to the logical volume that you removed. Remove obsolete device IDs.
    # devfsadm -C
    # cldevice clear
13. For each additional node that is connected to the shared storage that hosted the logical volume, repeat Step 9 to Step 12.
14. (Optional) Restore the device groups to the original node.
    Do the following for each device group that you want to return to the original node.
    # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 ...]
    -n nodename                         The node to which you are restoring device groups.
    devicegroup1[ devicegroup2 ...]     The device group or groups that you are restoring to the node.
15. (Optional) Restore the resource groups to the original node.
    Do the following for each resource group that you want to return to the original node.
    # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 ...]
    nodename                            For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.
    resourcegroup1[ resourcegroup2 ...] The resource group or groups that you are returning to the node or nodes.
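For example, with a hypothetical node phys-schost-1, device group dg-schost-1, and resource group rg-schost-1 (substitute the names you recorded in Step 9), the two restore commands look like the following.

# cldevicegroup switch -n phys-schost-1 dg-schost-1
# clresourcegroup switch -n phys-schost-1 rg-schost-1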
Maintaining a StorageTek Array
This chapter contains the procedures about how to maintain a StorageTek array. These procedures are specific to an Oracle Solaris Cluster environment.
This chapter contains the following procedures:
- “How to Upgrade Storage Array Firmware” on page 26
- “How to Remove a Storage Array” on page 27

For detailed information about storage array architecture, features, and configuration utilities, see the StorageTek documentation listed in “Related Documentation” on page 6.
FRUs That Do Not Require Oracle Solaris Cluster Maintenance Procedures
In general, the following is a list of administrative tasks that require no cluster-specific procedures. See the base-product documentation for these procedures. Refer to your documentation for components not listed below.

Depending on your configuration type and the state of your cluster, a few of the following FRUs might require cluster-specific steps. Some FRUs include the DSP and the storage array.
- Adding a disk drive
- Replacing a storage array's chassis
- Replacing an Ethernet cable
- Replacing a power supply
- Replacing the power cable on the storage array
- Replacing a power and cooling unit (PCU)
- Replacing a controller
Maintaining Storage Arrays
This section contains procedures for maintaining a storage system in a running cluster.
Table 3–1 lists these procedures.
TABLE 3–1 Task Map: Maintaining a Storage System

Task                                    Information
Remove a storage array                  “How to Remove a Storage Array” on page 27
Upgrade storage array firmware          “How to Upgrade Storage Array Firmware” on page 26
Replace a node-to-switch component      “How to Replace a Node-to-Switch Component in a Cluster Without Multipathing” in Oracle Solaris Cluster 3.3 With Sun StorEdge 9900 Series Storage Device Manual
Replace a node's host adapter           “How to Replace a Host Adapter” in Oracle Solaris Cluster 3.3 With Sun StorEdge A3500FC System Manual
Replace a disk drive                    “How to Replace a Failed Disk Drive in a Running Cluster” in Oracle Solaris Cluster 3.3 With Sun StorEdge A3500FC System Manual
Add a node to the storage array         Oracle Solaris Cluster system administration documentation
Remove a node from the storage array    Oracle Solaris Cluster system administration documentation
Note – Most storage arrays are maintained through common array management software. For example, see Sun StorageTek Common Array Manager Software.
How to Upgrade Storage Array Firmware
Use this procedure to upgrade storage array firmware in a running cluster. Storage array firmware includes controller firmware, unit interconnect card (UIC) firmware, EPROM firmware, and disk drive firmware.
Note – When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check command, the following error message appears on your console if the device ID changed unexpectedly.

device id for nodename:/dev/rdsk/cXtYdZsN does not match physical device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair command for each affected device.
1. Stop all I/O to the storage arrays you are upgrading.

2. Apply the controller, disk drive, and loop-card firmware patches by using the arrays' GUI tools.
   For specific instructions, see your storage array's documentation.

3. Confirm that all storage arrays that you upgraded are visible to all nodes.

   # luxadm probe

4. Restart all I/O to the storage arrays.
   You stopped I/O to these storage arrays in Step 1.
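Step 3's visibility check can be sketched as a small loop that looks for each expected array in the probe output. This is a sketch under stated assumptions: the array names and the probe output are simulated here, so the sketch runs anywhere; on a cluster node you would substitute the real output of `luxadm probe`.

```shell
# Arrays you expect to see after the upgrade (hypothetical names).
expected="array1 array2"

# Simulated `luxadm probe` output; replace with the real command's output.
probe_output="Found Enclosure: array1
Found Enclosure: array2"

# Record whether each expected array appears in the probe output.
status=""
for a in $expected; do
  case "$probe_output" in
    *"$a"*) status="$status $a:visible" ;;
    *)      status="$status $a:missing" ;;
  esac
done
echo "$status"
```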
How to Remove a Storage Array
Use this procedure to permanently remove a storage array from a running cluster.
Caution – During this procedure, you lose access to the data that resides on the storage array that you are removing. Back up the data before you proceed.

If you are using Oracle Solaris Cluster Geographic Edition, you might need to back up all database tables, data services, and volumes that are associated with each partner group that is affected.

Remove references to the volumes that reside on the storage array that you are removing. For instructions, see “How to Remove a Volume From a Device Group (Veritas Volume Manager)” in Oracle Solaris Cluster System Administration Guide. For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

If you have a Fibre Channel array, disconnect the cables that connected Node N to the FC switches in your storage array.
On all nodes, remove the obsolete Oracle Solaris links and device IDs.

# devfsadm -C
# cldevice clear

Repeat Step 3 and Step 4 for each node that is connected to the storage array.
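The per-node cleanup above can be sketched as a loop over the attached nodes. The node names below are hypothetical, and the ssh commands are collected and printed instead of executed, so the sketch is safe to run outside a cluster; on a real cluster you would run the printed commands (or drop the collection and run them directly).

```shell
# Nodes that were attached to the removed storage array (hypothetical).
NODES="phys-schost-1 phys-schost-2"

# Collect the cleanup commands to run on each node.
CLEANUP=""
for node in $NODES; do
  CLEANUP="$CLEANUP ssh $node 'devfsadm -C; cldevice clear';"
done
echo "$CLEANUP"
```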
Index

A
adding
  See also installing
  disk drives, 25
  initiator groups, 19
  storage arrays, 17–18
arrays, See storage arrays

C
cables, replacing, 25
chassis, replacing, 25
controllers, replacing, 25
cooling units, replacing, 25
creating
  initiator groups, 19
  logical volumes, 19–21
  storage pools, 19
  volume groups, 19

D
deleting, logical volumes, 21–23
disk drives, adding, 25

E
Ethernet cables, replacing, 25

F
firmware, upgrading storage array firmware, 26–27
FRUs, 25

H
help, 7–8

I
initiator groups, 19
installing
  See also adding
  storage arrays, 16–17

L
logical devices, See logical volumes
logical unit numbers, See logical volumes
logical volumes
  creating, 19–21
  removing, 21–23
LUN masking, 11
LUNs, See logical volumes

M
midplane, replacing, 25
modifying, initiator groups, 19

N
nodes, adding and removing, 26

P
pools, 19
power cables, replacing, 25
power supplies, replacing, 25
power units, replacing, 25

R
removing
  initiator groups, 19
  logical volumes, 21–23
  nodes, 26
  storage arrays, 27–28
  storage pools, 19
  volume groups, 19
replacing
  chassis, 25
  controllers, 25
  cooling units, 25
  Ethernet cables, 25
  midplane, 25
  power cables, 25
  power supplies, 25
  power units, 25

S
storage array firmware, upgrading, 26–27
storage arrays
  adding, 17–18
  installing, 16–17
  removing, 27–28
storage-based replication, 11
storage pools, 19
systems, See storage arrays

T
technical support, 7–8

U
upgrading, storage array firmware, 26–27

V
volume groups, 19