Dell Oracle Database 10g Enterprise Edition - Linux Owner's Manual

Dell™ PowerEdge™ Systems
Oracle Database 10g Extended
Memory 64 Technology (EM64T)
Enterprise Edition
Linux Deployment Guide
Version 2.1.1
www.dell.com | support.dell.com
Notes and Notices
NOTE: A NOTE indicates important information that helps you make better use of your computer.
NOTICE: A NOTICE indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
Information in this document is subject to change without notice. © 2006 Dell Inc. All rights reserved.
Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden. Trademarks used in this text: Dell, the DELL logo, and PowerEdge are trademarks of Dell Inc.; EMC, PowerPath, and Navisphere are registered
trademarks of EMC Corporation; Intel and Xeon are registered trademarks of Intel Corporation; Red Hat is a registered trademark of Red Hat, Inc. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products.
Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.
September 2006 Rev. A01

Contents

Oracle RAC 10g Deployment Service
Software and Hardware Requirements
    License Agreements
    Important Documentation
    Before You Begin
Installing and Configuring Red Hat Enterprise Linux
    Installing Red Hat Enterprise Linux Using the Deployment CDs
    Configuring Red Hat Enterprise Linux
    Updating Your System Packages Using Red Hat Network
Verifying Cluster Hardware and Software Configurations
    Fibre Channel Cluster Setup
    Cabling Your Storage System
Configuring Storage and Networking for Oracle RAC 10g
    Configuring the Public and Private Networks
    Verifying the Storage Configuration
    Disable SELinux
    Configuring Shared Storage for Oracle Clusterware and the Database Using OCFS2
    Configuring Shared Storage for Oracle Clusterware and the Database Using ASM
Installing Oracle RAC 10g
    Before You Begin
    Installing Oracle Clusterware
    Installing the Oracle Database 10g Software
    RAC Post Deployment Fixes and Patches
    Configuring the Listener
    Creating the Seed Database Using OCFS2
    Creating the Seed Database Using ASM
Securing Your System
    Setting the Password for the User oracle
Configuring and Deploying Oracle Database 10g (Single Node)
    Configuring the Public Network
    Configuring Database Storage
    Configuring Database Storage Using the Oracle ASM Library Driver
    Installing Oracle Database 10g
    Installing the Oracle Database 10g 10.2.0.2 Patchset
    Configuring the Listener
    Creating the Seed Database
Adding and Removing Nodes
    Adding a New Node to the Network Layer
    Configuring Shared Storage on the New Node
    Adding a New Node to the Oracle Clusterware Layer
    Adding a New Node to the Database Layer
    Reconfiguring the Listener
    Adding a New Node to the Database Instance Layer
    Removing a Node From the Cluster
Reinstalling the Software
Additional Information
    Supported Software Versions
    Determining the Private Network Interface
Troubleshooting
Getting Help
    Dell Support
    Oracle Support
    Obtaining and Using Open Source Files
Index
This document provides information about installing, configuring, reinstalling, and using Oracle Database 10g Enterprise Edition with the Oracle Real Application Clusters (RAC) software on your Dell|Oracle supported configuration. Use this document in conjunction with the Dell Deployment, Red Hat Enterprise Linux, and Oracle RAC 10g software CDs to install your software.
NOTE: If you install your operating system using only the operating system CDs, the steps in this document may not
be applicable.
This document covers the following topics:
•   Software and hardware requirements
•   Installing and configuring Red Hat® Enterprise Linux
•   Verifying cluster hardware and software configurations
•   Configuring storage and networking for Oracle RAC
•   Installing Oracle RAC
•   Configuring and installing Oracle Database 10g (single node)
•   Adding and removing nodes
•   Reinstalling the software
•   Additional information
•   Troubleshooting
•   Getting help
•   Obtaining and using open source files
For more information on Dell supported configurations for Oracle, see the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g.

Oracle RAC 10g Deployment Service

If you purchased the Oracle RAC 10g Deployment Service, your Dell Professional Services representative will assist you with the following:
•   Verifying cluster hardware and software configurations
•   Configuring storage and networking
•   Installing Oracle RAC 10g Release 2

Software and Hardware Requirements

Before you install the Oracle RAC software on your system:
•   Download the Red Hat CD images from the Red Hat website at rhn.redhat.com.
•   Locate your Oracle CD kit.
•   Download the Dell Deployment CD images that are appropriate for the solution being installed from the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g, and burn all the downloaded CD images to CDs.
Table 1-1 lists basic software requirements for Dell supported configurations for Oracle. Table 1-2 through Table 1-3 list the hardware requirements. For more information on the minimum software versions for drivers and applications, see "Supported Software Versions."

Table 1-1. Software Requirements

Software Component                              Configuration
Red Hat Enterprise Linux AS EM64T (Version 4)   Update 3
Oracle Database 10g                             Version 10.2
                                                • Enterprise Edition, including the RAC option for clusters
                                                • Enterprise Edition for single-node configuration
EMC® PowerPath®                                 Version 4.5.1

NOTE: Depending on the number of users, the applications you use, your batch processes, and other factors, you may need a system that exceeds the minimum hardware requirements in order to achieve desired performance.
NOTE: The hardware configuration of all the nodes must be identical.
Table 1-2. Minimum Hardware Requirements—Fibre Channel Cluster

Hardware Component                                  Configuration
Dell™ PowerEdge™ system (two to eight nodes         Intel® Xeon® processor family
using Automatic Storage Management [ASM] or         1 GB of RAM
Oracle Cluster File System Version 2 [OCFS2])       PowerEdge Expandable RAID Controller (PERC) for internal hard drives
                                                    Two 73-GB hard drives (RAID 1) connected to PERC
                                                    Three Gigabit network interface controller (NIC) ports
                                                    Two optical host bus adapter (HBA) ports
Dell|EMC Fibre Channel storage system               See the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g for information on supported configurations
Gigabit Ethernet switch (two)                       See the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g for information on supported configurations
Dell|EMC Fibre Channel switch (two)                 Eight ports for two to six nodes
                                                    16 ports for seven or eight nodes

Table 1-3. Minimum Hardware Requirements—Single Node

Hardware Component                                  Configuration
PowerEdge system                                    Intel Xeon processor family
                                                    1 GB of RAM
                                                    Two 73-GB hard drives (RAID 1) connected to PERC
                                                    Two NIC ports
Dell|EMC Fibre Channel storage system (optional)    See the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g for information on supported configurations
Dell|EMC Fibre Channel switch (optional)            Eight ports

License Agreements

NOTE: Your Dell configuration includes a 30-day trial license of Oracle software. If you do not have a license for
this product, contact your Dell sales representative.

Important Documentation

For more information on specific hardware components, see the documentation included with your system.
For Oracle product information, see the How to Get Started guide in the Oracle CD kit.

Before You Begin

Before you install the Red Hat Enterprise Linux operating system, download the Red Hat Enterprise Linux Quarterly Update ISO images from the Red Hat Network website at rhn.redhat.com and burn these images to CDs.
To download the ISO images, perform the following steps:
1   Navigate to the Red Hat Network website at rhn.redhat.com.
2   Click Channels.
3   In the left menu, click Easy ISOs.
4   In the Easy ISOs page left menu, click All.
    The ISO images for all Red Hat products appear.
5   In the Channel Name menu, click the appropriate ISO image for your Red Hat Enterprise Linux software.
6   Download the ISOs for your Red Hat Enterprise Linux software as listed in your Solution Deliverable List (SDL) from the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g.
7   Burn the ISO images to CDs.

Installing and Configuring Red Hat Enterprise Linux

NOTICE: To ensure that the operating system is installed correctly, disconnect all external storage devices from
the system before you install the operating system.
This section describes the installation of the Red Hat Enterprise Linux AS operating system and the configuration of the operating system for Oracle Database deployment.

Installing Red Hat Enterprise Linux Using the Deployment CDs

1   Disconnect all external storage devices from the system.
2   Locate your Dell Deployment CD and the Red Hat Enterprise Linux AS EM64T CDs.
3   Insert the Dell Deployment CD 1 into the CD drive and reboot the system.
    The system boots to the Dell Deployment CD.
4   When the deployment menu appears, type 1 to select Oracle 10g R2 EE on Red Hat Enterprise Linux 4 U3 (x86_64).
5   When another menu asking deployment image source appears, type 1 to select Copy solution by Deployment CD.
    NOTE: This procedure may take several minutes to complete.
6   When prompted, insert Dell Deployment CD 2 and each Red Hat installation CD into the CD drive.
    A deployment partition is created and the contents of the CDs are copied to it. When the copy operation is completed, the system automatically ejects the last CD and boots to the deployment partition.
    When the installation is completed, the system automatically reboots and the Red Hat Setup Agent appears.
7   In the Red Hat Setup Agent Welcome window, click Next to configure your operating system settings.
    Do not create any operating system users at this time.
8   When prompted, specify a root password.
9   When the Network Setup window appears, click Next. You will configure network settings later.
10  When the Security Level window appears, disable the firewall. You may enable the firewall after completing the Oracle deployment.
11  Log in as root.

Configuring Red Hat Enterprise Linux

1   Log in as root.
2   Insert the Dell Deployment CD 2 into the CD drive and type the following commands:
    mount /dev/cdrom
    /media/cdrom/install.sh
    The contents of the CD are copied to the /usr/lib/dell/dell-deploy-cd directory. When the copy procedure is completed, type umount /dev/cdrom and remove the CD from the CD drive.
3   Type cd /dell-oracle-deployment/scripts/standard to navigate to the directory containing the scripts installed from the Dell Deployment CD.
    NOTE: Scripts discover and validate installed component versions and, when required, update components to supported levels.
4   Type ./005-oraclesetup.py to configure the Red Hat Enterprise Linux for Oracle installation.
5   Type source /root/.bash_profile to set the environment variables.
6   Type ./010-hwCheck.py to verify that the CPU, RAM, and disk sizes meet the minimum Oracle Database installation requirements.
    If the script reports that a parameter failed, update your hardware configuration and run the script again (see Table 1-2 and Table 1-3 for updating your hardware configuration).
7   Connect the external storage device.
8   Reload the HBA driver(s) using the rmmod and modprobe commands. For instance, for Emulex HBAs, reload the lpfc driver by issuing:
    rmmod lpfc
    modprobe lpfc
    For QLA HBAs, identify the drivers that are loaded (lsmod | grep qla), and reload these drivers.
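For example, assuming lsmod | grep qla shows a qla2300 module (a sample module name; the driver actually loaded depends on your HBA model), you would reload it by typing:
rmmod qla2300
modprobe qla2300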

Updating Your System Packages Using Red Hat Network

Red Hat periodically releases software updates to fix bugs, address security issues, and add new features. You can download these updates through the Red Hat Network (RHN) service. See the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g for the latest supported configurations before you use RHN to update your system software to the latest revisions.
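As a sketch, on a system that is already registered with RHN, you can typically download and apply all available package updates by typing the following as root:
up2date -u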
NOTE: If you are deploying Oracle Database on a single node, skip the following sections and see "Configuring and
Deploying Oracle Database 10g (Single Node)."

Verifying Cluster Hardware and Software Configurations

Before you begin cluster setup, verify the hardware installation, communication interconnections, and node software configuration for the entire cluster. The following sections provide setup information for hardware and software Fibre Channel cluster configurations.

Fibre Channel Cluster Setup

Your Dell Professional Services representative completed the setup of your Fibre Channel cluster. Verify the hardware connections and the hardware and software configurations as described in this section. Figure 1-1 and Figure 1-3 show an overview of the connections required for the cluster, and Table 1-4 summarizes the cluster connections.
Figure 1-1. Hardware Connections for a Fibre Channel Cluster
Table 1-4. Fibre Channel Hardware Interconnections

Cluster Component                       Connections
Each PowerEdge system node              One Category 5 enhanced (CAT 5e) or CAT 6 cable from public NIC to local area network (LAN)
                                        One CAT 5e or CAT 6 cable from private Gigabit NIC to Gigabit Ethernet switch
                                        One CAT 5e or CAT 6 cable from a redundant private Gigabit NIC to a redundant Gigabit Ethernet switch
                                        One fiber optic cable from optical HBA 0 to Fibre Channel switch 0
                                        One fiber optic cable from HBA 1 to Fibre Channel switch 1
Each Dell|EMC Fibre Channel             Two CAT 5e or CAT 6 cables connected to the LAN
storage system                          One to four fiber optic cable connections to each Fibre Channel switch; for example, for a four-port configuration:
                                        • One fiber optic cable from SPA port 0 to Fibre Channel switch 0
                                        • One fiber optic cable from SPA port 1 to Fibre Channel switch 1
                                        • One fiber optic cable from SPB port 0 to Fibre Channel switch 1
                                        • One fiber optic cable from SPB port 1 to Fibre Channel switch 0
Each Dell|EMC Fibre Channel switch      One to four fiber optic cable connections to the Dell|EMC Fibre Channel storage system
                                        One fiber optic cable connection to each PowerEdge system's HBA
Each Gigabit Ethernet switch            One CAT 5e or CAT 6 connection to the private Gigabit NIC on each PowerEdge system
                                        One CAT 5e or CAT 6 connection to the remaining Gigabit Ethernet switch
Verify that the following tasks are completed for your cluster:
•   All hardware is installed in the rack.
•   All hardware interconnections are set up as shown in Figure 1-1 and Figure 1-3, and as listed in Table 1-4.
•   All logical unit numbers (LUNs), redundant array of independent disks (RAID) groups, and storage groups are created on the Dell|EMC Fibre Channel storage system.
•   Storage groups are assigned to the nodes in the cluster.
Before continuing with the following sections, visually inspect all hardware and interconnections for correct installation.
Fibre Channel Hardware and Software Configurations
Each node must include the minimum hardware peripheral components as described in Table 1-2.
Each node must have the following software installed:
Red Hat Enterprise Linux software (see Table 1-1)
Fibre Channel HBA driver
The Fibre Channel storage system must be configured with the following:
A minimum of three LUNs created and assigned to the cluster storage group (see Table 1-5)
A minimum LUN size of 5 GB
Table 1-5. LUNs for the Cluster Storage Group

LUN         Minimum Size                            Number of Partitions    Used For
First LUN   512 MB                                  three of 128 MB each    Voting disk, Oracle Cluster Registry (OCR), and storage processor (SP) file
Second LUN  Larger than the size of your database   one                     Database
Third LUN   Minimum twice the size of your          one                     Flash Recovery Area
            second LUN

Cabling Your Storage System

You can configure your Oracle cluster storage system in a direct-attached configuration or a four-port SAN-attached configuration, depending on your needs. See the following procedures for both configurations.
Figure 1-2. Cabling in a Direct-Attached Fibre Channel Cluster
(The figure shows node 1 and node 2, each with two HBA ports, cabled directly to the SP-A and SP-B ports of the CX700 storage system.)
Direct-Attached Configuration
To configure your nodes in a direct-attached configuration (see Figure 1-2), perform the following steps:
1   Connect one optical cable from HBA0 on node 1 to port 0 of SP-A.
2   Connect one optical cable from HBA1 on node 1 to port 0 of SP-B.
3   Connect one optical cable from HBA0 on node 2 to port 1 of SP-A.
4   Connect one optical cable from HBA1 on node 2 to port 1 of SP-B.
Figure 1-3. Cabling in a SAN-Attached Fibre Channel Cluster
(The figure shows node 1 and node 2, each with two HBA ports, connected through Fibre Channel switches sw0 and sw1 to the SP-A and SP-B ports of the CX700 storage system.)
SAN-Attached Configuration
To configure your nodes in a four-port SAN-attached configuration (see Figure 1-3), perform the following steps:
1   Connect one optical cable from SP-A port 0 to Fibre Channel switch 0.
2   Connect one optical cable from SP-A port 1 to Fibre Channel switch 1.
3   Connect one optical cable from SP-A port 2 to Fibre Channel switch 0.
4   Connect one optical cable from SP-A port 3 to Fibre Channel switch 1.
5   Connect one optical cable from SP-B port 0 to Fibre Channel switch 1.
6   Connect one optical cable from SP-B port 1 to Fibre Channel switch 0.
7   Connect one optical cable from SP-B port 2 to Fibre Channel switch 1.
8   Connect one optical cable from SP-B port 3 to Fibre Channel switch 0.
9   Connect one optical cable from HBA0 on node 1 to Fibre Channel switch 0.
10  Connect one optical cable from HBA1 on node 1 to Fibre Channel switch 1.
11  Connect one optical cable from HBA0 on node 2 to Fibre Channel switch 0.
12  Connect one optical cable from HBA1 on node 2 to Fibre Channel switch 1.

Configuring Storage and Networking for Oracle RAC 10g

This section provides information and procedures for setting up a Fibre Channel cluster running a seed database:
•   Configuring the public and private networks
•   Securing your system
•   Verifying the storage configuration
•   Configuring shared storage for Cluster Ready Services (CRS) and Oracle Database
Oracle RAC 10g is a complex database configuration that requires an ordered list of procedures. To configure networks and storage in a minimal amount of time, perform the following procedures in order.

Configuring the Public and Private Networks

This section presents steps to configure the public and private cluster networks.
NOTE: Each node requires a unique public and private internet protocol (IP) address and an additional public
IP address to serve as the virtual IP address for the client connections and connection failover. The virtual IP address must belong to the same subnet as the public IP. All public IP addresses, including the virtual IP address, should be registered with Domain Naming Service and routable.
Depending on the number of NIC ports available, configure the interfaces as shown in Table 1-6.
Table 1-6. NIC Port Assignments

NIC Port    Three Ports Available       Four Ports Available
1           Public IP and virtual IP    Public IP
2           Private IP (bonded)         Private IP (bonded)
3           Private IP (bonded)         Private IP (bonded)
4           NA                          Virtual IP
Configuring the Public Network
NOTE: Ensure that your public IP address is a valid, routable IP address.
If you have not already configured the public network, do so by performing the following steps on each node:
1   Log in as root.
2   Edit the network device file /etc/sysconfig/network-scripts/ifcfg-eth#, where # is the number of the network device, and configure the file as follows:
    DEVICE=eth0
    ONBOOT=yes
    IPADDR=<Public IP Address>
    NETMASK=<Subnet mask>
    BOOTPROTO=static
    HWADDR=<MAC Address>
    SLAVE=no
3   Edit the /etc/sysconfig/network file, and, if necessary, replace localhost.localdomain with the fully qualified public node name.
    For example, the line for node 1 would be as follows:
    HOSTNAME=node1.domain.com
4   Type:
    service network restart
5   Type ifconfig to verify that the IP addresses are set correctly.
6   To check your network configuration, ping each public IP address from a client on the LAN outside the cluster.
7   Connect to each node to verify that the public network is functioning and type ssh <public IP> to verify that the secure shell (ssh) command is working.

Configuring the Private Network Using Bonding
Before you deploy the cluster, configure the private cluster network to allow the nodes to communicate with each other. This involves configuring network bonding and assigning a private IP address and hostname to each node in the cluster.
To set up network bonding for Broadcom or Intel NICs and configure the private network, perform the following steps on each node:
1   Log in as root.
2   Add the following line to the /etc/modprobe.conf file:
    alias bond0 bonding
3   For high availability, edit the /etc/modprobe.conf file and set the option for link monitoring.
    The default value for miimon is 0, which disables link monitoring. Change the value to 100 milliseconds initially, and adjust it as needed to improve performance as shown in the following example. Type:
    options bonding miimon=100 mode=1
4   In the /etc/sysconfig/network-scripts/ directory, create or edit the ifcfg-bond0 configuration file.
    For example, using sample network parameters, the file would appear as follows:
    DEVICE=bond0
    IPADDR=192.168.0.1
    NETMASK=255.255.255.0
    NETWORK=192.168.0.0
    BROADCAST=192.168.0.255
    ONBOOT=yes
    BOOTPROTO=none
    USERCTL=no
    DEVICE=bondn is the required name for the bond, where n specifies the bond number. IPADDR is the private IP address. The entries for NETMASK, NETWORK, and BROADCAST are optional.
    To use bond0 as a virtual device, you must specify which devices will be bonded as slaves.
5   For each device that is a bond member, perform the following steps:
    a   In the directory /etc/sysconfig/network-scripts/, edit the ifcfg-ethn file, containing the following lines:
        DEVICE=ethn
        HWADDR=<MAC ADDRESS>
        ONBOOT=yes
        TYPE=Ethernet
        USERCTL=no
        MASTER=bond0
        SLAVE=yes
        BOOTPROTO=none
    b   Type service network restart and ignore any warnings.
6   On each node, type ifconfig to verify that the private interface is functioning.
    The private IP address for the node should be assigned to the private interface bond0.
7   When the private IP addresses are set up on every node, ping each IP address from one node to ensure that the private network is functioning.
8   Connect to each node and verify that the private network and ssh are functioning correctly by typing:
    ssh <private IP>
9   On each node, modify the /etc/hosts file by adding the following lines:
    127.0.0.1 localhost.localdomain localhost
    <private IP node1> <private hostname node1>
    <private IP node2> <private hostname node2>
    <public IP node1> <public hostname node1>
    <public IP node2> <public hostname node2>
    <virtual IP node1> <virtual hostname node1>
    <virtual IP node2> <virtual hostname node2>
    NOTE: The examples in this and the following step are for a two-node configuration; add lines for each additional node.
10  On each node, create or modify the /etc/hosts.equiv file by listing all of your public IP addresses or host names. For example, if you have one public hostname, one virtual IP address, and one virtual hostname for each node, add the following lines:
    <public hostname node1> oracle
    <public hostname node2> oracle
    <virtual IP or hostname node1> oracle
    <virtual IP or hostname node2> oracle
11  Log in as oracle, and connect to each node to verify that the remote shell (rsh) command is working by typing:
    rsh <public hostname nodex>
    where x is the node number.
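As an illustration, a completed /etc/hosts file for a two-node cluster might resemble the following (all host names and addresses here are sample values, not requirements):
127.0.0.1 localhost.localdomain localhost
192.168.0.1 node1-priv
192.168.0.2 node2-priv
155.16.10.1 node1.domain.com node1
155.16.10.2 node2.domain.com node2
155.16.10.201 node1-vip.domain.com node1-vip
155.16.10.202 node2-vip.domain.com node2-vip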

Verifying the Storage Configuration

While configuring the clusters, create partitions on your Fibre Channel storage system. In order to create the partitions, all the nodes must be able to detect the external storage devices. To verify that each node can detect each storage LUN or logical disk, perform the following steps:
1   For a Dell|EMC Fibre Channel storage system, verify that the EMC Navisphere® agent and the correct version of PowerPath (see Table 1-7) are installed on each node, and that each node is assigned to the correct storage group in your EMC Navisphere software. See the documentation that came with your Dell|EMC Fibre Channel storage system for instructions.
    NOTE: The Dell Professional Services representative who installed your cluster performed this step. If you reinstall the software on a node, you must perform this step.
2   Visually verify that the storage devices and the nodes are connected correctly to the Fibre Channel switch (see Figure 1-1 and Table 1-4).
3   Verify that you are logged in as root.
4   On each node, type:
    more /proc/partitions
    A list of the LUNs or logical disks that are detected by the node is displayed, as well as the partitions that are created on those external devices. PowerPath pseudo devices appear in the list, such as /dev/emcpowera, /dev/emcpowerb, and /dev/emcpowerc.
    NOTE: The listed devices vary depending on how your storage system is configured.
5   In the /proc/partitions file, ensure that:
    •   All PowerPath pseudo devices appear in the file with similar device names across all nodes. For example, /dev/emcpowera, /dev/emcpowerb, and /dev/emcpowerc.
    •   The Fibre Channel LUNs appear as SCSI devices, and each node is configured with the same number of LUNs.
        For example, if the node is configured with a SCSI drive or RAID container attached to a Fibre Channel storage device with three logical disks, sda identifies the node's internal drive or RAID container, and emcpowera, emcpowerb, and emcpowerc identify the LUNs (or PowerPath pseudo devices).
    If the external storage devices do not appear in the /proc/partitions file, reboot the node.
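For reference, the relevant lines of /proc/partitions on a correctly configured node might look something like the following (the major/minor numbers and block counts are sample values only):
major minor  #blocks  name
   8     0  71041024  sda
   8     1    104391  sda1
 120     0  52428800  emcpowera
 120    16 104857600  emcpowerb
 120    32 209715200  emcpowerc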

Disable SELinux

To run the Oracle database, you must disable SELinux.
To temporarily disable SELinux, perform the following steps:
1   Log in as root.
2   At the command prompt, type:
    setenforce 0
To permanently disable SELinux, perform the following steps on all the nodes:
1   Open your grub.conf file.
2   Locate the kernel command line and append the following option:
    selinux=0
    For example:
    kernel /vmlinuz-2.6.9-34.ELlargesmp ro root=LABEL=/ apic rhgb quiet selinux=0
3   Reboot your system.
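To confirm the current SELinux state at any point (a quick verification sketch), you can typically type the following and check that it reports Permissive or Disabled:
getenforce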

Configuring Shared Storage for Oracle Clusterware and the Database Using OCFS2

Before you begin using OCFS2:
•   Find your kernel version by typing:
    uname -r
    and then download the OCFS2 packages for your kernel version from http://oss.oracle.com/projects/ocfs2/files/RedHat/RHEL4/x86_64/1.2.3-1.
•   Download the ocfs2-tools packages from http://oss.oracle.com/projects/ocfs2-tools/files/RedHat/RHEL4/x86_64/1.2.1-1.
•   Install all the ocfs2 and ocfs2-tools packages by typing:
    rpm -ivh *
To configure storage using OCFS2:
1   On the first node, log in as root.
2   Perform the following steps:
    a   Start the X Window System by typing:
        startx
    b   Generate the OCFS2 configuration file (/etc/ocfs2/cluster.conf) with a default cluster name of ocfs2 by typing the following in a terminal:
        ocfs2console
    c   From the menu, click Cluster→ Configure Nodes.
        If the cluster is offline, the console will start it. A message window appears displaying that information. Close the message window.
        The Node Configuration window appears.
    d   To add nodes to the cluster, click Add. Enter the node name (same as the host name) and the private IP. Retain the default value of the port number. After entering all the details, click OK.
        Repeat this step to add all the nodes to the cluster.
    e   When all the nodes are added, click Apply and then click Close in the Node Configuration window.
    f   From the menu, click Cluster→ Propagate Configuration.
        The Propagate Cluster Configuration window appears. Wait until the message Finished appears on the window and then click Close.
    g   Select File→ Quit.
3   On all the nodes, enable the cluster stack on startup by typing:
    /etc/init.d/o2cb enable
4   Change the O2CB_HEARTBEAT_THRESHOLD value on all the nodes using the following steps:
    a   Stop the O2CB service on all the nodes by typing:
        /etc/init.d/o2cb stop
    b   Edit the O2CB_HEARTBEAT_THRESHOLD value in /etc/sysconfig/o2cb to 61 on all the nodes.
    c   Start the O2CB service on all the nodes by typing:
        /etc/init.d/o2cb start
5   On the first node, for a Fibre Channel cluster, create one partition on each of the other two external storage devices with fdisk:
    a   Create a primary partition for the entire device by typing:
        fdisk /dev/emcpowerx
        Type h for help within the fdisk utility.
    b   Verify that the new partition exists by typing:
        cat /proc/partitions
    c   If you do not observe the new partition, type:
        sfdisk -R /dev/<device name>
    NOTE: The following steps use the sample values /u01, /u02, and /u03 for mount points and u01, u02, and u03 as labels.
6   On any one node, format the external storage devices with 4 K block size, 128 K cluster size, and 4 node slots (node slots refer to the number of cluster nodes) using the command line utility mkfs.ocfs2 as follows:
    mkfs.ocfs2 -b 4K -C 128K -N 4 -L u01 /dev/emcpowera1
    mkfs.ocfs2 -b 4K -C 128K -N 4 -L u02 /dev/emcpowerb1
    mkfs.ocfs2 -b 4K -C 128K -N 4 -L u03 /dev/emcpowerc1
    NOTE: For more information about setting the format parameters of clusters, see http://oss.oracle.com/projects/ocfs2/dist/documentation/ocfs2_faq.html.
7   On each node, perform the following steps:
    a   Create mount points for each OCFS2 partition. To perform this procedure, create the target partition directories and set the ownerships by typing:
        mkdir -p /u01 /u02 /u03
        chown -R oracle.dba /u01 /u02 /u03
    b   On each node, modify the /etc/fstab file by adding the following lines for a Fibre Channel storage system:
        /dev/emcpowera1 /u01 ocfs2 _netdev,datavolume,nointr 0 0
        /dev/emcpowerb1 /u02 ocfs2 _netdev,datavolume,nointr 0 0
        /dev/emcpowerc1 /u03 ocfs2 _netdev,datavolume,nointr 0 0
        Make appropriate entries for all OCFS2 volumes.
    c   On each node, type the following to mount all the volumes listed in the /etc/fstab file:
        mount -a -t ocfs2
    d   On each node, add the following command to the /etc/rc.local file:
        mount -a -t ocfs2
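As a quick verification sketch, you can list the mounted OCFS2 volumes on any node and confirm that all three mount points are present:
mount -t ocfs2
df -h /u01 /u02 /u03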

Configuring Shared Storage for Oracle Clusterware and the Database Using ASM

Configuring Shared Storage for Oracle Clusterware
This section provides instructions for configuring shared storage for Oracle Clusterware.

Configuring Shared Storage Using the RAW Device Interface
1   On the first node, create three partitions on an external storage device with the fdisk utility:
    Type fdisk /dev/emcpowerx and create three partitions of 150 MB each for the Cluster Repository, the Voting disk, and the Oracle system parameter file.
2   Verify the new partitions by typing:
    more /proc/partitions
    On all the nodes, if the new partitions do not appear in the /proc/partitions file, type:
    sfdisk -R /dev/<device name>
3   On all the nodes, perform the following steps:
    a   Edit the /etc/sysconfig/rawdevices file and add the following lines for a Fibre Channel cluster:
        /dev/raw/votingdisk /dev/emcpowera1
        /dev/raw/ocr.dbf /dev/emcpowera2
        /dev/raw/spfile+ASM.ora /dev/emcpowera3
    b   Type udevstart to create the RAW devices.
    c   Type service rawdevices restart to restart the RAW Devices Service.
    NOTE: If the three partitions on PowerPath pseudo devices are not consistent across the nodes, modify your /etc/sysconfig/rawdevices configuration file accordingly.

Configuring Shared Storage for the Database Using ASM
NOTE: Shared storage configuration using ASM can be done either using the RAW device interface or the Oracle ASM library driver.
To configure your cluster using ASM, perform the following steps on all nodes:
1   Log in as root.
2   On all the nodes, create one partition on each of the other two external storage devices with the fdisk utility:
    a   Create a primary partition for the entire device by typing:
        fdisk /dev/emcpowerx
        Type h for help within the fdisk utility.
    b   Verify that the new partition exists by typing:
        cat /proc/partitions
        If you do not see the new partition, type:
        sfdisk -R /dev/<device name>

Configuring Shared Storage Using the RAW Device Interface
1   Edit the /etc/sysconfig/rawdevices file and add the following lines for a Fibre Channel cluster:
    /dev/raw/ASM1 /dev/emcpowerb1
    /dev/raw/ASM2 /dev/emcpowerc1
2   Create the RAW devices by typing:
    udevstart
3   Restart the RAW Devices Service by typing:
    service rawdevices restart
4   To add an additional ASM disk (for example, ASM3), edit the /etc/udev/scripts/raw-dev.sh file on all the nodes and add the appropriate entries as shown below (the ASM3 entries are the additions):
    MAKEDEV raw
    mv /dev/raw/raw1 /dev/raw/votingdisk
    mv /dev/raw/raw2 /dev/raw/ocr.dbf
    mv /dev/raw/raw3 /dev/raw/spfile+ASM.ora
    mv /dev/raw/raw4 /dev/raw/ASM1
    mv /dev/raw/raw5 /dev/raw/ASM2
    mv /dev/raw/raw6 /dev/raw/ASM3
    chmod 660 /dev/raw/{votingdisk,ocr.dbf,spfile+ASM.ora,ASM1,ASM2,ASM3}
    chown oracle.dba /dev/raw/{votingdisk,ocr.dbf,spfile+ASM.ora,ASM1,ASM2,ASM3}
    To add additional ASM disks, type udevstart on all the nodes and repeat step 4.

Configuring Shared Storage Using the ASM Library Driver
1   Log in as root.
2   Open a terminal window and perform the following steps on all nodes:
    a   Type service oracleasm configure
    b   Type the following inputs for all the nodes:
        Default user to own the driver interface [ ]: oracle
        Default group to own the driver interface []: dba
        Start Oracle ASM library driver on boot (y/n) [n]: y
        Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
3   On the first node, in the terminal window, type the following and press <Enter>:
    service oracleasm createdisk ASM1 /dev/emcpowerb1
    service oracleasm createdisk ASM2 /dev/emcpowerc1
4   Repeat step 3 for any additional ASM disks that need to be created.
5   Verify that the ASM disks are created and marked for ASM usage.
    In the terminal window, type the following and press <Enter>:
    service oracleasm listdisks
    The disks that you created in step 3 appear. For example:
    ASM1
    ASM2
6   Ensure that the remaining nodes are able to access the ASM disks that you created in step 3.
    On each remaining node, open a terminal, type the following, and press <Enter>:
    service oracleasm scandisks
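If you used the RAW device interface, one way to spot-check the bindings on each node (a verification sketch) is to query all raw devices:
raw -qa
Each configured binding, such as /dev/raw/ASM1, should be reported as bound to a block device.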

Installing Oracle RAC 10g

This section describes the steps required to install Oracle RAC 10g, which involves installing CRS and installing the Oracle Database 10g software. Dell recommends that you create a seed database to verify that the cluster works correctly before you deploy it in a production environment.

Before You Begin

To prevent failures during the installation procedure, configure all the nodes with identical system clock settings.
Synchronize your node system clock with a Network Time Protocol (NTP) server. If you cannot access an NTP server, perform one of the following procedures:
•   Ensure that the system clock on the Oracle Database software installation node is set to a later time than the remaining nodes.
•   Configure one of your nodes as an NTP server to synchronize the remaining nodes in the cluster (a configuration sketch follows).
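As a minimal sketch of the NTP-server option, assume node1's private address 192.168.0.1 (a sample value from the bonding example above) serves as the time source. On each remaining node, you could add the following line to /etc/ntp.conf:
server 192.168.0.1
Then restart and enable the service as root:
service ntpd restart
chkconfig ntpd on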

Installing Oracle Clusterware

1   Log in as root.
2   Start the X Window System by typing:
    startx
3   Open a terminal window and type:
    xhost +
4   Mount the Oracle Clusterware CD.
5   Type:
    <CD_mountpoint>/cluvfy/runcluvfy.sh stage -pre crsinst -n node1,node2 -r 10gR2 -verbose
    where node1 and node2 are the public host names.
    If your system is not configured correctly, troubleshoot the issues and then repeat the runcluvfy.sh command, above.
    If your system is configured correctly, the following message appears:
    Pre-check for cluster services setup was successful on all the nodes.
6   Type:
    su - oracle
7   Type the following commands to start the Oracle Universal Installer:
    unset ORACLE_HOME
    <CD_mountpoint>/runInstaller
    The following message appears:
    Was 'rootpre.sh' been run by root? [y/n] (n)
8   Type y to proceed.
9   In the Welcome window, click Next.
10  In the Specify Home Details window, change the Oracle home path to /crs/oracle/product/10.2.0/crs and click Next.
11  In the Product-Specific Prerequisite Checks window, ensure that Succeeded appears in the Status column for each system check, and then click Next.
12  In the Specify Cluster Configuration window, add the nodes that will be managed by Oracle Clusterware.
    a   Click Add.
    b   Enter a name for the Public Node Name, Private Node Name, and Virtual Host Name, and then click OK.
    c   Repeat step a and step b for the remaining nodes.
    d   In the Cluster Name field, type a name for your cluster.
        The default cluster name is crs.
    e   Click Next.
13  In the Specify Network Interface Usage window, ensure that the public and private interface names are correct.
    To modify an interface, perform the following steps:
    a   Select the interface name and click Edit.
    b   In the Edit private interconnect type window in the Interface Type box, select the appropriate interface type and then click OK.
    c   In the Specify Network Interface Usage window, ensure that the public and private interface names are correct, and then click Next.
14  In the Specify Oracle Cluster Registry (OCR) Location window, perform the following steps:
    a   In the OCR Configuration box, select External Redundancy.
    b   In the Specify OCR Location field, type:
        /dev/raw/ocr.dbf
        or /u01/ocr.dbf if using OCFS2.
    c   Click Next.
15  In the Specify Voting Disk Location window, perform the following steps:
    a   In the OCR Configuration box, select External Redundancy.
    b   In the Specify OCR Location field, type:
        /dev/raw/votingdisk
        or /u01/votingdisk if using OCFS2.
    c   Click Next.
16  In the Summary window, click Install.
    Oracle Clusterware is installed on your system. When completed, the Execute Configuration scripts window appears.
17  Follow the instructions in the window and then click OK.
    NOTE: If root.sh hangs while formatting the Voting disk, apply Oracle patch 4679769 and then repeat this step.
18  In the Configuration Assistants window, ensure that Succeeded appears in the Status column for each tool name.
    Next, the End of Installation window appears.
19  Click Exit.
20  On all nodes, perform the following steps:
    a   Verify the Oracle Clusterware installation by typing the following command:
        olsnodes -n -v
        A list of the public node names of all nodes in the cluster appears.
    b   Type:
        crs_stat -t
        All running Oracle Clusterware services appear.
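As an additional spot check (a sketch, not part of the original procedure), you can confirm on each node that the Clusterware daemons are healthy by typing:
crsctl check crs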

Installing the Oracle Database 10g Software

1   Log in as root, and type:
    cluvfy stage -pre dbinst -n node1,node2 -r 10gR2 -verbose
    where node1 and node2 are the public host names.
    If your system is not configured correctly, see "Troubleshooting" for more information.
    If your system is configured correctly, the following message appears:
    Pre-check for database installation was successful.
2   As user root, type:
    xhost +
3   As user root, mount the Oracle Database 10g CD.
4   Log in as oracle, and type:
    <CD_mountpoint>/runInstaller
    The Oracle Universal Installer starts.
5   In the Welcome window, click Next.
6   In the Select Installation Type window, select Enterprise Edition and click Next.
7   In the Specify Home Details window in the Path field, verify that the complete Oracle home path is /opt/oracle/product/10.2.0/db_1 and click Next.
    NOTE: The Oracle home name in this step must be different from the Oracle home name that you identified during the CRS installation. You cannot install the Oracle 10g Enterprise Edition with RAC into the same home name that you used for CRS.
8   In the Specify Hardware Cluster Installation Mode window, click Select All and click Next.
9   In the Product-Specific Prerequisite Checks window, ensure that Succeeded appears in the Status column for each system check, and then click Next.
    NOTE: In some cases, a warning may appear regarding swap size. Ignore the warning and click Yes to proceed.
10  In the Select Configuration Option window, select Install database Software only and click Next.
11  In the Summary window, click Install.
    The Oracle Database software is installed on your cluster. Next, the Execute Configuration Scripts window appears.
12  Follow the instructions in the window and click OK.
13  In the End of Installation window, click Exit.

RAC Post Deployment Fixes and Patches

This section provides the required fixes and patch information for deploying Oracle RAC 10g.
Reconfiguring the CSS Miscount for Proper EMC PowerPath Failover
When an HBA, switch, or EMC Storage Processor (SP) failure occurs, the total PowerPath failover time to an alternate device may exceed 105 seconds. The default CSS disk time-out for Oracle 10g R2 version
10.2.0.1 is 60 seconds. To ensure that the PowerPath failover procedure functions correctly, increase the CSS time-out to 120 seconds.
For more information, see Oracle Metalink Note 294430.1 on the Oracle Metalink website at metalink.oracle.com.
To increase the CSS time-out:
1   Shut down the database and CRS on all nodes except on one node.
2   On the running node, log in as user root and type:
    crsctl set css misscount 120
3   Reboot all nodes for the CSS setting to take effect.
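After the nodes restart, you can typically confirm the new value (a verification sketch) by typing the following as root:
crsctl get css misscount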
Installing the Oracle Database 10g 10.2.0.2 Patchset
Downloading and Extracting the Installation Software
1   On the first node, log in as oracle.
2   Create a folder for the patches and utilities at /opt/oracle/patches.
3   Open a web browser and navigate to the Oracle Support website at metalink.oracle.com.
4   Log in to your Oracle Metalink account.
5   Search for the patch number 4547817 with Linux x86-64 (AMD64/EM64T) as the platform.
6   Download the patch to the /opt/oracle/patches directory.
7   To unzip the downloaded zip file, type the following in a terminal window and press <Enter>:
    unzip p4547817_10202_LINUX-x86-64.zip
Upgrading Oracle Clusterware Installation
1   On the first node, log in as root.
2   Shut down Oracle Clusterware. To do so, type the following in the terminal window and press <Enter>:
    crsctl stop crs
3   On the remaining nodes, open a terminal window and repeat step 1 and step 2.
4   On the first node, log in as oracle.
5   In the terminal window, type the following and press <Enter>:
    export ORACLE_HOME=/crs/oracle/product/10.2.0/crs
6   Start the Oracle Universal Installer. To do so, type the following in the terminal window and press <Enter>:
    cd /opt/oracle/patches/Disk1/
    ./runInstaller
    The Welcome screen appears.
7   Click Next.
8   In the Specify Home Details screen, click Next.
9   In the Specify Hardware Cluster Installation Mode screen, click Next.
10  In the Summary screen, click Install.
    The Oracle Universal Installer scans your system, displays all the patches that are required to be installed, and installs them on your system. When the installation is completed, the End of Installation screen appears.
    NOTE: This procedure may take several minutes to complete.
11  Read all the instructions that are displayed in the message window, which appears.
    NOTE: Do not shut down the Oracle Clusterware daemons, as you already performed this procedure in step 1 and step 2.
12  Open a terminal window.
13  Log in as root.
14  Type the following and press <Enter>:
    $ORA_CRS_HOME/install/root102.sh
15  Repeat step 12 through step 14 on the remaining nodes, one node at a time.
16  On the first node, return to the End of Installation screen.
17  Click Exit.
18  Click Yes to exit the Oracle Universal Installer.

Upgrading the RAC Installation
1   On the first node, open a terminal window.
2   Log in as oracle.
3   Run the Oracle Universal Installer from the same node that you installed the Oracle Database software.
    a   On the first node, open a terminal window.
    b   Log in as oracle.
    c   Shut down the Oracle Clusterware node applications on all nodes. In the terminal window, type the following and press <Enter>:
        $ORACLE_HOME/bin/srvctl stop nodeapps -n <nodename>
        NOTE: Ignore any warning messages that may appear.
4   Repeat step 3 (c) on the remaining nodes and change the nodename of that given node.
5   On the first node, open a terminal window.
6   Log in as oracle.
7   Open a terminal window.
8   Type the following and press <Enter>:
    export ORACLE_HOME=/opt/oracle/product/10.2.0/db_1
9   Start the Oracle Universal Installer. To do so, type the following in the terminal window, and press <Enter>:
    cd /opt/oracle/patches/Disk1/
    ./runInstaller
    The Welcome screen appears.
10  Click Next.
11  In the Specify Home Details screen, click Next.
12  In the Specify Hardware Cluster Installation Mode screen, click Next.
13  In the Summary screen, click Install.
    The Oracle Universal Installer scans your system, displays all the patches that are required to be installed, and installs them on your system. When the installation is completed, the End of Installation screen appears.
    Next, a message window appears, prompting you to run root.sh as user root.
14  Open a terminal window.
15  Type the following and press <Enter>:
    /opt/oracle/product/10.2.0/db_1/root.sh
16  Repeat step 14 and step 15 on the remaining nodes, one node at a time.
    When the installation is completed, the End of Installation screen appears.
    NOTE: This procedure may take several minutes to complete.
17  In the End of Installation screen, click Exit.
18  Click Yes to exit the Oracle Universal Installer.
19  On the first node, open a terminal window.
20  Log in as oracle.
21  Type the following and press <Enter>:
    srvctl start nodeapps -n <nodename>
    where <nodename> is the public host name of the node.
22  On all the remaining nodes, shut down CRS by issuing the following command:
    crsctl stop crs
23  As the user oracle, from the node where you applied the patchset, copy /opt/oracle/product/10.2.0/db_1/rdbms/lib/libknlopt.a to all the other nodes in the cluster. For example, to copy it from node1 to node2, type the following:
    scp /opt/oracle/product/10.2.0/db_1/rdbms/lib/libknlopt.a node2:/opt/oracle/product/10.2.0/db_1/rdbms/lib/libknlopt.a
    NOTE: Do not perform this step as root.
24  Remake the Oracle binary on all the nodes by issuing the following commands on each node:
    cd /opt/oracle/product/10.2.0/db_1/rdbms/lib
    make -f ins_rdbms.mk ioracle
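Once the binaries are relinked, a quick way to confirm the new patchset level (a sketch) is to check the client banner as user oracle:
sqlplus -V
The reported release should now be 10.2.0.2.0.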

Configuring the Listener

This section describes the steps to configure the listener, which is required for remote client connection to a database.
On one node only, perform the following steps:
1   Log in as root.
2   Start the X Window System by typing:
    startx
3   Open a terminal window and type:
    xhost +
4   As the user oracle, type netca to start the Net Configuration Assistant.
5   Select Cluster Configuration and click Next.
6   In the TOPSNodes window, click Select All Nodes and click Next.
7   In the Welcome window, select Listener Configuration and click Next.
8   In the Listener Configuration→ Listener window, select Add and click Next.
9   In the Listener Configuration→ Listener Name window, type LISTENER in the Listener Name field and click Next.
10  In the Listener Configuration→ Select Protocols window, select TCP and click Next.
11  In the Listener Configuration→ TCP/IP Protocol window, select Use the standard port number of 1521 and click Next.
12  In the Listener Configuration→ More Listeners? window, select No and click Next.
13  In the Listener Configuration Done window, click Next.
14  Click Finish.
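When NETCA completes, you can typically verify that the listener is running from the oracle account (a verification sketch):
lsnrctl status LISTENER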

Creating the Seed Database Using OCFS2

1   On the first node, as user oracle, start the Database Configuration Assistant (DBCA) by typing:
    dbca -datafileDestination /u02
2   In the Welcome window, select Oracle Real Application Cluster Database and click Next.
3   In the Operations window, click Create a Database and click Next.
4   In the Node Selection window, click Select All and click Next.
5   In the Database Templates window, click Custom Database and click Next.
6   In the Database Identification window, enter a Global Database Name such as racdb and click Next.
7   In the Management Options window, click Next.
8   In the Database Credentials window:
    a   Click Use the same password for all accounts.
    b   Complete password selections and entries.
    c   Click Next.
9   In the Storage Options window, select Cluster File System and click Next.
10  In the Database File Locations window, click Next.
11  In the Recovery Configuration window:
    a   Click Specify Flash Recovery Area.
    b   Click Browse and select /u03.
    c   Specify the flash recovery size.
    d   Click Next.
12  In the Database Content window, click Next.
13  In the Database Services window, click Next.
14  In the Initialization Parameters window, if your cluster has more than four nodes, change the Shared Pool value to 500 MB, and click Next.
15  In the Database Storage window, click Next.
16  In the Creation Options window, select Create Database and click Finish.
17  In the Summary window, click OK to create the database.
    NOTE: The seed database may take more than an hour to create.
    NOTE: If you receive an Enterprise Manager Configuration Error during the seed database creation, click OK to ignore the error.
    When the database creation is completed, the Password Management window appears.
18  Click Exit.
    A message appears indicating that the cluster database is starting on all the nodes.
19  On each node, perform the following steps:
    a   Determine the database instance that exists on that node by typing:
        srvctl status database -d <database name>
    b   Add the ORACLE_SID environment variable entry in the user profile of oracle by typing:
        echo "export ORACLE_SID=racdbx" >> /home/oracle/.bash_profile
        source /home/oracle/.bash_profile
        where racdbx is the database instance identifier assigned to the node. This example assumes that racdb is the global database name that you defined in DBCA.
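As a final spot check (a sketch, assuming the instance on the current node is racdb1), you can connect locally as user oracle and list the active instances:
export ORACLE_SID=racdb1
sqlplus / as sysdba
SQL> SELECT * FROM v$active_instances;
Each cluster node should appear in the output.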

Creating the Seed Database Using ASM

This section contains procedures for creating the seed database using Oracle ASM and for verifying the seed database.
Perform the following steps:
1   Log in as root, and type:
    cluvfy stage -pre dbcfg -n node1,node2 -d $ORACLE_HOME -verbose
    where node1 and node2 are the public host names.
    If your system is not configured correctly, see "Troubleshooting" for more information.
    If your system is configured correctly, the following message appears:
    Pre-check for database configuration was successful.
2   On the first node, as the user oracle, type dbca & to start the Oracle Database Creation Assistant (DBCA).
3   In the Welcome window, select Oracle Real Application Cluster Database and click Next.
4   In the Operations window, click Create a Database and click Next.
5   In the Node Selection window, click Select All and click Next.
6   In the Database Templates window, click Custom Database and click Next.
7   In the Database Identification window, enter a Global Database Name, such as racdb, and click Next.
8   In the Management Options window, click Next.
9   In the Database Credentials window, select a password option, enter the appropriate password information (if required), and click Next.
10  In the Storage Options window, click Automatic Storage Management (ASM) and click Next.
11  In the Create ASM Instance window, perform the following steps:
    a   In the SYS password field, type a password.
    b   Select Create server parameter file (SPFILE).
    c   In the Server Parameter Filename field, type:
        /dev/raw/spfile+ASM.ora
    d   Click Next.
12  When a message appears indicating that DBCA is ready to create and start the ASM instance, click OK.
13  Under ASM Disk Groups, click Create New.
14  In the Create Disk Group window, perform the following steps:
    a   Enter a name for the disk group to be created, such as databaseDG, select External Redundancy, and then select the disks to include in the disk group.
        If you are using the RAW device interface, select /dev/raw/ASM1.
    b   If you are using the ASM library driver and you cannot access candidate disks, click Change Disk Discovery String, type ORCL:* as the string, and then select ORCL:ASM1.
    c   Click OK.
        A window appears indicating that disk group creation is in progress.
        The first ASM disk group is created on your cluster. Next, the ASM Disks Groups window appears.
15  Repeat step 14 for the remaining ASM disk group, using flashbackDG as the disk group name.
16  In the ASM Disk Groups window, select the disk group that you would like to use for Database Storage (for example, databaseDG) and click Next.
17  In the Database File Locations window, select Use Oracle-Managed Files and click Next.
18  In the Recovery Configuration window, click Browse, select the flashback group that you created in step 15 (for example, flashbackDG), change the Flash Recovery Area size as needed, and click Next.
19  In the Database Services window, configure your services (if required) and then click Next.
20  In the Initialization Parameters window, perform the following steps:
    a   Select Custom.
    b   In Shared Memory Management, select Automatic.
    c   In the SGA Size and PGA Size windows, enter the appropriate information.
    d   Click Next.
21  In the Database Storage window, click Next.
22  In the Creation Options window, select Create Database and click Finish.
23  In the Summary window, click OK to create the database.
    NOTE: This procedure may take an hour or more to complete.
    When the database creation is completed, the Password Management window appears.
24  Click Password Management to assign specific passwords to authorized users (if required). Otherwise, click Exit.
    A message appears indicating that the cluster database is being started on all nodes.
25  Perform the following steps on each node:
    a   Determine the database instance that exists on that node by typing:
        srvctl status database -d <database name>
    b   Type the following commands to add the ORACLE_SID environment variable entry in the user profile:
        echo "export ORACLE_SID=racdbx" >> /home/oracle/.bash_profile
        source /home/oracle/.bash_profile
        where racdbx is the database instance identifier assigned to the node. This example assumes that racdb is the global database name that you defined in DBCA.
26  On one node, type:
    srvctl status database -d dbname
    where dbname is the global identifier name that you defined for the database in DBCA.
    If the database instances are running, confirmation appears on the screen.
    If the database instances are not running, type:
    srvctl start database -d dbname
    where dbname is the global identifier name that you defined for the database in DBCA.

Securing Your System

oracle
To prevent unauthorized users from accessing your system, Dell recommends that you disable rsh after you install the Oracle software.
To disable rsh, type:
chkconfig rsh off
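To confirm the change (a quick check; rsh runs under xinetd on Red Hat Enterprise Linux), list its chkconfig state:
chkconfig --list rsh
The service should be reported as off.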

Setting the Password for the User oracle

Dell strongly recommends that you set a password for the user oracle to protect your system. Complete the following steps to create the oracle password:
1 Log in as root.
2 Type passwd oracle and follow the instructions on the screen to create the oracle password.
NOTE: Additional security setup may be performed according to the site policy, provided the normal database operation is not disrupted.

Configuring and Deploying Oracle Database 10g (Single Node)

This section provides information about completing the initial setup or completing the reinstallation procedures as described in "Installing and Configuring Red Hat Enterprise Linux." This section covers the following topics:
Configuring the Public Network
Configuring Database Storage
Installing the Oracle Database
Configuring the Listener
Creating the Seed Database

Configuring the Public Network

Ensure that your public network is functioning and that an IP address and host name are assigned to your system.

Configuring Database Storage

Configuring Database Storage Using the ext3 File System
If you have an additional storage device, perform the following steps:
1 Log in as root.
2 Type:
cd /opt/oracle
3 Type:
mkdir oradata recovery
4 Using the fdisk utility, create a partition where you want to store your database files (for example, sdb1 if your storage device is sdb).
5 Using the fdisk utility, create a partition where you want to store your recovery files (for example, sdc1 if your storage device is sdc).
6 Verify the new partitions by typing:
cat /proc/partitions
If you do not detect the new partitions, type:
sfdisk -R /dev/sdb
sfdisk -R /dev/sdc
7 Type:
mke2fs -j /dev/sdb1
mke2fs -j /dev/sdc1
8 Edit the /etc/fstab file for the newly created file systems by adding entries such as:
/dev/sdb1 /opt/oracle/oradata ext3 defaults 1 2
/dev/sdc1 /opt/oracle/recovery ext3 defaults 1 2
9 Type:
mount /dev/sdb1 /opt/oracle/oradata
mount /dev/sdc1 /opt/oracle/recovery
10 Type:
chown -R oracle.dba oradata recovery
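As a quick sanity check (a minimal sketch assuming the mount points and devices used above), the following commands confirm that both file systems are mounted and owned by oracle:
df -h /opt/oracle/oradata /opt/oracle/recovery
ls -ld /opt/oracle/oradata /opt/oracle/recovery
Both directories should show ext3 mounts and oracle.dba ownership.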
Configuring Database Storage Using Oracle ASM
The following example assumes that you have two storage devices (sdb and sdc) available to create a disk group for the database files and a disk group for flash back recovery and archive log files, respectively.
1 Log in as root.
2 Create a primary partition for the entire device by typing:
fdisk /dev/sdb
3 Create a primary partition for the entire device by typing:
fdisk /dev/sdc
Configuring ASM Storage Using the RAW Device Interface
1 Edit the /etc/sysconfig/rawdevices file and add the following lines:
/dev/raw/ASM1 /dev/sdb1
/dev/raw/ASM2 /dev/sdc1
2 Restart the RAW Devices Service by typing:
service rawdevices restart
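To confirm that the bindings are active, you can query the raw device mappings (raw is part of util-linux on this release):
raw -qa
The output lists each bound raw device with the major and minor numbers of its block device; /dev/raw/ASM1 and /dev/raw/ASM2 should both appear.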

Configuring Database Storage Using the Oracle ASM Library Driver

This section provides procedures for configuring the storage device using the ASM library driver.
NOTE: Before you configure the ASM Library Driver, disable SELinux.
To temporarily disable SELinux, perform the following steps:
1 Log in as root.
2 At the command prompt, type:
setenforce 0
To permanently disable SELinux, perform the following steps:
1 Open your grub.conf file.
2 Locate the kernel command line and append the following option:
selinux=0
For example:
kernel /vmlinuz-2.6.9-34.ELlargesmp ro root=LABEL=/ apic rhgb quiet selinux=0
3 Reboot your system.
4 Open a terminal window and log in as root.
5 Perform the following steps:
a Type:
service oracleasm configure
b Type the following input for all the nodes:
Default user to own the driver interface [ ]: oracle
Default group to own the driver interface [ ]: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
6 In the terminal window, type the following:
service oracleasm createdisk ASM1 /dev/sdb1
service oracleasm createdisk ASM2 /dev/sdc1
7 Repeat step 4 through step 6 for any additional ASM disks that you need to create.
8 Verify that the ASM disks are created and marked for ASM usage.
In the terminal window, type the following and press <Enter>:
service oracleasm listdisks
The disks you created in step 6 are listed in the terminal window.
For example:
ASM1
ASM2
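If listdisks returns nothing, two quick checks can help narrow down the problem (a sketch; the /dev/oracleasm mount point is where the library driver normally exposes marked disks, but verify the path on your system):
service oracleasm status
ls /dev/oracleasm/disks
The first command reports whether the driver is loaded and /dev/oracleasm is mounted; the marked disks (for example, ASM1 and ASM2) should appear in the listing.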

Installing Oracle Database 10g

Perform the following steps to install Oracle 10g:
1 Log in as root.
2 As the user root, mount the Oracle Database 10g CD.
3 Start the X Window System by typing:
startx
4 Open a terminal window and type:
xhost +
5 Log in as oracle.
6 Start the Oracle Universal Installer.
In the terminal window, type the following and press <Enter>:
<CD_mountpoint>/runInstaller
7 In the Select Installation Method window, click Advanced Installation and click Next.
8 In the Select Installation Type window, click Enterprise Edition and click Next.
9 In the Specify Home Details window, in the Path field, ensure that the path is:
/opt/oracle/product/10.2.0/db_1
10 Click Next.
11 In the Product-Specific Prerequisite Checks window, click Next.
12 When the Warning message appears, ignore the message and click Yes.
13 In the Select Configuration Option window, click Install Database Software Only and click Next.
14 In the Summary window, click Install.
15 When prompted, open a terminal window and run root.sh.
A brief progress window appears, followed by the End of Installation window.
16 Click Exit and confirm by clicking Yes.
17 Log in as root.
18 Type:
/opt/oracle/product/10.2.0/db_1/bin/localconfig add
The installation procedure is completed.
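Before working with the new installation as the user oracle, the environment typically needs ORACLE_HOME and PATH set. A minimal sketch, assuming the installation path used above (adjust if your site differs):
echo "export ORACLE_HOME=/opt/oracle/product/10.2.0/db_1" >> /home/oracle/.bash_profile
echo 'export PATH=$ORACLE_HOME/bin:$PATH' >> /home/oracle/.bash_profile
source /home/oracle/.bash_profile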

Installing the Oracle Database 10g 10.2.0.2 Patchset

Downloading and Extracting the Installation Software
1 Log in as oracle.
2 Create a folder for the patches and utilities at /opt/oracle/patches.
3 Open a web browser and navigate to the Oracle Metalink website at metalink.oracle.com.
4 Log in to your Oracle Metalink account.
5 Search for the patch number 4547817 with Linux x86-64 (AMD64/EM64T) as the platform.
6 Download the patch to the /opt/oracle/patches directory.
7 To unzip the downloaded zip file, type the following in a terminal window and press <Enter>:
unzip p4547817_10202_LINUX-x86-64.zip
Upgrading the Database Software
1 Open a terminal window.
2 Log in as oracle.
3 Ensure that ORACLE_HOME is set to /opt/oracle/product/10.2.0/db_1.
4 As the user root, stop the cssd process. To do so, type the following and press <Enter>:
/etc/init.d/init.cssd stop
NOTE: This procedure may take a few minutes to complete.
5 Start the Oracle Universal Installer. To do so, type the following in the terminal window, and press <Enter>:
/opt/oracle/patches/Disk1/runInstaller
The Welcome screen appears.
6 Click Next.
7 In the Specify Home Details screen, click Next.
8 In the Specify Hardware Cluster Installation Mode screen, click Next.
9 In the Summary screen, click Install.
The Oracle Universal Installer scans your system, displays all the patches that are required to be installed, and installs them on your system. When the installation is completed, the End of Installation screen appears.
Next, a message window appears, prompting you to run root.sh as user root.
10 In a terminal window, type the following and press <Enter>:
/opt/oracle/product/10.2.0/db_1/root.sh
11 Press <Enter> to accept the default answers to the questions generated by root.sh.
NOTE: This procedure may take several minutes to complete.
12 When you complete executing root.sh, go back to the Execute Configuration Scripts window and click OK.
13 In the End of Installation screen, click Exit.
14 Click Yes to exit the Oracle Universal Installer.
15 Restart the cssd process. To do so, type the following and press <Enter>:
/etc/init.d/init.cssd start
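Once an instance is available, you can confirm that the patchset applied by checking the software banner (a quick check using the standard v$version view):
sqlplus "/ as sysdba"
SQL> SELECT banner FROM v$version;
The banner should report release 10.2.0.2.0.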

Configuring the Listener

1 Log in as root.
2 Start the X Window System by typing:
startx
3 Open a terminal window and type:
xhost +
4 Log in as oracle.
5 Type netca to start the Oracle Net Configuration Assistant.
6 Accept the default settings and click Next on all the screens to complete the listener configuration.
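You can confirm the listener is up with the listener control utility that ships with the database software:
lsnrctl status
The output reports the listening endpoints and how long the listener has been running.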

Creating the Seed Database

Creating the Seed Database Using ext3 File System
Perform the following steps to create a seed database with the DBCA:
1 Log in as oracle.
2 Start the Oracle DBCA by typing:
dbca
3 In the Welcome window, click Next.
4 In the Operations window, click Create a Database and click Next.
5 In the Database Templates window, click Custom Database and click Next.
6 In the Database Identification window, type the name of the database that you are creating in the Global Database Name and the SID Prefix fields, and click Next.
7 In the Management Options window, click Next.
8 In the Database Credentials window, complete password selections and entries and click Next.
9 In the Storage Options window, click File System and click Next.
10 In the Database File Locations window, click Next.
11 In the Recovery Configuration window, click Browse, select the flashback recovery area that you created in "Configuring Database Storage Using the ext3 File System" (for example, /opt/oracle/recovery), change the Flash Recovery Area size as needed, and click Next.
12 In the Database Content window, click Next.
13 In the Initialization Parameters window, click Next.
14 In the Database Storage window, click Next.
15 In the Creation Options window, click Create Database and click Finish.
16 In the Confirmation window, click OK to create the database.
NOTE: The seed database creation may take more than an hour to complete.
When the database creation procedure is completed, the Password Management window appears.
17 Click Exit.
18 Type:
export ORACLE_SID=dbname
where dbname is the global identifier name that you defined for the database in DBCA.
19 To verify that the database is operating, perform the following steps:
a Type sqlplus "/ as sysdba" to display the SQL> prompt.
b Type the following query at the SQL> prompt:
SQL> SELECT * FROM v$instance;
c If the database is not running and you receive an error message, type startup at the SQL> prompt to start the database instance on the node.
Creating the Seed Database Using Oracle ASM
If you configured your storage using Oracle ASM, perform the following steps to create a seed database with the DBCA:
1 As the user oracle, start DBCA by typing:
dbca &
2 In the Welcome window, click Next.
3 In the Operations window, click Create a Database and click Next.
4 In the Database Templates window, click Custom Database and click Next.
5 In the Database Identification window, enter a Global Database Name such as oradb and click Next.
6 In the Management Options window, click Next.
7 In the Database Credentials window, click Use the Same Password for All Accounts, complete password entries, and click Next.
8 In the Storage Options window, click ASM and click Next.
9 In the Create ASM Instance window, enter the password for user SYS and click Next.
10 When a message appears indicating that DBCA is ready to create and start the ASM instance, click OK.
11 In the ASM Disk Groups window, under Available Disk Groups, click Create New.
12 In the Create Disk Group window, enter the storage information for the database files and click OK.
a Enter a name for the disk group to be created, such as databaseDG, select External Redundancy, and select the disks to include in the disk group.
b If you are using the RAW device interface, select /dev/raw/ASM1.
c If you are using the ASM library driver and you cannot access the candidate disks, click Change Disk Discovery String, type ORCL:* as the string, and then select ASM1.
d If you are using the ASM library driver and the candidate disks are not listed, click Change Discover String and enter ORCL:* as the string.
A window appears indicating that disk group creation is in progress.
13 Under Available Disk Groups, click Create New.
14 In the Disk Group window, enter the information for the flashback recovery files and click OK.
a Enter a name for the disk group to be created, such as flashbackDG, select External Redundancy, and select the disks to include in the disk group.
b If you are using the RAW device interface, select /dev/raw/ASM2.
c If you are using the ASM library driver and you cannot access the candidate disks, click Change Disk Discovery String, type ORCL:* as the string, and then select ASM2.
A window appears indicating that disk group creation is in progress.
15 In the ASM Disk Groups window, check the disk group that you would like to use for Database Storage (for example, databaseDG) and click Next.
16 In the Database File Locations window, check Use Common Location for All Database Files, and click Next.
17 In the Recovery Configuration window, click Browse, select the flashback group that you created in step 14 (for example, flashbackDG), change the Flash Recovery Area size as needed, and click Next.
18 In the Database Content window, click Next.
19 In the Initialization Parameters window, select Typical and click Next.
20 In the Database Storage window, click Next.
21 In the Creation Options window, select Create Database and click Finish.
22 In the Confirmation window, click OK to create the database.
NOTE: Creating the seed database may take more than an hour.
When the database creation is completed, the Password Management window appears.
23 Click Exit.
24 When database creation is completed, type the following commands to add the ORACLE_SID environment variable entry in the oracle user profile:
echo "export ORACLE_SID=oradb" >> /home/oracle/.bash_profile
source /home/oracle/.bash_profile
This example assumes that oradb is the global database name that you defined in DBCA.
NOTE: See the section "Securing Your System" and follow the steps for additional security setup.
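To confirm that the ASM instance mounted both disk groups, you can query the standard v$asm_diskgroup view (a quick check; the names reflect the examples used above):
sqlplus "/ as sysdba"
SQL> SELECT name, state, total_mb FROM v$asm_diskgroup;
Both disk groups (for example, databaseDG and flashbackDG) should be listed.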

Adding and Removing Nodes

This section describes the steps to add a node to an existing cluster and the steps to remove a node from a cluster.
NOTE: The new node must have the same hardware and operating system configuration as the existing node(s).
To add a node to an existing cluster:
Add the node to the network layer.
Configure shared storage.
Add the node to Oracle Clusterware, database, and the database instance layers.
To remove a node from an existing cluster, reverse the process by removing the node from the database instance, the database, and the Oracle Clusterware layers.
For more information about adding an additional node to an existing cluster, see the Oracle Real Application Clusters 10g Administration document on the Oracle website at www.oracle.com.

Adding a New Node to the Network Layer

To add a new node to the network layer:
1 Install the Red Hat Enterprise Linux operating system on the new node. See "Installing and Configuring Red Hat Enterprise Linux."
2 Configure the public and private networks on the new node. See "Configuring the Public and Private Networks."
3 Verify that each node can detect the storage LUNs or logical disks. See "Verifying the Storage Configuration."

Configuring Shared Storage on the New Node

To extend an existing RAC database to your new nodes, configure storage for the new nodes so that the storage is the same as on the existing nodes. This section provides the appropriate procedures for ASM.
Configuring Shared Storage Using ASM
If you are using ASM, ensure that the new nodes can access the ASM disks with the same permissions as the existing nodes.
To configure the ASM disks:
1 Log in as root.
2 At the command prompt, type:
setenforce 0
To permanently disable SELinux:
1 Open your grub.conf file.
2 Locate the kernel command line and append the following option:
selinux=0
For example:
kernel /vmlinuz-2.6.9-34.ELlargesmp ro root=LABEL=/ apic rhgb quiet selinux=0
3 Reboot your system.
4 Open a terminal window and log in as root.
5 Copy the /etc/sysconfig/rawdevices file from one of the existing nodes to the same location on the new node.
6 If you are using the RAW device interface for ASM, type service rawdevices restart to restart the RAW Devices Service.
7 Open a terminal window and perform the following steps on the new node:
a Type service oracleasm configure.
b Type the following inputs for all the nodes:
Default user to own the driver interface [ ]: oracle
Default group to own the driver interface [ ]: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
8 Ensure that the new node can access the ASM disks.
In the terminal, type the following and press <Enter>:
service oracleasm scandisks
9 Ensure that the ASM disks are available on the new node.
In the terminal window, type the following and press <Enter>:
service oracleasm listdisks
All available disks on the remaining nodes are listed.
For example:
ASM1
ASM2

Adding a New Node to the Oracle Clusterware Layer

1 Log in as oracle into one of the existing nodes.
2 From the /crs/oracle/product/10.2.0/crs/oui/bin directory of this existing node, type ./addNode.sh to start the Oracle Universal Installer.
3 In the Welcome window, click Next.
4 In the Specify Cluster Nodes to Add to Installation window, enter the public and private node names for the new node and click Next.
If all the network and storage verification checks pass, the Cluster Node Addition Summary window appears.
5 Click Install.
The Cluster Node Addition Progress window displays the status of the node addition process.
6 When prompted to run root.sh, run /crs/oracle/product/10.2.0/crs/install/rootaddnode.sh on the local node and root.sh on the new node as user root.
7 When rootaddnode.sh and root.sh finish running, in the Execute Configuration Scripts window, click OK.
8 In the End of Cluster Node Addition window, click Exit and click Yes in the Exit window.

Adding a New Node to the Database Layer

1 Log in as oracle into one of the existing nodes.
2 From the /opt/oracle/product/10.2.0/db_1/oui/bin directory of this existing node, type ./addNode.sh to start the Oracle Universal Installer.
3 In the Welcome window, click Next.
4 In the Specify Cluster Nodes for Node Addition window, verify that the new node is selected and click Next.
If all the verification checks pass, the Cluster Node Addition Summary window appears.
5 Click Install.
The Cluster Node Addition Progress window displays the status of the node addition process.
6 When prompted, as user root run /opt/oracle/product/10.2.0/db_1/root.sh on the new node, and press <Enter> when asked to enter the full path name of the local bin directory.
7 When root.sh finishes running, in the Execute Configuration Scripts window, click OK.
8 In the End of Installation window, click Exit and click Yes when asked to confirm.

Reconfiguring the Listener

This section describes the steps to reconfigure the listener, which is required for remote client connection to a database.
NOTE: The steps below assume that you are willing to stop the listener to reconfigure the existing listener. Otherwise, the steps may be slightly different from the steps below.
On one node only, perform the following steps:
1 Log in as root.
2 Start the X Window System by typing:
startx
3 Open a terminal window and type:
xhost +
4 As user oracle, stop the listener by typing:
lsnrctl stop
5 When this is successful, type netca to start the Net Configuration Assistant.
6 Select Cluster Configuration and click Next.
7 In the Real Application Clusters, Active Nodes window, select Select All Nodes and click Next.
8 In the Welcome window, select Listener Configuration and click Next.
9 In the Listener Configuration→ Listener window, select Reconfigure and click Next.
10 In the Listener Configuration→ Select Listener window, select LISTENER from the pull-down menu and click Next.
11 In the Listener Configuration→ Select Protocols window, select TCP and click Next.
12 In the Listener Configuration→ TCP/IP Protocol window, select Use the standard port number of 1521 and click Next.
13 In the Listener Configuration→ More Listeners? window, select No and click Next.
14 In the Listener Configuration Done window, click Next.
15 Click Finish.

Adding a New Node to the Database Instance Layer

1 On one of the existing nodes, as user oracle, start DBCA by typing:
dbca &
2 In the Welcome window, click Next.
3 In the Operations window, click Instance Management and click Next.
4 In the Instance Management window, click Add Instance and click Next.
5 In the List of Cluster Databases window, select the existing database.
If your user name is not operating system-authenticated, the DBCA prompts you for a user name and password for a database user with SYSDBA privileges.
6 Enter the user name sys and the password, and click Next.
The List of Cluster Database Instances window appears, showing the instances associated with the RAC database that you selected and the status of each instance.
7 In the List of Cluster Database Instances window, click Next.
8 In the Instance Naming and Node Selection window, enter the instance name at the top of the window, select the new node name, and click Next.
9 In the Instance Storage window, click Finish.
10 In the Summary window, click OK to add the database instance.
A progress bar appears, followed by a message asking if you want to extend ASM to the new node(s).
11 Click Yes.
The following message appears:
Do you want to perform another operation?
12 Click No.
13 On any node, determine that the instance is successfully added by typing:
srvctl status database -d <database name>
NOTE: See the section "Securing Your System" and follow the steps for additional security setup.

Removing a Node From the Cluster

When you perform the procedures in this section, ensure that you select and remove the correct node from the cluster.
Deleting the Node From the Database Instance Layer
1 Log in as oracle.
2 From one of the remaining nodes, type:
dbca &
3 In the Welcome window, click Next.
4 In the Operations window, click Instance Management and click Next.
5 In the Instance Management window, click Delete an instance and click Next.
6 In the List of Cluster Databases window, select a RAC database from which to delete an instance.
If your user name is not operating system-authenticated, the DBCA prompts you for a user name and password for a database user with SYSDBA privileges.
7 Enter the user name sys and the password, and click Next.
The List of Cluster Database Instances window appears, showing the instances associated with the RAC database that you selected and the status of each instance.
8 Select the instance to delete and click Finish.
This instance cannot be the local instance from where you are running DBCA. If you select the local instance, the DBCA displays an Error dialog. If this occurs, click OK, select another instance, and click Finish.
If services are assigned to this instance, the DBCA Services Management window appears. Use this window to reassign services to other instances in the cluster database.
9 In the Summary window, click OK.
10 Verify the information about the instance deletion operation and click OK.
A progress bar appears while DBCA removes the instance and its Oracle Net configuration. When the operation is completed, a dialog prompts whether you want to perform another operation.
11 Click No to exit.
12 Verify that the node was removed by typing:
srvctl config database -d <database name>
Reconfiguring the Listener
1 Type netca.
2 In the Real Application Clusters→ Configuration window, select Cluster Configuration and click Next.
3 In the Real Application Clusters→ Active Nodes window, select the node that you want to delete and click Next.
4 In the Welcome window, select Listener Configuration and click Next.
5 In the Listener Configuration→ Listener window, select Delete and click Next.
6 In the Listener Configuration→ Select Listener window, select LISTENER and click Next.
When the message Are you sure you want to delete listener LISTENER? appears, click Yes.
7 In the Listener Configuration→ Listener Deleted window, click Next.
8 In the Listener Configuration Done window, click Next.
9 Click Finish.
To Stop and Remove ASM From the Node That is Deleted
On one of the remaining nodes, perform the following steps:
1 Open a terminal window.
2 Type:
srvctl stop asm -n <node_name>
where <node_name> is the node you want to remove from the cluster.
3 Type:
srvctl remove asm -n <node_name>
where <node_name> is the node you want to remove from the cluster.
Deleting a Node From the Database Layer
1 On the node being deleted, log in as oracle.
2 Type the following command, using the public name of the node you are deleting (for example, if you are removing node3-pub):
srvctl stop nodeapps -n node3-pub
Ignore error CRS-0210, which complains about the listener.
3 On the node being deleted, log in as root.
4 If you wish to remove the Oracle Database software, type the following command:
rm -rf /opt/oracle/product/10.2.0/db_1/*
Removing a Node From the Oracle Clusterware Layer
1 On the node that you are deleting, as user root, disable CRS by typing the following command:
/crs/oracle/product/10.2.0/crs/install/rootdelete.sh remote nosharedvar
2 On one of the remaining nodes, as user root, type the following command:
/crs/oracle/product/10.2.0/crs/install/rootdeletenode.sh <public-nodename>,<node-number>
where <public-nodename> is the public name and <node-number> is the node number of the node being deleted.
To determine the node number of any node, type the following command:
/crs/oracle/product/10.2.0/crs/bin/olsnodes -n
3 On the node that you are deleting, if you wish to remove the Oracle CRS software, type the following command:
rm -rf /crs/oracle/product/10.2.0/crs/*

Reinstalling the Software

NOTICE: Reinstalling the software erases all information on the hard drives.
NOTICE: You must disconnect all external storage devices from the system before you reinstall the software.
NOTICE: Dell recommends that you perform regular backups of your database and individual nodes so that you do
not lose valuable data. Reinstall the node software only if you have no other options.
Installing the software using the Dell Deployment CD created a redeployment partition on your hard drive that contains all of the software images that were installed on your system. The redeployment partition allows for quick redeployment of the Oracle software.
Reinstalling the software by using this method requires that you boot the system to the redeployment partition. When the system boots to this partition, it automatically reinstalls the Red Hat Linux operating system.
To reinstall software using this method, perform the following steps:
To reinstall software using this method, perform the following steps:
1 Disconnect the external storage device.
2 Log in as root on the system on which you want to reinstall the software.
3 Edit the grub configuration file by typing:
vi /etc/grub.conf
and press <Enter>.
4 In the file, change the default to 3.
5 Save the file and restart your system.
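For reference, a minimal sketch of what the edited line in /etc/grub.conf looks like, assuming the redeployment partition is the fourth entry in the boot menu (grub counts entries from 0):
default=3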
For information about configuring the system for use, see "Configuring Red Hat Enterprise Linux" and continue through the remaining sections to reconfigure your system.

Additional Information

Supported Software Versions

Table 1-7 lists the supported software at the time of release. For the latest supported hardware and software, see the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g and download the Oracle Database 10g EM64T Version 2.0 Solution Deliverable List for the latest supported versions.
Table 1-7. Supported Software Versions
Software Component Supported Versions
Red Hat Enterprise Linux AS EM64T (Version 4) Update 3 kernel 2.6.9-34.ELsmp, 2.6.9-34.ELlargesmp
Oracle Database version 10.2.0.2
PowerPath for Linux 4.5.1
DKMS 2.0.11-1
QLogic HBA QLE2362 (QLA2322) 8.01.02-d4
QLogic HBA QLE2460 (QLA2400) 8.01.02-d4
QLogic HBA QLE2462 (QLA2400) 8.01.02-d4
Emulex HBA LP1000 & LP1150e (lpfc) 8.0.16.18
PERC 4e/Si, PERC 4e/Di (megaraid_mbox) 2.20.4.6
PERC 5/e, PERC 5/I (megaraid_sas) 00.00.02.00
Intel PRO/100 S NIC driver (e100) 6.1.16-k3-NAPI
Intel PRO/1000 XT/MT/MT DP NIC driver (e1000) 6.1.16-k3-NAPI
Broadcom NetXtreme BCM5704 (tg3) 3.43-rh
Broadcom NetXtreme BCM5708 (bnx2) 1.4.36b

Determining the Private Network Interface

To determine which interface device name is assigned to each network interface, perform the following steps:
1 Determine the types of NICs in your system.
See Table 1-8 to identify which integrated NICs are present in your system.
For add-in NICs, you may have Intel PRO/100 family or PRO/1000 family cards or Broadcom NetXtreme Gigabit cards. You may have to open your system and view the add-in cards to identify your card.
Table 1-8. Integrated NICs
System Integrated NICs Driver Name
PowerEdge 1950 Broadcom NetXtreme II BCM5708 bnx2
PowerEdge 2950 Broadcom NetXtreme II BCM5708 bnx2
PowerEdge 2900 Broadcom NetXtreme II BCM5708 bnx2
PowerEdge 1850 Intel PRO/1000 e1000
PowerEdge 2850 Intel PRO/1000 e1000
PowerEdge 6850 Broadcom NetXtreme BCM5704 tg3
2 Verify that a Broadcom NetXtreme Gigabit or Intel PRO/1000 family NIC is connected with a Cat 5e cable to the Gigabit Ethernet switch; this is your private NIC.
3 Determine the driver module your private NIC uses (see Table 1-8 above).
4 View the /etc/modprobe.conf file by typing:
more /etc/modprobe.conf
Several lines appear with the format alias ethx driver-module, where x is the Ethernet interface number and driver-module is the module you determined in step 3.
For example, the line alias eth1 tg3 appears if your operating system assigned eth1 to a Broadcom NetXtreme Gigabit NIC.
5 Determine which Ethernet interfaces (ethx) are assigned to the type of Gigabit NIC that is connected to the Gigabit switch.
If only one entry exists in /etc/modprobe.conf for your driver module type, then you have successfully identified the private network interface.
6 If you have more than one of the same type of NIC in your system, experiment to determine which Ethernet interface is assigned to each NIC.
For each Ethernet interface, follow the steps in "Configuring the Private Network Using Bonding" for the correct driver module until you have identified the correct Ethernet interface.
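If modprobe.conf is ambiguous (for example, multiple NICs of the same type), querying the driver bound to an interface can also help; ethtool is included with Red Hat Enterprise Linux, and eth1 below is only an example:
ethtool -i eth1
The driver: line of the output should match the module from Table 1-8 (for example, tg3).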

Troubleshooting

Table 1-9 provides recommended actions for problems that you may encounter while deploying and using your Red Hat Enterprise Linux and Oracle software.
Table 1-9. Troubleshooting

Category: Database
Problem / Symptom: Nodes that start up correctly with Patchset 10.2.0.1 may fail to start up with ORA-4031 errors in Patchset 10.2.0.2.
Cause: RAC instances with greater than 4 GB db_cache_sizes.
Corrective Action: Set the variable _ksmg_granule_size=16777216 in the init.ora file.

Category: Database
Problem / Symptom: Lock Manager Service (LMS) crash with ORA-00600 error: internal error code, arguments: [kclastf_1], [2], [].
Cause: Due to Oracle bug 5071492. See the Oracle Metalink website at metalink.oracle.com.
Corrective Action: Apply patch 5071492, available on the Oracle Metalink website at metalink.oracle.com.

Category: Database
Problem / Symptom: The instance can terminate with ORA-600 error [kclcls_5] in the RAC instance.
Cause: Due to Oracle bug 4639236. See the Oracle Metalink website at metalink.oracle.com.
Corrective Action: Apply patch 4639236, available on the Oracle Metalink website at metalink.oracle.com.

Category: Database
Problem / Symptom: ERROR IN KQLMBIVG SEE LCK TRACE FILE [LT] [LB] KJUSERCLIENTLOCK.
Cause: Due to Oracle bug 4690794. See the Oracle Metalink website at metalink.oracle.com.
Corrective Action: Apply patch 4690794, available on the Oracle Metalink website at metalink.oracle.com.

Category: Database
Problem / Symptom: LMD0 PROCESS RECEIVED OS SIGNAL #11.
Cause: Due to Oracle bug 5036588. See the Oracle Metalink website at metalink.oracle.com.
Corrective Action: Apply patch 5036588, available on the Oracle Metalink website at metalink.oracle.com.

Category: Performance and stability
Problem / Symptom: Red Hat Enterprise Linux exhibiting poor performance and instability. Excessive use of swap space.
Cause: The Oracle System Global Area (SGA) exceeds the recommended size.
Corrective Action:
• Ensure that the SGA size does not exceed 65% of total system RAM.
• Type free at a command prompt to determine total RAM, and reduce the values of the db_cache_size and shared_pool_size parameters in the Oracle parameter file accordingly.

Category: Enterprise Manager
Problem / Symptom: The Enterprise Manager agent goes down or fails.
Cause: The Enterprise Manager repository is not populated.
Corrective Action: Type the following to recreate the configuration file and repository for the DB Console:
emca -config dbcontrol db -repos recreate
For detailed instructions, see Oracle Metalink Note 330976.1.

Category: Performance and stability
Problem / Symptom: Unknown interface type warning appears in the Oracle alert file. Poor system performance.
Cause: The public interface is configured as cluster communications (private interface).
Corrective Action: Force cluster communications to the private interface by performing the following steps on one node:
1 Log in as oracle.
2 Type sqlplus "/ as sysdba" at the command prompt. The SQL> prompt appears.
3 Enter the following lines at the SQL> prompt:
alter system set cluster_interconnects='<private IP address node1>' scope=spfile sid='<SID1>'
alter system set cluster_interconnects='<private IP address node2>' scope=spfile sid='<SID2>'
Continue entering lines for each node in the cluster.
4 Restart the database on all nodes by typing the following lines:
srvctl stop database -d <dbname>
srvctl start database -d <dbname>
5 Open the /opt/oracle/admin/<dbname>/bdump/alert_<SID>.log file and verify that the private IP addresses are being used for all instances.

Category: NETCA
Problem / Symptom: NETCA fails, resulting in database creation errors.
Cause: The public network, hostname, or virtual IP is not listed in the /etc/hosts.equiv file.
Corrective Action: Before launching netca, ensure that a hostname is assigned to the public network and that the public and virtual IP addresses are listed in the /etc/hosts.equiv file.

Category: NETCA
Problem / Symptom: NETCA cannot configure remote nodes, or a RAW device validation error occurs while running DBCA.
Cause: The /etc/hosts.equiv file either does not exist or does not include the assigned public or virtual IP addresses.
Corrective Action: Verify that the /etc/hosts.equiv file on each node contains the correct public and virtual IP addresses. Try to rsh to other public names and VIP addresses as the user oracle.

Category: CRS
Problem / Symptom: CRS gives up prematurely when trying to start.
Cause: Due to Oracle bug 4698419. See the Oracle Metalink website at metalink.oracle.com.
Corrective Action: Apply patch 4698419, available on the Oracle Metalink website at metalink.oracle.com.

Category: CRS
Problem / Symptom: The Oracle Clusterware installation procedure fails.
Cause: EMC PowerPath device names are not uniform across the nodes.
Corrective Action: Before you install Oracle Clusterware, restart PowerPath and ensure that the PowerPath device names are uniform across the nodes.

Category: CRS
Problem / Symptom: CRS fails to start when you reboot the nodes or type /etc/init.d/init.crs start.
Cause: The Cluster Ready Services CSS daemon cannot write to the quorum disk.
Corrective Action:
• Attempt to start the service again by rebooting the node or by running root.sh from /crs/oracle/product/10.2.0/crs/.
• Verify that each node has access to the quorum disk and that the user root can write to the disk.
• Check the last line in the file $ORA_CRS_HOME/css/log/ocssd.log.
• If you see clssnmvWriteBlocks: Failed to flush writes to (votingdisk), verify the following:
– The /etc/hosts file on each node contains correct IP addresses for all node hostnames, including the virtual IP addresses.
– You can ping the public and private hostnames.
– The quorum disk is writable.

Category: CRS
Problem / Symptom: When you run root.sh, CRS fails to start.
Cause: Check and make sure you have public and private node names defined and that you can ping the node names.
Corrective Action: Attempt to start the service again by rebooting the node or by running root.sh from /crs/oracle/product/10.2.0/crs/ after correcting the networking issues.

Category: CRS
Problem / Symptom: When you run root.sh, CRS fails to start.
Cause: The OCR file and Voting disk are inaccessible.
Corrective Action: Correct the I/O problem and attempt to start the service again by rebooting the node or by running root.sh from /crs/oracle/product/10.2.0/crs/.

Category: CRS
Problem / Symptom: When you run root.sh following reinstallation, CRS fails to start.
Cause: The OCR file and Voting disk have not been cleared and contain old information.
Corrective Action:
1 Clear the OCR and Voting disks by typing the following lines:
dd if=/dev/zero of=/dev/raw/ocr.dbf
dd if=/dev/zero of=/dev/raw/votingdisk
2 Attempt to start the service again by rebooting the node or by running root.sh from /crs/oracle/product/10.2.0/crs/.

Category: CRS
Problem / Symptom: When you run root.sh, CRS fails to start.
Cause: The user oracle does not have permissions on /var/tmp (specifically /var/tmp/.oracle).
Corrective Action:
1 Make user oracle the owner of /var/tmp/.oracle by typing:
chown oracle.oinstall /var/tmp/.oracle
2 Attempt to start the service again by rebooting the node or by running root.sh from /crs/oracle/product/10.2.0/crs/.

Category: CRS
Problem / Symptom: When you run root.sh, CRS fails to start.
Cause: Other CRS troubleshooting steps are attempted without success.
Corrective Action:
1 Enable debugging by adding the following line to root.sh:
set -x
2 Attempt to start the service again by running root.sh from /crs/oracle/product/10.2.0/crs/.
3 Check log files in the following directories to diagnose the issue:
$ORA_CRS_HOME/crs/log
$ORA_CRS_HOME/crs/init
$ORA_CRS_HOME/css/log
$ORA_CRS_HOME/css/init
$ORA_CRS_HOME/evm/log
$ORA_CRS_HOME/evm/init
$ORA_CRS_HOME/srvm/log
4 Check /var/log/messages for any error messages regarding CRS init scripts.
5 Capture all log files for support diagnosis.

Category: CRS
Problem / Symptom: Node continually reboots.
Cause: The node does not have access to the quorum disk on shared storage.
Corrective Action:
1 Start Linux in single user mode.
2 Type:
/etc/init.d/init.crs disable
3 Verify that the quorum disk is available and the private interconnect is alive.
4 Reboot and type /etc/init.d/init.crs enable.

Category: CRS
Problem / Symptom: Node continually reboots.
Cause: The private interconnect is down.
Corrective Action:
1 Start Linux in single user mode.
2 Type:
/etc/init.d/init.crs disable
3 Verify that the node can ping over the private interconnect to the remaining nodes in the cluster.
4 Type:
/etc/init.d/init.crs enable
5 Reboot your system.
6 In some cases, the network has a latency of up to 30 seconds before it can ping the remaining nodes in the cluster after reboot. If this situation occurs, add the following line to the beginning of your /etc/init.d/init.crs file and reboot your system:
/bin/sleep 30

Category: DBCA
Problem / Symptom: There is no response when you click OK in the DBCA Summary window.
Cause: Java Runtime Environment timing issue.
Corrective Action: Click again. If there is still no response, restart DBCA.

Category: Software installation
Problem / Symptom: You receive dd failure error messages while installing the software using Dell Deployment CD 1.
Cause: Using copies, rather than the original Red Hat CDs.
Corrective Action: When burning the CD images (ISOs), use the proper options, such as -dao if using the cdrecord command.

Category: Software installation
Problem / Symptom: When connecting to the database as a user other than oracle, you receive the error messages ORA01034: ORACLE not available and Linux Error 13: Permission denied.
Cause: Required permissions are not set on the remote node.
Corrective Action: On all remote nodes, as user root, type:
chmod 6751 $ORACLE_HOME

Category: Software installation
Problem / Symptom: Oracle software fails to install on the nodes.
Cause: The nodes' system clocks are not identical.
Corrective Action: Perform one of the following procedures:
• Ensure that the system clock on the Oracle software installation node is set to a later time than the remaining nodes.
• Configure one of your nodes as an NTP server to synchronize the remaining nodes in the cluster.

Category: Software installation
Problem / Symptom: When you run root.sh, the utility fails to format the OCR disk.
Cause: This issue is documented in Oracle Metalink under bug 4679769.
Corrective Action: Download and apply Oracle patch 4679769, found on the Oracle Metalink website at metalink.oracle.com.

Category: Networking
Problem / Symptom: The cluster verification check fails.
Cause: Your public network IP address is not routable (for example, 192.168.xxx.xxx).
Corrective Action: Assign a valid, routable public IP address.

Category: Fibre Channel storage system
Problem / Symptom: You receive I/O errors and warnings when you load the Fibre Channel HBA driver module.
Cause: The HBA driver, BIOS, or firmware needs to be updated.
Corrective Action: Check the Solution Deliverable List on the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g for the supported versions. Update as required the driver, BIOS, and firmware for the Fibre Channel HBAs.

Category: ASM Library Driver
Problem / Symptom: When you type service oracleasm start, the procedure fails.
Cause: SELinux is enabled.
Corrective Action: Disable SELinux by following the steps in the section "Configuring Shared Storage for Oracle Clusterware."

Category: Operating System
Problem / Symptom: When you add a new peripheral device to your PowerEdge system, the operating system does not recognize the device.
Cause: Kudzu is disabled.
Corrective Action: Manually run Kudzu after you add the new peripheral to your system.

Getting Help

Dell Support

For detailed information on the use of your system, see the documentation that came with your system components.
For white papers, Dell supported configurations, and general information, visit the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g.
For Dell technical support for your hardware and operating system software and to download the latest updates for your system, visit the Dell Support website at support.dell.com. Information about contacting Dell is provided in your system’s Installation and Troubleshooting Guide.
Dell Enterprise Training and Certification is now available; see www.dell.com/training for more information. This training service may not be offered in all locations.

Oracle Support

For training information on your Oracle software and application clusterware, see the Oracle website at www.oracle.com, or see your Oracle documentation for information about contacting Oracle.
Technical support, downloads, and other technical information are available at the Oracle MetaLink website at metalink.oracle.com.

Obtaining and Using Open Source Files

The software contained on the Dell Deployment CD is an aggregate of third-party programs as well as Dell programs. Use of the software is subject to designated license terms. All software that is designated as "under the terms of the GNU GPL" may be copied, distributed, and/or modified in accordance with the terms and conditions of the GNU General Public License, Version 2, June 1991. All software that is designated as "under the terms of the GNU LGPL" (or "Lesser GPL") may be copied, distributed, and/or modified in accordance with the terms and conditions of the GNU Lesser General Public License, Version 2.1, February 1999. Under these GNU licenses, you are also entitled to obtain the corresponding source files by contacting Dell at 1-800-WWW-DELL. Please see SKU 420-4534 when making such request. You may be charged a nominal fee for the physical act of transferring a copy.

Index

A
adding and removing nodes, 46
additional configuration options
  adding and removing nodes, 46
additional information, 54
  determining the private network interface, 55
ASM
  configuring database storage, 39

B
bonding, 16

C
cluster
  Fibre Channel hardware connections, example, 10
cluster setup
  Fibre Channel, 10
configuring
  database storage (single node), 38
  database storage (single node) using ASM, 39
  database storage (single node) using ext3, 38
  Oracle Database 10g (single node), 38
  Oracle RAC 10g, 15
  Red Hat Enterprise Linux, 9
  shared storage, 20
  shared storage for CRS, 23
configuring Oracle 10g, 10
  verifying hardware and software configurations, 10
configuring Oracle Database 10g (single node), 38, 43
  creating the seed database, 43
configuring Oracle RAC 10g, 15
  creating the seed database, 35
configuring shared storage, 20
configuring shared storage for CRS, 23
configuring the private and public networks, 15
configuring the private network, 16
configuring the public network, 16
creating the seed database, 35, 43
CRS
  installing, 26
CRS configuration, 23

D
deploying Oracle RAC 10g, 15
determining the private network interface, 55
documentation, 7

E
examples
  Fibre Channel cluster hardware connections, 10

F
Fibre Channel cluster setup, 10

G
getting help, 62

H
hardware
  Fibre Channel cluster minimum requirements, 6
  Fibre Channel interconnections, 11
  single-node minimum requirements, 7
hardware and software configurations
  Fibre Channel, 12

I
installing
  CRS, 26
  Oracle Database 10g, 28
  Oracle Database 10g (single node), 41
  Oracle RAC 10g, 25
  Red Hat Enterprise Linux, 8
  using Dell Deployment CD, 8

L
license agreements, 7
listener
  configuration, 33, 43, 49

N
node
  adding and removing, 46
  removing, 51

O
Oracle Database 10g
  installing, 28
  installing (single node), 41
  single node configuration, 38
Oracle RAC 10g
  configuration, 15
  CRS configuration, 23
  installing, 25
  shared storage configuration, 20

P
passwords
  setting, 37
private network
  configuring, 15-16
  determining the interface, 55
public network
  configuring, 15-16

R
Red Hat
  updating system packages, 9
Red Hat Enterprise Linux
  installing, 8
reinstalling
  software, 53
remote shell (rsh)
  disabling, 37
removing a node, 51

S
security, 37
seed database
  creating, 35, 43
  verifying, 37, 44
software
  reinstalling, 53
  requirements, 6, 54
software and hardware requirements, 6
supported storage devices, 54

T
troubleshooting, 56

V
verifying
  hardware configuration, 10
  seed database, 37, 44
  software configuration, 10
  storage configuration, 19
Dell™ PowerEdge™ 系统
Oracle Database 10g 64 位扩展
内存技术 (EM64T) 企业版

Linux 部署指南 2.1.1

www.dell.com | support.dell.com
注和注意
注:注表示可以帮助您更好地使用计算机的重要信息。
注意:注意表示可能会损坏硬件或导致数据丢失,并告诉您如何避免此类问题。
____________________
本说明文件中的信息如有更改,恕不另行通知。
© 2006 Dell Inc.
未经
Dell Inc.
本文中使用的商标: 册商标;
本文件中述及的其它商标和产品名称是指拥有相应商标和名称的公司或其制造的产品。 的其它商标和产品名称不拥有任何专有权。
2006 年 9
版权所有,翻印必究。
书面许可,严禁以任何形式进行复制。
Intel 和 Xeon 是 Intel Corporation
Dell、DELL
月修
徽标和
PowerEdge 是 Dell Inc.
的注册商标;
A01
的商标;
Red Hat 是 Red Hat, Inc.
EMC、PowerPath 和 Navisphere 是 EMC Corporation
的注册商标。
Dell Inc.
的注
对本公司的商标和产品名称之外
目录
Oracle RAC 10g
软件和硬件要求
许可协议 重要说明文件 开始之前
安装和配置
Red Hat Enterprise Linux
部署服务
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
. . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
. . . . . . . . . . . . . . . . . . . . .
使用 Deployment CD 安装 Red Hat Enterprise Linux 配置 Red Hat Enterprise Linux
. . . . . . . . . . . . . . . . . . . . . . . 73
使用 Red Hat Network 对系统软件包进行更新
验证群集硬件与软件配置
光纤信道群集设置 存储系统布线
Oracle RAC 10g
配置存储和网络
配置公共和专用网络 验证存储配置 禁用 SELinux
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
. . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
. . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . 79
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
使用 OCFS2 Oracle 群集件和数据库配置共享存储 使用 ASM Oracle 群集件和数据库配置共享存储
安装
Oracle RAC 10g
开始之前 安装 Oracle 群集件 安装 Oracle Database 10g 软件 RAC 部署后修复程序和增补软件 配置监听程序 使用 OCFS2 创建基础数据库 使用 ASM 创建基础数据库
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
. . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
. . . . . . . . . . . . . . . . . . . . . . 92
. . . . . . . . . . . . . . . . . . . . 93
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
. . . . . . . . . . . . . . . . . . . . . . . 97
. . . . . . . . . . . . . . . . . . . . . . . . 98
69
70
72
. . . . . . . . . . . 72
. . . . . . . . . . . . . 73
74
78
. . . . . . . . . 83
. . . . . . . . . . 86
89
保护系统
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
oracle 用户设置密码
101
. . . . . . . . . . . . . . . . . . . . . . . . . 101
目录 67
配置和部署
配置公共网络 配置数据库存储
Oracle Database 10g
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
(单个节点) . . . . . . . . . . . . . .
使用 Oracle ASM 库驱动程序配置数据库存储 安装 Oracle Database 10g
. . . . . . . . . . . . . . . . . . . . . . . . 105
安装 Oracle Database 10g 10.2.0.2 增补软件集 配置监听程序 创建基础数据库
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
101
. . . . . . . . . . . . 103
. . . . . . . . . . . . . 106
添加和删除节点
将新节点添加到网络层 在新节点上配置共享存储 将新节点添加到 Oracle 群集件层 将新节点添加到数据库层 重新配置监听程序 将新节点添加到数据库实例层 从群集中删除节点
重新安装软件
附加信息
支持的软件版本 确定专用网络接口
故障排除
获得帮助
Dell 支持 Oracle 支持
获取和使用开放源代码文件
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . 111
. . . . . . . . . . . . . . . . . . . . . . . 111
. . . . . . . . . . . . . . . . . . . 112
. . . . . . . . . . . . . . . . . . . . . . . 113
. . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
. . . . . . . . . . . . . . . . . . . . . 114
. . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
. . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
. . . . . . . . . . . . . . . . . . . . . . . . .
110
118
119
121
126
126
索引 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
68 目录
本说明文件介绍有关在 企业版及
Red Hat Enterprise Linux CD 和 Oracle RAC 10g
本说明文件包括以下主题:
有关
www.dell.com/10g
Oracle Real Application Clusters (RAC)
注:如果您仅使用操作系统 CD 来安装操作系统,则本说明文件中的步骤可能不适用。
软件和硬件要求 安装和配置 验证群集硬件与软件配置 为
Oracle RAC
安装
Oracle RAC
配置和安装 添加和删除节点 重新安装软件 附加信息 故障排除 获得帮助 获取和使用开放源代码文件
支持的
Dell
Dell|Oracle
Red Hat® Enterprise Linux
配置存储和网络
Oracle Database 10
Oracle
配置的详情,请参阅“经
支持的配置上安装、配置、重新安装和使用
软件的信息。请将本说明文件与
(单个节点)
g
软件
Dell|Oracle
配合使用来安装软件。
CD
测试和验证的配置”网站
Oracle Database 10g
Dell Deployment
CD
Oracle RAC 10g
如果您购买了
验证群集硬件与软件配置
配置存储和网络
安装
Oracle RAC 10g
Oracle RAC 10g R2
部署服务
部署服务,
专业服务代表将为您提供以下帮助:
Dell
部署指南 69

软件和硬件要求

在系统上安装
Red Hat 网站
找到
Oracle CD
从“经
Dell|Oracle
Dell Deployment CD
列出了
1-1
Oracle RAC
支持的
Dell
软件之前:
rhn.redhat.com
下载 Red Hat CD
套件。
测试和验证的配置”网站
映像。将所有这些下载的
Oracle
配置的基本软件要求。表
映像。
www.dell.com/10g
映像刻录成
CD
至表
1-2
下载适用于要安装的解决方案的
CD
列出了硬件要求。有关驱动程
1-3
序和应用程序最低软件版本的详情,请参阅“支持的软件版本”。
软件组件 配置
Red Hat Enterprise Linux AS EM64T
Oracle Database 10g
EMC® PowerPath
软件要求
1-1.
(第
版) 更新
4
®
注:视用户数量、使用的应用程序、批处理进程以及其它因素而定,您可能需要一个超出最低硬件要求
的系统才能获得所需的性能。
注:所有节点的硬件配置必须完全相同。
最低硬件要求 — 光纤信道群集
1-2.
3
10.2
企业版,包括用于群集的
用于单个节点配置的企业版
4.5.1
RAC
选件
硬件组件 配置
Dell™ PowerEdge™
存储管理
Dell|EMC
千兆位以太网交换机 (两个)
Dell|EMC
[ASM]
光纤信道存储系统 有关支持的配置信息,请访问 “经
光纤信道交换机 (两个) 用于两个至六个节点的八个端口
系统 (使用自动
时为二至八个节点)
70 部署指南
®
Intel
1 GB 的 RAM
内部硬盘驱动器使用的
连接至
三个千兆位网络接口控制器
两个光学主机总线适配器
网站
www.dell.com/10
有关支持的配置信息,请访问 “经 网站
www.dell.com/10
用于七个或八个节点的十六个端口
Xeon®
PERC
处理器系列
,采用
Oracle
的两个
73 GB
g
g
群集文件系统第
PowerEdge
硬盘驱动器
(NIC)
(HBA)
端口
2 版 (OCFS2)
可扩充
RAID
(RAID 1)
端口
Dell|Oracle
Dell|Oracle
控制器
(PERC)
测试和验证的配置”
测试和验证的配置”
硬件组件 配置
PowerEdge
Dell|EMC
Dell|EMC
最低硬件要求 — 单个节点
1-3.
系统
光纤信道存储系统 (可选) 有关支持的配置信息,请访问 “经
光纤信道交换机 (可选) 八个端口
Intel Xeon
1 GB 的 RAM
连接至
两个
网站
处理器系列
PERC
端口
NIC
www.dell.com/10g
的两个
73 GB

许可协议

注:您的 Dell
配置包含
30 天的 Oracle
软件试用许可。如果您没有此产品的许可证,请与

重要说明文件

有关特定硬件组件的详情,请参阅随系统附带的说明文件。 有关
Oracle
产品信息,请参阅
Oracle CD
套件中的《如何开始》指南。

开始之前

在安装
Red Hat Enterprise Linux
Red Hat Enterprise Linux
要下载
1
2
3
4
映像,请执行以下步骤:
ISO
浏览至
Red Hat Network 网站
单击
Channels
(信道)。 在左侧菜单中,单击 在
Easy ISOs
屏幕将显示所有
(简易
Red Hat
季度更新
ISO
操作系统之前,请从
映像,并将这些映像刻录成
ISO
rhn.redhat.com
Easy ISOs
(简易
ISO
)页左侧菜单中,单击
产品的
ISO
映像。
Red Hat Network 网站
)。
(全部)。
All
硬盘驱动器
Dell|Oracle
CD。
(RAID 1)
测试和验证的配置”
Dell
rhn.redhat.com
销售代表联系。
下载
Channel Name
5
从“经
6
Dell|Oracle
(可提供的解决方案列表 将
7
映像刻录成
ISO
(信道名称)菜单中,单击与
测试和验证的配置”网站
)中列出的
(SDL)
www.dell.com/10g
Red Hat Enterprise Linux
CD。
Red Hat Enterprise Linux
下载
Solution Deliverable List
软件的
软件对应的
ISO
部署指南 71
ISO
映像。
安装和配置
注意:为确保正确地安装操作系统,在安装操作系统之前,应断开系统与所有外部存储设备的连接。
本节将向您介绍 库部署。
Red Hat Enterprise Linux
Red Hat Enterprise Linux AS
操作系统的安装以及操作系统的配置以实现
Oracle
数据
使用
Deployment CD
从系统中断开所有外部存储设备的连接。
1
找到您的
2
3
计算机会引导至
当屏幕显示部署菜单时,键入 1 以选择
4
(x86_64)
当显示另一个菜单要求选择部署映像源时,键入 1 以选择
5
(通过
出现提示时,将
6
系统将创建部署分区,并且将 一张
安装完成后,系统将自动重新引导并显示
7
来配置操作系统设置。此时,请勿创建任何操作系统用户。 出现提示时,指定
8
当出现
9
当出现
10
您可以启用防火墙。 作为 root 用户登录。
11
Dell Deployment CD
Dell Deployment CD 1
Deployment CD
注:完成此过程可能需要几分钟。
并引导至部署分区。
CD
Red Hat Setup Agent Welcome(Red Hat Setup Agent
Network Setup
Security Level
安装
Red Hat Enterprise Linux
插入
Dell Deployment CD
复制解决方案)。
Dell Deployment CD 2
用户密码。
root
(网络设置)窗口时,单击
(安全保护级别)窗口时,请禁用防火墙。在完成
以及
Red Hat Enterprise Linux AS EM64T
驱动器,然后重新引导系统。
CD
Oracle 10g R2 EE on Red Hat Enterprise Linux 4 U3
和每张
的内容复制到此分区。复制操作完成后,系统将自动弹出最后
CD
Red Hat 安装 CD
Red Hat Setup Agent
CD
Copy solution by Deployment CD
插入
欢迎)窗口中,单击
(下一步)。稍后将配置网络设置。
Next
CD
驱动器。
Oracle
Next
部署之后,
(下一步)
72 部署指南
配置
Red Hat Enterprise Linux
作为 root 用户登录。
1
Dell Deployment CD 2
2
mount /dev/cdrom /media/cdrom/install.sh
插入
驱动器,然后键入以下命令:
CD
中的内容将被复制到
CD
umount /dev/cdrom
键入
键入 cd /dell-oracle-deployment/scripts/standard
3
Dell Deployment CD
注:脚本将查找并验证安装的组件版本,并根据需要将组件更新为支持的级别。
键入 ./005-oraclesetup.py
4
键入 source /root/.bash_profile 以启动环境变量
5
键入 ./010-hwCheck.py
6
/usr/lib/dell/dell-deploy-cd
,然后从
安装的脚本的目录。
CD
,配置
,以验证
CPU、RAM
驱动器中取出
Red Hat Enterprise Linux
目录中。复制过程完成后,
CD
,浏览至含有从
便安装
磁盘大小符
Oracle
数据库的最低安装要求。
Oracle
如果脚本报告参数错误,请更新硬件配置,然后再次运行脚本 (请参阅表
连接外部存储设备。
7
使用
8
令重新载入
rmmod
1-2 和表 1-3
lpfc
modprobe
驱动程序:
以更新硬件配置)。
命令重新载入
HBA
驱动程序。如,对于
Emulex HBA
出以下命
rmmod lpfc
modprobe lpfc
对于
QLA HBA
使用
Red Hat Network
Red Hat
会定期发布软件更新来修正错误、解决安全题以及添加新功能。您可以通过
Red Hat Network (RHN)
访问“经
注:如果要在单个节点上部署 Oracle 数据库,请跳过以下各节并参阅“配置和部署 Oracle Database 10g
Dell|Oracle
(单个节点)”。
定载入的驱动程序 (lsmod | grep qla),并重新载入这些驱动程序。
对系统软件包进行更新
服务下载这些更新。在使用
测试和验证的配置”网站
www.dell.com/10g
将系统软件更新为最新版本之前,
RHN
,以获取支持的最新配置。
部署指南 73

验证群集硬件与软件配置

在开始群集设置之前,请验证个群集的硬件安装、通信连和节点软件配置。以下节提供了有关 硬件和软件光纤信道群集配置的设置信息。

光纤信道群集设置

专业服务代表为您完成了光纤信道群集的设置。请据本节所的内容,验证硬件连接以及硬
Dell
件和软件配置。
1-1 和 1-3
所示为群集要求的连接览,表
概述了群集连接。
1-4
光纤信道群集的硬件连接
1-1.
千兆位以太网交换机 (专用网络)
Dell|EMC 光纤信道 存储系统
客户机系统
LAN/WAN
PowerEdge 系统
Oracle 数据库)
Dell|EMC 光纤信道交换机 (SAN)
CAT 5e/6 (公共 NIC CAT 5e/6 (铜质千兆位 NIC
光缆
附加光缆
74 部署指南
群集组件 连接
每个
每个 存储系统
每个 交换机
每个千兆位以太网交换机 连接至每个
光纤信道硬件互连
1-4.
PowerEdge
Dell|EMC
Dell|EMC
系统节点 从公共
从专用千兆位
从冗余专用千兆位
CAT 6
从光学
HBA 1
光纤信道
光纤信道
连接至
连接至每个光纤信道交换机的一至四条光缆连接;例如,对于四个端口的配置:
连接至
连接至每个
连接至另一个千兆位以太网交换机的一条
NIC
电缆
HBA 0
LAN
SPA 端口 0 SPA 端口 1 SPB 端口 0 SPB 端口 1
Dell|EMC
连接至局域网
NIC
NIC
连接至光纤信道交换机
连接至光纤信道交换机
的两根
连接至光纤信道交换机 连接至光纤信道交换机 连接至光纤信道交换机 连接至光纤信道交换机
PowerEdge
PowerEdge
(LAN)
连接至千兆位以太网交换机的一根
连接至冗余千兆位以太网交换机的一根
CAT 5e 或 CAT 6
光纤信道存储系统的一至四条光缆连接
系统的
HBA
系统上的专用千兆位
的一根增强型
的一根光缆
0
的一根光缆
1
电缆
的一根
0
的一根
1
的一根
1
的一根
0
的一条光缆
NIC
CAT 5e 或 CAT 6
5 类 (CAT 5e) 或 CAT 6
CAT 5e或 CAT 6
CAT 5e
光缆 光缆
光缆 光缆
的一条
CAT 5e 或 CAT 6
连接
电缆
电缆
连接
验证是否已为群集完成以下任务:
所有硬件均已安装在机中。
所有硬件互连均已按照图
所有逻辑设备编号
(LUN)
1-1
1-3
1-4
独立磁盘冗余阵
所示行了安装。
(RAID)
分组和存储分组均已在
道存储系统上创建。
存储分组分配群集中的节点。
继续进行以下节之前,通过外观检查所有硬件和连情保安装正确
Fibre Channel Hardware and Software Configurations
• Each node must contain the minimum required hardware peripheral components described in Table 1-2.
• Each node must have the following software installed:
- Red Hat Enterprise Linux software (see Table 1-1)
- Fibre Channel HBA driver
• The Fibre Channel storage system must be configured with the following:
- A minimum of three LUNs created and assigned to the cluster storage group (see Table 1-5)
- A minimum LUN size of 5 GB

Table 1-5. LUNs for the Cluster Storage Group
• First LUN: minimum size of 512 MB; three partitions of 128 MB each; used for the voting disk, the Oracle Cluster Registry (OCR) file, and the Oracle system parameter file.
• Second LUN: minimum size larger than the size of your database; one partition; used for the database.
• Third LUN: minimum size of at least twice the size of the second LUN; one partition; used for the Flash Recovery Area.

Cabling Your Storage System
You can configure your cluster storage system in a direct-attached configuration or in a four-port SAN-attached configuration, depending on your needs. See the following procedures for both configurations.

Figure 1-2. Cabling in a Direct-Attached Fibre Channel Cluster
[Figure: node 1 and node 2, each with two HBA ports, are cabled directly to the SP-A and SP-B storage processor (SP) ports of a CX700 storage system.]
Direct-Attached Configuration
To configure your nodes in a direct-attached configuration (see Figure 1-2), perform the following steps:
1 Connect one optical cable from HBA 0 on node 1 to port 0 of SP-A.
2 Connect one optical cable from HBA 1 on node 1 to port 0 of SP-B.
3 Connect one optical cable from HBA 0 on node 2 to port 1 of SP-A.
4 Connect one optical cable from HBA 1 on node 2 to port 1 of SP-B.
Figure 1-3. Cabling in a SAN-Attached Fibre Channel Cluster
[Figure: node 1 and node 2, each with two HBA ports, connect through Fibre Channel switches sw0 and sw1 to the SP-A and SP-B ports of a CX700 storage system.]
SAN-Attached Configuration
To configure your nodes in a four-port SAN-attached configuration (see Figure 1-3), perform the following steps:
1 Connect one optical cable from SP-A port 0 to Fibre Channel switch 0.
2 Connect one optical cable from SP-A port 1 to Fibre Channel switch 1.
3 Connect one optical cable from SP-A port 2 to Fibre Channel switch 0.
4 Connect one optical cable from SP-A port 3 to Fibre Channel switch 1.
5 Connect one optical cable from SP-B port 0 to Fibre Channel switch 1.
6 Connect one optical cable from SP-B port 1 to Fibre Channel switch 0.
7 Connect one optical cable from SP-B port 2 to Fibre Channel switch 1.
8 Connect one optical cable from SP-B port 3 to Fibre Channel switch 0.
9 Connect one optical cable from HBA 0 on node 1 to Fibre Channel switch 0.
10 Connect one optical cable from HBA 1 on node 1 to Fibre Channel switch 1.
11 Connect one optical cable from HBA 0 on node 2 to Fibre Channel switch 0.
12 Connect one optical cable from HBA 1 on node 2 to Fibre Channel switch 1.
Configuring Storage and Networking for Oracle RAC 10g
This section provides the following information and procedures for setting up a Fibre Channel cluster running a seed database:
• Configuring the public and private networks
• Securing your system
• Verifying the storage configuration
• Configuring shared storage for Cluster Ready Services (CRS) and the Oracle database
Oracle RAC 10g is a complex database configuration that requires an ordered list of procedures. To configure the networks and storage in a minimal amount of time, perform the following procedures in order.

Configuring the Public and Private Networks
This section describes the steps required to configure the public and private cluster networks.
NOTE: Each node requires a unique public and private Internet Protocol (IP) address and an additional public IP address to serve as the virtual IP address for the client connections and for connection failover. The virtual IP address must belong to the same subnet as the public IP address. All public IP addresses, including the virtual IP address, should be registered with the domain name service and must be routable.
Depending on the number of NIC ports available, configure the interfaces as shown in Table 1-6.

Table 1-6. NIC Port Assignments

NIC Port   Three Available Ports      Four Available Ports
1          Public IP and virtual IP   Public IP
2          Private IP (bonded)        Private IP (bonded)
3          Private IP (bonded)        Private IP (bonded)
4          NA                         Virtual IP

Configuring the Public Network
NOTE: Ensure that your public IP address is a valid, routable IP address.
If you have not already configured your public network, configure it by performing the following steps on each node:
1 Log in as root.
2 Edit the network device file /etc/sysconfig/network-scripts/ifcfg-eth#, where # is the number of the network device, and configure the file as follows:
DEVICE=eth0
ONBOOT=yes
IPADDR=<Public IP Address>
NETMASK=<Subnet mask>
BOOTPROTO=static
HWADDR=<MAC Address>
SLAVE=no
3 Edit the /etc/sysconfig/network file and, if necessary, replace localhost.localdomain with the fully qualified public node name. For example, the line for node 1 would be:
HOSTNAME=node1.domain.com
4 Type:
service network restart
5 Type ifconfig to verify that the IP addresses are set correctly.
6 To check your network configuration, ping each public IP address from a client on the LAN outside the cluster.
7 Connect to each node to verify that the public network is functioning, and type ssh <public IP> to verify that the secure shell (ssh) command is working.
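For illustration only, here is one way the completed device file for eth0 might look. Every value below (address, netmask, and MAC address) is a placeholder and must be replaced with your node's actual settings:
# /etc/sysconfig/network-scripts/ifcfg-eth0 (sample values only)
DEVICE=eth0
ONBOOT=yes
IPADDR=192.0.2.101
NETMASK=255.255.255.0
BOOTPROTO=static
HWADDR=00:11:22:33:44:55
SLAVE=no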
Configuring the Private Network Using Bonding
Before you deploy the cluster, configure the private cluster network to allow the nodes to communicate with each other. This involves configuring network bonding and assigning a private IP address and host name to each node in the cluster.
To set up network bonding for Broadcom or Intel NICs and to configure the private network, perform the following steps on each node:
1 Log in as root.
2 Add the following line to the /etc/modprobe.conf file:
alias bond0 bonding
3 For high availability, edit the /etc/modprobe.conf file and set the option for link monitoring. The default value for miimon is 0, which disables link monitoring. Change the value to 100 milliseconds initially, and adjust it as needed to improve performance, as shown in the following example. Type:
options bonding miimon=100 mode=1
4 In the /etc/sysconfig/network-scripts/ directory, create or edit the ifcfg-bond0 configuration file. For example, using sample network parameters, the file would appear as follows:
DEVICE=bond0
IPADDR=192.168.0.1
NETMASK=255.255.255.0
NETWORK=192.168.0.0
BROADCAST=192.168.0.255
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
The entries for NETMASK, NETWORK, and BROADCAST are optional. DEVICE=bondn is the required name for the bond, where n specifies the bond number. IPADDR is the private IP address. To use bond0 as a virtual device, you must specify which devices are bonded as slaves.
5 For each device that is a bond member, perform the following steps:
a In the /etc/sysconfig/network-scripts/ directory, edit the ifcfg-ethn file so that it contains the following lines:
DEVICE=ethn
HWADDR=<MAC Address>
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
b Type service network restart and ignore any warnings.
6 On each node, type ifconfig to verify that the private interface is functioning. The private IP address for the node should be assigned to the private interface bond0.
7 When the private IP addresses are set up on every node, ping each private IP address from one node to ensure that the private network is functioning.
8 Connect to each node and verify that the private network and ssh are functioning correctly by typing:
ssh <private IP>
9 On each node, modify the /etc/hosts file by adding the following lines:
127.0.0.1 localhost.localdomain localhost
<private IP node1> <private hostname node1>
<private IP node2> <private hostname node2>
<public IP node1> <public hostname node1>
<public IP node2> <public hostname node2>
<virtual IP node1> <virtual hostname node1>
<virtual IP node2> <virtual hostname node2>
NOTE: The examples in this step and the following step are for a two-node configuration; add lines for each additional cluster node.
10 On each node, create or modify the /etc/hosts.equiv file by listing all of your public IP addresses or host names. For example, if you have one public host name, one virtual IP address, and one virtual host name for each node, add the following lines:
<public hostname node1> oracle
<public hostname node2> oracle
<virtual IP or hostname node1> oracle
<virtual IP or hostname node2> oracle
11 As the user oracle, connect to each node to verify that the remote shell (rsh) command is working by typing:
rsh <public hostname nodex>
where x is the node number.
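Beyond ifconfig, the bonding driver reports its state under /proc; a quick check (the path assumes the bond0 device configured above):
# "MII Status: up" should be reported for bond0 and for each slave NIC
cat /proc/net/bonding/bond0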

Verifying the Storage Configuration
While configuring the clusters, create partitions on your Fibre Channel storage system. To create the partitions, all nodes must be able to detect the external storage devices. To verify that each node can detect each LUN or logical disk, perform the following steps:
1 For Dell|EMC Fibre Channel storage systems, verify that EMC PowerPath (see Table 1-7) is installed on each node and that each node is assigned to the correct storage group in the EMC Navisphere® software. For instructions, see the documentation that came with your Dell|EMC Fibre Channel storage system.
NOTE: The Dell Professional Services representative who installed your cluster performed this step. If you reinstall the software on a node, you must perform this step yourself.
2 Visually verify that the storage devices and the nodes are connected correctly to the Fibre Channel switch (see Figure 1-1 and Table 1-4).
3 Verify that you are logged in as root.
4 On each node, type:
more /proc/partitions
The node detects and displays the LUNs or logical disks, as well as the partitions created on those external devices. PowerPath virtual devices appear in the list as, for example, /dev/emcpowera, /dev/emcpowerb, and /dev/emcpowerc.
NOTE: The listed devices may vary depending on how your storage system is configured.
5 In the /proc/partitions file, ensure that:
• All PowerPath virtual devices appear in the file with similar device names across all nodes. For example, /dev/emcpowera, /dev/emcpowerb, and /dev/emcpowerc.
• The Fibre Channel LUNs appear as SCSI devices, and each node is configured with the same number of LUNs. For example, if a node is configured with a SCSI RAID container of internal drives, and the container is attached to a Fibre Channel storage device with three logical disks, sda identifies the node's internal RAID container, while emcpowera, emcpowerb, and emcpowerc identify the Fibre Channel LUNs (PowerPath virtual devices).
6 If the external storage devices do not appear in the /proc/partitions file, reboot the node.
Disable SELinux
To run the Oracle database, you must disable SELinux.
To temporarily disable SELinux, perform the following steps:
1 Log in as root.
2 At the command prompt, type:
setenforce 0
To permanently disable SELinux, perform the following steps on all nodes:
1 Open the grub.conf file.
2 Locate the kernel command line and append the following option:
selinux=0
For example:
kernel /vmlinuz-2.6.9-34.ELlargesmp ro root=LABEL=/ apic rhgb quiet selinux=0
3 Reboot the system.
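To confirm the change, the SELinux mode can be queried; a quick check (getenforce ships with the SELinux user-space utilities):
# Prints "Permissive" after setenforce 0, or "Disabled" after rebooting
# with selinux=0 on the kernel command line
getenforce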
Configuring Shared Storage for Oracle Clusterware and the Database Using OCFS2
Before you begin using OCFS2:
• Download the RPMs from http://oss.oracle.com/projects/ocfs2/files/RedHat/RHEL4/x86_64/1.2.3-1. Find your kernel version by typing:
uname -r
and then download the OCFS2 packages that match your kernel version.
• Download the ocfs2-tools packages from http://oss.oracle.com/projects/ocfs2-tools/files/RedHat/RHEL4/x86_64/1.2.1-1.
• Install the ocfs2 and ocfs2-tools packages by typing:
rpm -ivh *
To configure your storage using OCFS2, perform the following steps:
1 On the first node, log in as root.
2 Perform the following steps to configure the storage devices:
a Start the X Window System by typing:
startx
b Generate the OCFS2 configuration file (/etc/ocfs2/cluster.conf) with a default cluster name of ocfs2 by typing the following in a terminal window:
ocfs2console
c From the menu, click Cluster and then Configure Nodes.
If the cluster is offline, the console starts it, and an information window appears displaying that message. Close the information window. The Node Configuration window appears.
d To add nodes to the cluster, click Add. Enter the node name (the same as the host name) and the private IP address. Retain the default value for the port number. After entering all the details, click OK. Repeat this step to add each node to the cluster.
e When all the nodes have been added, click Apply and then click Close in the Node Configuration window.
f From the menu, click Cluster and then Propagate Configuration.
The Propagate Cluster Configuration window appears. Wait until the message Finished appears in the window, and then click Close.
g Select File and then Quit.
3 On all nodes, enable the cluster stack at startup by typing:
/etc/init.d/o2cb enable
4 Change the O2CB_HEARTBEAT_THRESHOLD value on all of the nodes using the following steps:
a Stop the O2CB service on all nodes by typing:
/etc/init.d/o2cb stop
b On all nodes, edit the O2CB_HEARTBEAT_THRESHOLD value in /etc/sysconfig/o2cb to 61.
c Start the O2CB service on all nodes by typing:
/etc/init.d/o2cb start
5 For a Fibre Channel cluster, on the first node, use the fdisk utility to create one partition on each of the external storage devices:
a Create a primary partition for the entire device by typing:
fdisk /dev/emcpowerx
Type h for help within the fdisk utility.
b Verify that the new partition exists by typing:
cat /proc/partitions
c If the new partition is not listed, type:
sfdisk -R /dev/<device name>
NOTE: The following steps use the sample values /u01, /u02, and /u03 for mount points and u01, u02, and u03 as labels.
6 On any one node, format the external storage devices with a 4 K block size, a 128 K cluster size, and 4 node slots (node slots refer to the number of cluster nodes) by using the command-line utility mkfs.ocfs2, as follows:
mkfs.ocfs2 -b 4K -C 128K -N 4 -L u01 /dev/emcpowera1
mkfs.ocfs2 -b 4K -C 128K -N 4 -L u02 /dev/emcpowerb1
mkfs.ocfs2 -b 4K -C 128K -N 4 -L u03 /dev/emcpowerc1
NOTE: For more information about setting the format parameters of clusters, see http://oss.oracle.com/projects/ocfs2/dist/documentation/ocfs2_faq.html
7 On each node, perform the following steps:
a Create mount points for each OCFS2 partition. To perform this procedure, create the target partition directories and set the ownerships by typing:
mkdir -p /u01 /u02 /u03
chown -R oracle.dba /u01 /u02 /u03
b On each node, modify the /etc/fstab file by adding the following lines for the Fibre Channel storage system:
/dev/emcpowera1 /u01 ocfs2 _netdev,datavolume,nointr 0 0
/dev/emcpowerb1 /u02 ocfs2 _netdev,datavolume,nointr 0 0
/dev/emcpowerc1 /u03 ocfs2 _netdev,datavolume,nointr 0 0
Make appropriate entries for all OCFS2 volumes.
c On each node, type the following to mount all the volumes listed in the /etc/fstab file:
mount -a -t ocfs2
d On each node, add the following command to the /etc/rc.local file:
mount -a -t ocfs2
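Before moving on, it is worth confirming on every node that all three volumes mounted; a quick check (mount points assume the sample values /u01, /u02, and /u03 above):
# Lists only the mounted OCFS2 file systems
mount -t ocfs2
# Shows capacity and usage for the three volumes
df -h /u01 /u02 /u03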
Configuring Shared Storage for Oracle Clusterware and the Database Using ASM

Configuring Shared Storage for Oracle Clusterware
This section provides instructions for configuring shared storage for Oracle Clusterware using the raw device interface.
1 On the first node, use the fdisk utility to create three partitions on one external storage device. Type:
fdisk /dev/emcpowerx
and create three 150 MB partitions: one for the cluster repository (OCR), one for the voting disk, and one for the Oracle system parameter file.
2 Verify the new partitions by typing:
more /proc/partitions
On all nodes, if the new partitions do not appear in the /proc/partitions file, type:
sfdisk -R /dev/<device name>
3 On all nodes, perform the following steps:
a Edit the /etc/sysconfig/rawdevices file and add the following lines for a Fibre Channel cluster:
/dev/raw/votingdisk /dev/emcpowera1
/dev/raw/ocr.dbf /dev/emcpowera2
/dev/raw/spfile+ASM.ora /dev/emcpowera3
b Type udevstart to create the raw devices.
c Type service rawdevices restart to restart the raw devices service.
NOTE: If the three partitions on the PowerPath virtual devices are not consistent across the nodes, modify the /etc/sysconfig/rawdevices configuration file accordingly.
Configuring Shared Storage for the Database Using ASM
NOTE: To configure shared storage using ASM, you can use either the raw device interface or the Oracle ASM library driver.
To configure your cluster using ASM, perform the following steps on all nodes:
1 Log in as root.
2 On all nodes, use the fdisk utility to create one partition on each of the other two external storage devices:
a Create a primary partition for the entire device by typing:
fdisk /dev/emcpowerx
Type h for help within the fdisk utility.
b Verify that the new partition exists by typing:
cat /proc/partitions
If the new partition is not listed, type:
sfdisk -R /dev/<device name>
Configuring Shared Storage Using the Raw Device Interface
1 Edit the /etc/sysconfig/rawdevices file and add the following lines for a Fibre Channel cluster:
/dev/raw/ASM1 /dev/emcpowerb1
/dev/raw/ASM2 /dev/emcpowerc1
2 Type the following to create the raw devices:
udevstart
3 Type the following to restart the raw devices service:
service rawdevices restart
4 To add an additional ASM disk (for example, ASM3), edit the /etc/udev/scripts/raw-dev.sh file on all cluster nodes and add the entries for the new disk, as in the following example:
MAKEDEV raw
mv /dev/raw/raw1 /dev/raw/votingdisk
mv /dev/raw/raw2 /dev/raw/ocr.dbf
mv /dev/raw/raw3 /dev/raw/spfile+ASM.ora
mv /dev/raw/raw4 /dev/raw/ASM1
mv /dev/raw/raw5 /dev/raw/ASM2
mv /dev/raw/raw6 /dev/raw/ASM3
chmod 660 /dev/raw/{votingdisk,ocr.dbf,spfile+ASM.ora,ASM1,ASM2,ASM3}
chown oracle.dba /dev/raw/{votingdisk,ocr.dbf,spfile+ASM.ora,ASM1,ASM2,ASM3}
To add additional ASM disks, type udevstart on all nodes and repeat step 4.
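To confirm the bindings after udevstart, the raw utility can query them; a quick check (device names assume the entries above):
# Each raw device should map to the major/minor numbers of the
# corresponding emcpower partition
raw -qa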
Configuring Shared Storage Using the ASM Library Driver
1 Log in as root.
2 Open a terminal window and perform the following steps on all nodes:
a Type:
service oracleasm configure
b Type the following inputs for all of the nodes:
Default user to own the driver interface [ ]: oracle
Default group to own the driver interface [ ]: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
3 In a terminal window on the first node, type the following commands and press <Enter>:
service oracleasm createdisk ASM1 /dev/emcpowerb1
service oracleasm createdisk ASM2 /dev/emcpowerc1
4 Repeat step 3 for any additional ASM disks that you need to create.
5 Verify that the ASM disks are created and marked for ASM usage. In a terminal window, type the following command and press <Enter>:
service oracleasm listdisks
The disks that you created in step 3 appear. For example:
ASM1
ASM2
6 Ensure that the remaining nodes are able to access the ASM disks that you created in step 3. On each remaining node, open a terminal window, type the following command, and press <Enter>:
service oracleasm scandisks
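If the scan succeeds, the ASM library driver typically exposes the labeled disks under /dev/oracleasm; a quick check (disk names assume ASM1 and ASM2 from step 3):
# Both disks should be listed and owned by oracle:dba
ls -l /dev/oracleasm/disks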
Installing Oracle RAC 10g
This section describes the steps required to install Oracle RAC 10g, which involves installing CRS and installing the Oracle Database 10g software. Dell recommends that you create a seed database to verify that the cluster works correctly before you deploy it in a production environment.

Before You Begin
To prevent failures during the installation procedure, configure all the nodes with identical system clock settings. Synchronize your node system clock with a Network Time Protocol (NTP) server. If you cannot access an NTP server, perform one of the following procedures:
• Ensure that the system clock on the Oracle Database software installation node is set to a later time than the remaining nodes.
• Configure one of your nodes as an NTP server to synchronize the remaining nodes in the cluster.
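As a sketch of the NTP approach on Red Hat Enterprise Linux 4 (the server name below is a placeholder for your local time source), run the following on each node:
# Point the node at a common time source (placeholder host name)
echo "server ntp.example.com" >> /etc/ntp.conf
# Restart the NTP daemon and enable it across reboots
service ntpd restart
chkconfig ntpd on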
Installing Oracle Clusterware
1 Log in as root.
2 Type the following to start the X Window System:
startx
3 Open a terminal window and type:
xhost +
4 Mount the Oracle Clusterware CD.
5 Type:
<CD_mountpoint>/cluvfy/runcluvfy.sh stage -pre crsinst -n node1,node2 -r 10gR2 -verbose
where node1 and node2 are the public host names.
If your system is not configured correctly, troubleshoot the issues and then repeat the runcluvfy.sh command.
If your system is configured correctly, the following message appears:
Pre-check for cluster services setup was successful on all the nodes.
6 Type:
su - oracle
7 Type the following commands to start the Oracle Universal Installer:
unset ORACLE_HOME
<CD_mountpoint>/runInstaller
The following message appears:
Was 'rootpre.sh' been run by root? [y/n] (n)
8 Type y to proceed.
9 In the Welcome window, click Next.
10 In the Specify Home Details window, change the Oracle home path to /crs/oracle/product/10.2.0/crs and click Next.
11 In the Product-Specific Prerequisite Checks window, ensure that Succeeded appears in the Status column for each system check, and then click Next.
12 In the Specify Cluster Configuration window, add the nodes that will be managed by Oracle Clusterware:
a Click Add.
b Enter a name for the Public Node Name, Private Node Name, and Virtual Host Name, and then click OK.
c Repeat steps a and b for the remaining nodes.
d In the Cluster Name field, type a name for your cluster. The default cluster name is crs.
e Click Next.
13 In the Specify Network Interface Usage window, ensure that the public and private interface names are correct. To modify an interface, perform the following steps:
a Select the interface name and click Edit.
b In the Edit private interconnect type window, in the Interface Type box, select the appropriate interface type and click OK.
c In the Specify Network Interface Usage window, ensure that the public and private interface names are correct, and then click Next.
14 In the Specify Oracle Cluster Registry (OCR) Location window, perform the following steps:
a Under OCR Configuration, select External Redundancy.
b In the Specify OCR Location field, type:
/dev/raw/ocr.dbf
or, if you are using OCFS2:
/u01/ocr.dbf
c Click Next.
15 In the Specify Voting Disk Location window, perform the following steps:
a Under Voting Disk Configuration, select External Redundancy.
b In the Specify Voting Disk Location field, type:
/dev/raw/votingdisk
or, if you are using OCFS2:
/u01/votingdisk
c Click Next.
16 In the Summary window, click Install.
Oracle Clusterware is installed on your system. When the installation completes, the Execute Configuration scripts window appears.
17 Follow the instructions in the window and then click OK.
NOTE: If root.sh hangs while formatting the voting disk, apply Oracle patch 4679769 and then repeat this step.
18 In the Configuration Assistants window, ensure that Succeeded appears in the Status column for each tool name.
Next, the End of Installation window appears.
19 Click Exit.
20 On all nodes, perform the following steps:
a Verify the Oracle Clusterware installation by typing:
olsnodes -n -v
A list of the public node names of all nodes in the cluster appears.
b Type:
crs_stat -t
All running Oracle Clusterware services appear.
Installing the Oracle Database 10g Software
1 Log in as root and type:
cluvfy stage -pre dbinst -n node1,node2 -r 10gR2 -verbose
where node1 and node2 are the public host names.
If your system is not configured correctly, see "Troubleshooting" for more information.
If your system is configured correctly, the following message appears:
Pre-check for database installation was successful.
2 As the user root, type:
xhost +
3 As the user root, mount the Oracle Database 10g CD.
4 Log in as the user oracle and type:
<CD_mountpoint>/runInstaller
The Oracle Universal Installer starts.
5 In the Welcome window, click Next.
6 In the Select Installation Types window, select Enterprise Edition and click Next.
7 In the Specify Home Details window, verify that the complete Oracle home path in the Path field is /opt/oracle/product/10.2.0/db_1, and then click Next.
NOTE: The Oracle home name in this step must be different from the Oracle home name that you identified during the CRS installation. You cannot install the Oracle Database 10g Enterprise Edition with RAC into the same home path that you used for CRS.
8 In the Specify Hardware Cluster Installation Mode window, click Select All and click Next.
9 In the Product-Specific Prerequisite Checks window, ensure that Succeeded appears in the Status column for each system check, and then click Next.
NOTE: In some cases, a warning message may appear regarding the swap size. Ignore the warning message and click Yes to proceed.
10 In the Select Configuration Option window, select Install database Software only and click Next.
11 In the Summary window, click Install.
The Oracle Database software is installed on your cluster. Next, the Execute Configuration Scripts window appears.
12 Follow the instructions in the window and click OK.
13 In the End of Installation window, click Exit.
RAC Post Deployment Fixes and Patches
This section provides the required fixes and patch information for deploying Oracle RAC 10g.

Reconfiguring the CSS Miscount for Proper EMC PowerPath Failover
When an HBA, switch, or EMC Storage Processor (SP) failure occurs, the total PowerPath failover time to an alternate device may exceed 105 seconds. The default CSS disk time-out for Oracle 10g R2 version 10.2.0.1 is 60 seconds. To ensure that the PowerPath failover procedure functions correctly, increase the CSS time-out to 120 seconds. For more information, see Oracle Metalink Note 294430.1 on the Oracle Metalink website at metalink.oracle.com.
To increase the CSS time-out, perform the following steps:
1 Shut down the database and CRS on all nodes except one.
2 On the running node, log in as root and type:
crsctl set css misscount 120
3 Reboot all nodes for the CSS setting to take effect.
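Once the nodes are back up, the value can be read back to confirm the change; a quick check, assuming the crsctl get subcommand is available in your 10g R2 Clusterware build:
# Should report 120 after the reboot
crsctl get css misscount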
Installing the Oracle Database 10g 10.2.0.2 Patchset

Downloading and Extracting the Installation Software
1 On the first node, log in as oracle.
2 Create a folder for the patches and utilities at /opt/oracle/patches.
3 Open a web browser and navigate to the Oracle Metalink support website at metalink.oracle.com.
4 Log in to your Oracle Metalink account.
5 Search for the patch number 4547817 with Linux x86-64 (AMD64/EM64T) as the platform.
6 Download the patch to the /opt/oracle/patches directory.
7 To unzip the downloaded zip file, type the following in a terminal window and press <Enter>:
unzip p4547817_10202_LINUX-x86-64.zip
Upgrading the Oracle Clusterware Installation
1 On the first node, log in as root.
2 Shut down Oracle Clusterware. To do this, type the following in a terminal window and press <Enter>:
crsctl stop crs
3 On the remaining nodes, open a terminal window and repeat steps 1 and 2.
4 On the first node, log in as oracle.
5 In a terminal window, type the following command and press <Enter>:
export ORACLE_HOME=/crs/oracle/product/10.2.0/crs
6 Start the Oracle Universal Installer. To do this, type the following in the terminal window and press <Enter>:
cd /opt/oracle/patches/Disk1/
./runInstaller
The Welcome screen appears.
7 Click Next.
8 In the Specify Home Details screen, click Next.
9 In the Specify Hardware Cluster Installation Mode screen, click Next.
10 In the Summary screen, click Install.
The Oracle Universal Installer scans your system, displays all the patches that must be installed, and installs them on your system. When the installation completes, the End of Installation screen appears.
NOTE: This procedure may take several minutes to complete.
11 Read all the instructions displayed in the message window that appears.
NOTE: Do not shut down the Oracle Clusterware daemons, as you did in steps 1 and 2 of this procedure.
12 Open a terminal window.
13 Log in as root.
14 Type the following command and press <Enter>:
$ORA_CRS_HOME/install/root102.sh
15 Repeat steps 12 through 14 on the remaining nodes, one node at a time.
16 On the first node, return to the End of Installation screen.
17 Click Exit.
18 Click Yes to exit the Oracle Universal Installer.
Upgrading the RAC Installation
1 On the first node, open a terminal window.
2 Log in as oracle.
3 From the node where you installed the Oracle Database software, shut down the Oracle Clusterware node applications on all nodes:
a On the node, open a terminal window.
b Log in as oracle.
c In the terminal window, type the following command and press <Enter>:
$ORACLE_HOME/bin/srvctl stop nodeapps -n <node name>
NOTE: Ignore any warning messages that may appear.
4 On this node, repeat step 3c for each node in the cluster, changing <node name> to the name of the given node.
5 On the first node, open a terminal window.
6 Log in as oracle.
7 Open a terminal window.
8 Type the following command and press <Enter>:
export ORACLE_HOME=/opt/oracle/product/10.2.0/db_1
9 Start the Oracle Universal Installer. To do this, type the following in the terminal window and press <Enter>:
cd /opt/oracle/patches/Disk1/
./runInstaller
The Welcome screen appears.
10 Click Next.
11 In the Specify Home Details screen, click Next.
12 In the Specify Hardware Cluster Installation Mode screen, click Next.
13 In the Summary screen, click Install.
The Oracle Universal Installer scans your system, displays all the patches that must be installed, and installs them on your system. When the installation completes, a message window appears, prompting you to run root.sh as the user root.
14 Open a terminal window.
15 Type the following command and press <Enter>:
/opt/oracle/product/10.2.0/db_1/root.sh
16 Repeat steps 14 and 15 on the remaining nodes, one node at a time.
When the procedure completes, the End of Installation screen appears.
NOTE: This procedure may take several minutes to complete.
17 In the End of Installation screen, click Exit.
18 Click Yes to exit the Oracle Universal Installer.
19 On the first node, open a terminal window.
20 Log in as oracle.
21 Type the following command and press <Enter>:
srvctl start nodeapps -n <node name>
where <node name> is the public host name of the node.
22 On all of the other nodes, shut down CRS by issuing the following command:
crsctl stop crs
23 As the user oracle, from the node where the patchset was applied, copy /opt/oracle/product/10.2.0/db_1/rdbms/lib/libknlopt.a to all of the other nodes in the cluster. For example, to copy the file from node 1 to node 2, type the following command:
scp /opt/oracle/product/10.2.0/db_1/rdbms/lib/libknlopt.a node2:/opt/oracle/product/10.2.0/db_1/rdbms/lib/libknlopt.a
24 Relink the Oracle binary on all of the nodes by typing the following commands on each node:
cd /opt/oracle/product/10.2.0/db_1/rdbms/lib
make -f ins_rdbms.mk ioracle
NOTE: Perform this step as root.
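For clusters with more than two nodes, the copy in step 23 can be scripted; a small sketch (the node names node2 and node3 are placeholders for your remaining public host names):
# Run as the user oracle from the node where the patchset was applied
for n in node2 node3; do
    scp /opt/oracle/product/10.2.0/db_1/rdbms/lib/libknlopt.a \
        $n:/opt/oracle/product/10.2.0/db_1/rdbms/lib/libknlopt.a
done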

Configuring the Listener
This section describes the steps to configure the listener, which is required to establish a remote client connection to a database.
On one node only, perform the following steps:
1 Log in as root.
2 Type the following to start the X Window System:
startx
3 Open a terminal window and type:
xhost +
4 As the user oracle, type netca to start the Net Configuration Assistant.
5 Select Cluster Configuration and click Next.
6 In the TOPSNodes window, click Select All Nodes and click Next.
7 In the Welcome window, select Listener Configuration and click Next.
8 In the Listener Configuration, Listener window, select Add and click Next.
9 In the Listener Configuration, Listener Name window, type LISTENER in the Listener Name field and click Next.
10 In the Listener Configuration, Select Protocols window, select TCP and click Next.
11 In the Listener Configuration, TCP/IP Protocol window, select Use the standard port number of 1521 and click Next.
12 In the Listener Configuration, More Listeners? window, select No and click Next.
13 In the Listener Configuration Done window, click Next.
14 Click Finish.
Creating the Seed Database Using OCFS2
1 On the first node, as the user oracle, start the Database Configuration Assistant (DBCA) by typing:
dbca -datafileDestination /u02
2 In the Welcome window, select Oracle Real Application Cluster Database and click Next.
3 In the Operations window, click Create a Database and click Next.
4 In the Node Selection window, click Select All and click Next.
5 In the Database Templates window, click Custom Database and click Next.
6 In the Database Identification window, enter a Global Database Name, such as racdb, and click Next.
7 In the Management Options window, click Next.
8 In the Database Credentials window, perform the following steps:
a Click Use the same password for all accounts.
b Complete the password selections and entries.
c Click Next.
9 In the Storage Options window, select Cluster File System and click Next.
10 In the Database File Locations window, click Next.
11 In the Recovery Configuration window, perform the following steps:
a Click Specify Flash Recovery Area.
b Click Browse and select /u03.
c Specify the Flash Recovery Area size.
d Click Next.
12 In the Database Content window, click Next.
13 In the Database Services window, click Next.
14 In the Initialization Parameters window, if your cluster has more than four nodes, change the Shared Pool value to 500 MB, and then click Next.
15 In the Database Storage window, click Next.
16 In the Creation Options window, select Create database and click Finish.
17 In the Summary window, click OK to create the database.
NOTE: The creation of the seed database may take more than an hour.
NOTE: If you receive the message Enterprise Manager Configuration Error during the seed database creation, click OK to ignore the error.
When the database creation completes, the Password Management window appears.
18 Click Exit.
A message appears indicating that the cluster database is being started on all nodes.
19 On each node, perform the following steps:
a Determine which database instance exists on that node by typing:
srvctl status database -d <database name>
b Add the ORACLE_SID environment variable entry to the oracle user profile by typing:
echo "export ORACLE_SID=racdbx" >> /home/oracle/.bash_profile
source /home/oracle/.bash_profile
where racdbx is the database instance identifier assigned to the node.
This example assumes that racdb is the global database name that you defined in DBCA.
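Beyond step 19a, one quick cross-node sanity check is to query the instance view from any node; a minimal sketch, assuming the oracle environment (ORACLE_HOME and ORACLE_SID) is already set in the shell:
sqlplus -S / as sysdba <<EOF
-- Every cluster node should report one open instance
select inst_id, instance_name, status from gv\$instance;
EOF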
Creating the Seed Database Using ASM
This section contains procedures for creating the seed database using Oracle ASM and for verifying the seed database.
Perform the following steps:
1 Log in as root and type:
cluvfy stage -pre dbcfg -n node1,node2 -d $ORACLE_HOME -verbose
where node1 and node2 are the public host names.
If your system is not configured correctly, see "Troubleshooting" for more information.
If your system is configured correctly, the following message appears:
Pre-check for database configuration was successful.
2 On the first node, as the user oracle, type dbca & to start the Database Configuration Assistant (DBCA).
3 In the Welcome window, select Oracle Real Application Cluster Database and click Next.
4 In the Operations window, click Create a Database and click Next.
5 In the Node Selection window, click Select All and click Next.
6 In the Database Templates window, click Custom Database and click Next.
7 In the Database Identification window, enter a Global Database Name, such as racdb, and click Next.
8 In the Management Options window, click Next.
9 In the Database Credentials window, select a password option, enter the appropriate password information (if required), and click Next.
10 In the Storage Options window, click Automatic Storage Management (ASM) and click Next.
11 In the Create ASM Instance window, perform the following steps:
a In the SYS password field, type a password.
b Select Create server parameter file (SPFILE).
c In the Server Parameter Filename field, type:
/dev/raw/spfile+ASM.ora
d Click Next.
12 When a message appears indicating that DBCA is ready to create and start the ASM instance, click OK.
13 Under ASM Disk Groups, click Create New.
14 In the Create Disk Group window, perform the following steps:
a Enter a name for the disk group to be created, such as databaseDG, select External Redundancy, and then select the disks to include in the disk group.
If you are using the raw device interface, select /dev/raw/ASM1.
b If you are using the ASM library driver and cannot access the candidate disks, click Change Disk Discovery String, type ORCL:* as the string, and then select ORCL:ASM1.
c Click OK.
A window appears indicating that the disk group is being created. The first ASM disk group is created on the cluster, and the ASM Disk Groups window appears.
15 Repeat step 14 for the remaining ASM disk group, using flashbackDG as the disk group name.
16 In the ASM Disk Groups window, select the disk group that you want to use for database storage (for example, databaseDG) and click Next.
17 In the Database File Locations window, select Use Oracle-Managed Files and click Next.
18 In the Recovery Configuration window, click Browse, select the flashback group that you created in step 15 (for example, flashbackDG), change the Flash Recovery Area size as needed, and then click Next.
19 In the Database Services window, configure your services (if required) and click Next.
20 In the Initialization Parameters window, perform the following steps:
a Select Custom.
b Under Shared Memory Management, select Automatic.
c In the SGA Size and PGA Size windows, enter the appropriate information.
d Click Next.
21 In the Database Storage window, click Next.
22 In the Creation Options window, select Create database and click Finish.
23 In the Summary window, click OK to create the database.
NOTE: This procedure may take an hour or more to complete.
NOTE: If you receive the message Enterprise Manager Configuration Error during the seed database creation, click OK to ignore the error.
When the database creation completes, the Password Management window appears.
24 Click Password Management to assign specific passwords to authorized users, if required. Otherwise, click Exit to close the Database Configuration Assistant window.
A message appears indicating that the cluster database is being started on all nodes.
25 On each node, perform the following steps:
a Determine which database instance exists on that node by typing:
srvctl status database -d <database name>
b Add the ORACLE_SID environment variable entry to the oracle user profile by typing:
echo "export ORACLE_SID=racdbx" >> /home/oracle/.bash_profile
source /home/oracle/.bash_profile
where racdbx is the database instance identifier assigned to the node.
This example assumes that racdb is the global database name that you defined in DBCA.
26 On one node, type:
srvctl status database -d dbname
where dbname is the global identifier name that you defined for the database in DBCA.
If the database instances are running, a confirmation message appears on the screen.
If the database instances are not running, type:
srvctl start database -d dbname
where dbname is the global identifier name that you defined for the database in DBCA.