Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.
Trademarks used in this text: Dell, the DELL logo, OpenManage, and PowerEdge are trademarks of Dell Inc.; EMC, PowerPath, and Navisphere
are registered trademarks of EMC Corporation; Intel and Xeon are registered trademarks of Intel Corporation; Red Hat is a registered trademark
of Red Hat, Inc.
Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products.
Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.
Configuring the Public Network
Configuring Database Storage
Configuring Shared Storage Using ASM
Installing Oracle Database 10g
Applying the 10.1.0.5 Patchset
Configuring the Listener
Creating the Seed Database
Setting the Password for the User oracle
Adding a New Node to the Network Layer
Configuring Shared Storage on the New Node
Configuring Shared Storage With ASM
Adding a New Node to the Clusterware Layer
Adding a New Node to the Database Layer
Removing a Node From the Cluster
Reinstalling the Software
Additional Information
Supported Software Versions
Configuring Automatic Reboot for a Hung Operating System
Determining the Private Network Interface
This document provides information about installing, configuring, reinstalling, and using Oracle Database 10g Enterprise Edition with Real Application Clusters (RAC) software on your Dell|Oracle supported configuration.

NOTE: Use this document in conjunction with the Dell™ Deployment CD to install your software. If you install your operating system using only the operating system CDs, the instructions in this document may not be applicable.

The following topics are covered:
•Software and hardware requirements
•Installing and configuring Red Hat® Enterprise Linux
•Verifying cluster hardware and software configurations
•Configuring networking and storage for Oracle RAC 10g
•Deploying Oracle RAC 10g database and patchsets on multiple nodes and creating a seed database
•Configuring and deploying Oracle Database 10g (single node)
•Adding and removing nodes
•Reinstalling the software
•Additional information
•Troubleshooting
•Getting help
•Obtaining and using open source files

For more information on Dell's supported configurations for Oracle Database 10g, see the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g.
Oracle RAC 10g Deployment Service
If you purchased the Oracle RAC 10g Deployment Service, your Dell Professional Services representative
will assist you with the following:
•Verifying the cluster hardware and software configurations
•Configuring networking and storage
•Installing Oracle RAC 10g Release 1
Software and Hardware Requirements
Before you install the Oracle RAC software on your system, follow the instructions in the Deploying
Dell-Tested and Validated Configurations for Oracle Database document shipped with your kit, to:
•Download the Red Hat CDs from the Red Hat website located at rhn.redhat.com.
•Locate your Oracle CD kit, or download the Oracle software from Oracle's website located at www.oracle.com.
•Download the Dell Deployment CD images from the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g, and burn the Dell Deployment CDs using the CD images.
Table 1-1 lists basic software requirements for Dell’s supported configurations for Oracle. Table 1-2 and
Table 1-3 list the hardware requirements. For detailed information on the minimum software versions
for drivers and applications, see "Supported Software Versions."
Table 1-1. Software Requirements

Software Component                                             Configuration
Red Hat Enterprise Linux AS (Version 4) operating              Quarterly Update 3
system for Intel® 32-bit technology (x86)
Oracle 10g Release 1 for 32-bit Linux                          Version 10.1.0.5
• Enterprise Edition, including the RAC option for clusters
• Enterprise Edition for single-node configuration
EMC® PowerPath® (Fibre Channel clusters only)                  Version 4.5.1
NOTE: Depending on the number of users, the applications you use, your batch processes, and other factors,
you may need a system that exceeds the minimum hardware requirements in order to achieve the desired
performance.
NOTE: The hardware configuration of all the cluster nodes must be identical.
Table 1-2. Minimum Hardware Requirements (Fibre Channel Cluster)

Dell PowerEdge™ 1750, 1850, 2600, 2650, 2800, 2850, 4600, 6600, 6650, 6800, and 6850 systems [two to eight nodes using Oracle Cluster File System (OCFS2) or Automatic Storage Management (ASM)], each with:
• 3-GHz Intel Xeon processor
• 1 GB of random-access memory (RAM)
• PowerEdge Expandable RAID Controller (PERC) for internal hard drives
• Two 36-GB hard drives (RAID 1) connected to a PERC
• Three Gigabit network interface controller (NIC) ports
Fibre Channel switch: 16 ports for seven or eight nodes
See the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g for information on supported configurations.

Table 1-3. Minimum Hardware Requirements (Single Node)

Dell PowerEdge system with:
• 3-GHz Intel Xeon processor
• 1 GB of RAM
• Two 36-GB hard drives (RAID 1) connected to a PERC
• Two NIC ports
See the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g for information on supported configurations.
License Agreements
NOTE: Your Dell configuration includes a 30-day trial license of the Oracle software. If you do not have a license
for this product, contact your Dell sales representative.
Important Documentation
For more information on specific hardware components, see the documentation that came with your system.
For Oracle product information, see the How to Get Started guide in the Oracle CD kit.
Installing and Configuring Red Hat Enterprise Linux
NOTICE: To ensure that the operating system is installed correctly, disconnect all external storage devices
from the system before you install the operating system.
This section describes the installation of the Red Hat Enterprise Linux AS operating system and
the configuration of the operating system for Oracle deployment.
Installing Red Hat Enterprise Linux Using the Deployment CDs
1 Disconnect all external storage devices from the system.
2 Locate your Dell Deployment CDs and original Red Hat Enterprise Linux AS 4 with Update 3 CDs.
3 Insert Dell Deployment CD 1 into the CD drive and reboot the system.
The system boots to Dell Deployment CD 1.
4 When prompted for Tested and Validated Configurations, type 4 and press <Enter> to select Oracle 10g R1 EE on Red Hat Enterprise Linux 4 32bit Update 3.
5 When prompted for Solution Deployment Image source, type 1 to select Copy solution by Deployment CD.
6 When prompted, insert Dell Deployment CD 2 and subsequently the Red Hat Installation CDs into the CD drive.
A deployment partition is created and the contents of the CDs are copied to it. When the copy operation is completed, the system automatically ejects the last CD and boots to the deployment partition.
When the installation is completed, the system automatically reboots and the Red Hat Setup Agent appears.
7 In the Red Hat Setup Agent Welcome window, click Next to configure your operating system settings.
8 When prompted, specify a root password.
9 When the Network Setup window appears, click Next. You will configure network settings later, as you cannot configure the network bonding in this window.
10 When the Security Level window appears, disable the firewall. You may enable the firewall after completing the Oracle deployment.
11 Log in as root.
Configuring Hugemem Kernel
The Red Hat Enterprise Linux 4 hugemem kernel is required to configure the Oracle relational database
management system (RDBMS) to increase the size of the buffer cache above the default 1.7 GB value.
Using Dell Deployment CD 1, the Red Hat Enterprise Linux 4 hugemem kernel is installed by default.
Change the default boot parameters in the bootloader configuration file /etc/grub.conf to enable this option.
NOTE: Dell recommends that the hugemem kernel be used only on systems with more than 16 GB of RAM.
This kernel has some overhead which may degrade the performance on systems with less memory.
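As a hedged illustration (the kernel version, disk paths, and entry order are placeholders that vary with your installed kernel), enabling the hugemem kernel means pointing the default entry in /etc/grub.conf at the hugemem title block:

default=0
title Red Hat Enterprise Linux AS (2.6.9-34.ELhugemem)
        root (hd0,0)
        kernel /vmlinuz-2.6.9-34.ELhugemem ro root=LABEL=/
        initrd /initrd-2.6.9-34.ELhugemem.img

Here default=0 selects the first title block, which is the hugemem entry in this sketch; set default to whichever index corresponds to the hugemem title in your file.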
Configuring Red Hat Enterprise Linux
Log in as root on all the nodes and perform the following procedure:
1 Insert the Dell Deployment CD 2 into the CD drive.
If you are using a CD, type:
/media/cdrom/install.sh
If you are using a DVD, type:
/media/cdrecorder/install.sh
The contents of the CD are copied to the /usr/lib/dell/dell-deploy-cd directory. When the copy procedure is completed, remove the CD from the CD drive by typing:
umount /dev/cdrom
2
Navigate to the directory containing the scripts installed from the Dell Deployment CD by typing:
cd /dell-oracle-deployment/scripts/standard
NOTE: Scripts discover and validate installed component versions and, when required, update components
to supported levels.
3
Configure the Red Hat Enterprise Linux for Oracle installation by typing:
./005-oraclesetup.py
4 Set up the environment variables by typing:
source /root/.bash_profile
5
Verify that the processor, RAM, and disk sizes meet the minimum Oracle installation requirements
by typing:
./010-hwCheck.py
If the script reports that a parameter failed, update your hardware configuration and run the script again.
6
If you are deploying the cluster using OCFS2, perform the following steps:
a
Install OCFS2 Red Hat Package Managers (RPMs) by typing:
./340-rpms_ocfs.py
b
To ensure smooth mounting of OCFS2, type:
./350-ocfs_networkwait.py
7 Connect the external storage.
Updating Your System Packages Using Red Hat Network
Red Hat periodically releases software updates to fix bugs, address security issues, and add new features.
You can download these updates through the Red Hat Network (RHN) service. See the Dell|Oracle
Tested and Validated Configurations website at www.dell.com/10g for the latest supported
configurations before you use RHN to update your system software to the latest revisions.
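For reference, a typical RHN update invocation on Red Hat Enterprise Linux 4, assuming the system is already registered with RHN, is:
up2date -u
This is a sketch only; the packages actually updated depend on your subscription, so verify the results against the supported versions in Table 1-6.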
NOTE: If you are deploying Oracle Database 10g on a single node, skip the following sections and see "Configuring
and Deploying Oracle Database 10g (Single Node)."
Verifying Cluster Hardware and Software Configurations
Before you begin the cluster setup, verify the hardware installation, communication interconnections,
and node software configuration for the entire cluster. The following sections provide setup information
for hardware and software Fibre Channel cluster configurations.
Fibre Channel Cluster Setup
Your Dell Professional Services representative completed the setup of your Fibre Channel cluster. Verify
the hardware connections, and the hardware and software configurations as described in this section.
Figure 1-1 shows an overview of the connections required for the cluster, and Table 1-4 summarizes
the cluster connections.
Figure 1-1. Hardware Connections for a Fibre Channel Cluster
[Figure: Each PowerEdge system (Oracle database) connects through Cat 5e cables (integrated NIC and copper gigabit NIC) to the public network (LAN/WAN) and to the Gb Ethernet switches (private network), and through fiber optic cables from HBA 0 and HBA 1 to Dell|EMC Fibre Channel switch 0 and switch 1 (SAN), which connect through additional fiber optic cables to storage processors SP-A and SP-B of the Dell|EMC Fibre Channel storage systems.]
NOTE: The arrangement of storage processors, HBAs, and Fibre Channel switches shown
above is used for illustrative purposes and may vary for different network configurations.
Table 1-4. Fibre Channel Hardware Interconnections

Each PowerEdge system node:
• One enhanced category 5 (Cat 5e) cable from the public NIC to the local area network (LAN)
• One Cat 5e cable from the private Gigabit NIC to the Gigabit Ethernet switch
• One Cat 5e cable from a redundant private Gigabit NIC to a redundant Gigabit Ethernet switch
• One fiber optic cable from HBA 0 to Fibre Channel switch 0
• One fiber optic cable from HBA 1 to Fibre Channel switch 1

Each Dell|EMC Fibre Channel storage system:
• Two Cat 5e cables connected to the LAN
• One to four optical connections to each Fibre Channel switch; for example, for a four-port configuration:
• One optical cable from SPA port 0 to Fibre Channel switch 0
• One optical cable from SPA port 1 to Fibre Channel switch 1
• One optical cable from SPB port 0 to Fibre Channel switch 1
• One optical cable from SPB port 1 to Fibre Channel switch 0

Each Dell|EMC Fibre Channel switch:
• One to four optical connections to the Dell|EMC Fibre Channel storage system
• One optical connection to each PowerEdge system's HBA

Each Gigabit Ethernet switch:
• One Cat 5e connection to the private Gigabit NIC on each PowerEdge system
• One Cat 5e connection to the remaining Gigabit Ethernet switch
Verify that the following tasks have been completed for your cluster:
•All hardware is installed in the rack.
•All hardware interconnections are set up as shown in Figure 1-1 and listed in Table 1-4.
•All logical unit numbers (LUNs), redundant array of independent disks (RAID) groups, and storage
groups are created on the Dell|EMC Fibre Channel storage system.
•Storage groups are assigned to the nodes in the cluster.
NOTICE: Before you perform the procedures in the following sections, ensure that the system hardware and
cable connections are installed correctly.
Fibre Channel Hardware and Software Configurations
•Each node must include the following minimum hardware peripheral components:
–One or two hard drives (36-GB minimum) in the internal hard-drive bay
–Three Gigabit NIC ports
–Two Fibre Channel HBAs
•Each node must have the following software installed:
–Red Hat Enterprise Linux software (see Table 1-1)
–Fibre Channel HBA driver
–OCFS2 module for the kernel and the configuration tools for OCFS2
NOTE: OCFS2 supports two kinds of kernels, namely hugemem and Symmetric MultiProcessing (SMP).
Choose the OCFS2 module type according to your kernel.
•The Fibre Channel storage must be configured with the following:
–A minimum of three LUNs created and assigned to the cluster
–A minimum LUN size of 5 GB
Configuring Networking and Storage for Oracle RAC 10g
This section provides information on setting up a Fibre Channel cluster running a seed database
and includes the following procedures:
•Configuring the Public and Private Networks
•Securing Your System
•Verifying the Storage Configuration
•Configuring Shared Storage Using OCFS2
•Configuring Shared Storage With ASM
Configuring the Oracle RAC 10g database is complex and requires an ordered list of procedures.
To configure networking and storage in a minimal amount of time, perform the following procedures
in sequence.
Configuring the Public and Private Networks
This section presents steps to configure the public and private cluster networks.
NOTE: Each node requires a unique public and private Internet Protocol (IP) address and an additional public
IP address to serve as the virtual IP address for the client connections and connection failover. The virtual IP
address must belong to the same subnet as the public IP. All public IP addresses, including the virtual IP address,
must be registered with DNS.
Depending on the number of NIC ports available, configure the network interfaces as shown in Table 1-5.
Table 1-5. NIC Port Assignments
NIC Port    Three Ports Available       Four Ports Available
1           Public IP and virtual IP    Public IP
2           Private IP (bonded)         Private IP (bonded)
3           Private IP (bonded)         Private IP (bonded)
4           NA                          Virtual IP
NOTE: The Oracle installer requires that the public interface name and the bond name for the private interface be
the same on all the cluster nodes. If the public interfaces are different, a workaround is to use bonding to abstract
the network interfaces and use this for Oracle installation.
Configuring the Public Network
If you have not already configured your public network, configure it by performing the following
procedure on each node:
1 Log in as root.
2 Edit the network device file /etc/sysconfig/network-scripts/ifcfg-eth#, where # is the number of the network device, and configure the file with your public IP settings (a representative sample file appears after this procedure).
3 Edit the /etc/sysconfig/network file, and, if necessary, replace localhost.localdomain with the fully qualified public node name. For example, the line for the first node would be as follows:
HOSTNAME=node1.domain.com
4 Type:
service network restart
5 Verify that the IP addresses are set correctly by typing:
ifconfig
6 Check your network configuration by pinging each public IP address from a client on the LAN outside the cluster.
7 Connect to each node to verify that the public network is functioning and verify that the secure shell (ssh) is working by typing:
ssh <public IP>
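The device file contents were not reproduced above, so the following is a minimal sketch of an ifcfg-eth0 file for a statically addressed public interface; the device name, address, and netmask are placeholders that you must replace with your own values:

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=<public IP address>
NETMASK=<subnet mask>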
Configuring the Private Network Using Bonding
Before you deploy the cluster, configure the private cluster network to allow the nodes to communicate
with each other. This involves configuring network bonding and assigning a private IP address and host
name to each node in the cluster. To set up network bonding for Broadcom or Intel NICs and to
configure the private network, perform the following procedure on each node:
1 Log in as root.
2 Add the following line to the /etc/modprobe.conf file:
alias bond0 bonding
3 For high availability, edit the /etc/modprobe.conf file and set the option for link monitoring.
The default value for miimon is 0, which disables link monitoring. Change the value to 100 milliseconds initially, and adjust it as needed to improve performance, as shown in the following example. Type:
options bonding miimon=100 mode=1
4 In the /etc/sysconfig/network-scripts/ directory, create or edit the ifcfg-bond0 configuration file.
For example, using sample network parameters, the file would appear as follows:
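The sample file contents were not reproduced here; a minimal sketch of an ifcfg-bond0 file, with placeholder addresses, is:

DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
IPADDR=<private IP address>
NETMASK=<subnet mask>

Each bonded member interface file (for example, ifcfg-eth1) would typically also carry MASTER=bond0 and SLAVE=yes entries; treat these names and values as assumptions to adapt to your configuration.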
5 Restart the network service by typing the following and ignore any warnings:
service network restart
6 Verify that the private interface is functioning by typing:
ifconfig
The private IP address for the node should be assigned to the private interface bond0 (an additional check appears after this procedure).
7 When the private IP addresses are set up on every node, ping each IP address from one node to ensure that the private network is functioning.
8 Connect to each node and verify that the private network and ssh are functioning correctly by typing:
ssh <private IP>
9 On each node, modify the /etc/hosts file by adding the following lines:
127.0.0.1 localhost.localdomain localhost
<private IP node1> <private hostname node1>
<private IP node2> <private hostname node2>
<public IP node1> <public hostname node1>
<public IP node2> <public hostname node2>
<virtual IP node1> <virtual hostname node1>
<virtual IP node2> <virtual hostname node2>
NOTE: The examples in this and the following step are for a two-node configuration; add lines for each
additional cluster node.
10 On each node, create or modify the /etc/hosts.equiv file by listing all of your public IP addresses or host names. For example, if you have one public host name, one virtual IP address, and one virtual host name for each node, add the following lines:
<public hostname node1> oracle
<public hostname node2> oracle
<virtual IP or hostname node1> oracle
<virtual IP or hostname node2> oracle
11 Log in as oracle, and connect to each node to verify that remote shell (rsh) is working by typing:
rsh <public hostname nodex>
where x is the node number.
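In addition to the ifconfig check in step 6, one way to inspect the bond state and the miimon link monitoring (an optional check, not part of the original procedure) is:
cat /proc/net/bonding/bond0
This file reports the bonding mode, the MII status, and the member interfaces.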
Securing Your System
To prevent unauthorized users from accessing your system, Dell recommends that you disable rsh after
you install the Oracle software. Disable rsh by typing:
chkconfig rsh off
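As a quick confirmation (an optional check, not part of the original procedure), you can verify that rsh is disabled by typing:
chkconfig --list rsh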
Verifying the Storage Configuration
While configuring the clusters, create partitions on your Fibre Channel storage. In order to create the
partitions, all cluster nodes must be able to detect the external storage devices. To verify that each node
can detect each storage LUN or logical disk, perform the following steps:
1
For Dell|EMC Fibre Channel storage, verify that the EMC Navisphere® agent and the correct version
of PowerPath (see Table 1-6) are installed on each node, and that each node is assigned to the correct
storage group in your Navisphere agent software. See the documentation that came with your
Dell|EMC Fibre Channel storage for instructions.
NOTE: The Dell Professional Services representative who installed your cluster performed this step. If you
reinstall the software on a node, you must complete this step.
2
Visually verify that the storage devices and cluster nodes are connected correctly to the Fibre Channel
switch (see Figure 1-1 and Table 1-4).
3 Verify that you are logged in as root.
4 On each node, type:
more /proc/partitions
A list of the LUNs or logical disks that are detected by the node is displayed, as well as the partitions that have been created on those external devices. PowerPath pseudo devices appear in the list, such as /dev/emcpowera, /dev/emcpowerb, and /dev/emcpowerc.
NOTE: The listed devices vary depending on how your storage is configured.
5 In the /proc/partitions file, ensure that:
•All PowerPath pseudo devices appear in the file with similar device paths. For example, /dev/emcpowera, /dev/emcpowerb, and /dev/emcpowerc.
•The Fibre Channel LUNs appear as small computer system interface (SCSI) devices, and each cluster node is configured with the same number of LUNs.
For example, if the node is configured with a SCSI drive or RAID container attached to a Fibre Channel storage device with three logical disks, sda identifies the node's RAID container or internal drive, and emcpowera, emcpowerb, and emcpowerc identify the LUNs (or PowerPath pseudo devices).
If the external storage devices do not appear in the /proc/partitions file:
1 On all the nodes, stop the PowerPath service by typing:
service naviagent stop
service PowerPath stop
2 On all the nodes, reload the HBA driver to synchronize the kernel's partition tables by typing:
•For QLogic HBAs:
rmmod qla2300
modprobe qla2300
•For Emulex HBAs:
rmmod lpfc
modprobe lpfc
3 On all the nodes, restart the PowerPath service by typing:
service PowerPath start
service naviagent start
4 Confirm that all the nodes detect the external storage devices by typing:
more /proc/partitions
Configuring Shared Storage Using OCFS2
Shared storage can be configured using either OCFS2 or ASM. This section provides procedures for configuring shared storage using OCFS2.
1 Log in as root on the first node.
2 Perform the following steps:
a Start the X Window System by typing:
startx
b Generate the OCFS2 configuration file (/etc/ocfs2/cluster.conf) with a default cluster name of ocfs2 by typing the following in a terminal window (a representative generated file appears after this procedure):
ocfs2console
c From the menu, click Cluster→ Configure Nodes.
If the cluster is offline, the console will start it. A message window appears displaying that information. Close the message window.
The Node Configuration window appears.
d To add nodes to the cluster, click Add. Enter the node name (same as the host name) and the private IP. Retain the default value of the port number. After entering all the details mentioned, click OK. Repeat this step to add all the nodes to the cluster.
e When all the nodes are added, click Apply and then click Close in the Node Configuration window.
f From the menu, click Cluster→ Propagate Configuration.
The Propagate Cluster Configuration window appears. Wait until the message Finished appears on the window and then click Close.
g Select File→ Quit.
3 On all the nodes, enable the cluster stack on startup by typing:
/etc/init.d/o2cb enable
4 Change the O2CB_HEARTBEAT_THRESHOLD value to 61 on all the nodes using the following steps:
a Stop the O2CB service on all the nodes by typing:
/etc/init.d/o2cb stop
b Edit the O2CB_HEARTBEAT_THRESHOLD value to 61 in /etc/sysconfig/o2cb on all the nodes.
c Start the O2CB service on all the nodes by typing:
/etc/init.d/o2cb start
5 On the first node, for a Fibre Channel cluster, create one partition on each of the other two external storage devices with fdisk:
a Create a primary partition for the entire device by typing:
fdisk /dev/emcpowerx
Type h for help within the fdisk utility.
b Verify that the new partition exists by typing:
cat /proc/partitions
If you do not see the new partition, type:
sfdisk -R /dev/<device name>
NOTE: The following steps use the sample values /u01 and /u02 for mount points and u01 and u02 as labels.
6 On any one node, format the external storage devices with 4 K block size, 128 K cluster size, and 4 node slots (node slots refer to the number of cluster nodes) using the command line utility mkfs.ocfs2.
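The format commands themselves were not reproduced here; a hedged sketch using the sample labels above, with placeholder device names, is:

mkfs.ocfs2 -b 4K -C 128K -N 4 -L u01 /dev/emcpowerx1
mkfs.ocfs2 -b 4K -C 128K -N 4 -L u02 /dev/emcpowery1

Adjust the device names, labels, and node-slot count (-N) to match your cluster. For reference, a minimal sketch of the /etc/ocfs2/cluster.conf file generated in step 2, for a two-node cluster with placeholder names and private IP addresses, is:

node:
        ip_port = 7777
        ip_address = <private IP node1>
        number = 0
        name = <hostname node1>
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = <private IP node2>
        number = 1
        name = <hostname node2>
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2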
The shared database partitions can either be configured as raw devices or can be configured using
the ASMLib software.
Configuring Shared Storage Using ASMLib
1 To configure your cluster using ASM, perform the following steps on all the nodes:
a Log in as root.
b Configure the ASM kernel module by typing:
/etc/init.d/oracleasm configure
The following message appears on the screen:
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM
library driver. The following questions will determine whether the
driver is loaded on boot and what permissions it will have. The
current values will be shown in brackets ('[]'). Hitting <ENTER>
without typing an answer will keep that current value. Ctrl-C will
abort.
A message appears prompting you to enter the default user owning the driver interface. Type oracle as shown below:
Default user to own the driver interface []: oracle
A message appears prompting you to enter the default group owning the driver interface. Type dba as shown below:
Default group to own the driver interface []: dba
A message appears prompting you to load the oracleasm driver on boot. To load the driver, type y as shown below:
Start Oracle ASM library driver on boot (y/n) [n]: y
A message appears prompting you to fix permissions of Oracle ASM disks on boot. Type y as shown below:
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
The following messages appear on the screen:
Writing Oracle ASM library driver configuration: [ OK ]
Creating /dev/oracleasm mount point: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
2 Label the partitions created earlier as ASM disks on any one node:
# /etc/init.d/oracleasm createdisk ASM1 /dev/emcpowerb1
Marking disk "/dev/emcpowerb1" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk ASM2 /dev/emcpowerc1
Marking disk "/dev/emcpowerc1" as an ASM disk: [ OK ]
3 Scan the ASM disks on all the other nodes:
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
4 On all the nodes, verify that all the ASM disks are visible by typing:
# /etc/init.d/oracleasm listdisks
A list of all the configured ASM disks appears.
5 To add an additional ASM disk (for example, ASM3), edit the
Installing Oracle RAC 10g
This section describes the steps required to install Oracle RAC 10g version 10.1.0.3, which involves installing CRS and installing the Oracle Database 10g software. Dell recommends that you create a seed database to verify that the cluster works correctly before you deploy it in a production environment.
Installing CRS
1 Log in as root on the first node.
2 Start the X Window System by typing:
startx
3 Open a terminal window and type:
xhost +
4 Mount the Oracle Cluster Ready Services CD.
5 Type:
su - oracle
6 Start the Oracle Universal Installer by typing:
unset ORACLE_HOME
If you are using a CD, type:
/media/cdrom/runInstaller
If you are using a DVD, type:
/media/cdrecorder/runInstaller
7 In the Welcome window, click Next.
8 In the Specify File Locations window, verify that the Oracle home path is /opt/oracle/product/10.1.0/crs_1 and click Next.
9 In the Language Selection window, select a language and click Next.
10 In the Cluster Configuration window, enter a global cluster name or accept the default name crs, enter the public and private node names for each node, and click Next.
The cluster name must be unique throughout the enterprise.
11 In the Specify Network Interface Usage window, click each interface type, select public, private, or Do not use, and then click Next.
NOTE: The public and private NIC assignments that you select in this step must be identical and available on all the nodes.
12 In the Oracle Cluster Registry window, enter the complete path of the OCR disk location (/dev/raw/ocr.dbf) and click Next.
NOTE: If you have used a shared OCFS2 partition for the OCR and the Voting Disk, enter the appropriate path.
13 In the Voting Disk window, enter a complete path for the partition to use for storing the Voting Disk (/dev/raw/votingdisk) and click Next.
14 In the Summary window, click Install.
When the installation is completed, a message appears indicating that you must run the root.sh script on all the nodes. The root.sh script automatically configures the cluster.
15 When prompted, open a new terminal window.
16 From the terminal window in step 15, as the user root, run the root.sh script on each node, beginning with the local node.
Wait for root.sh to finish running on each node before you run it on the next node.
17 In the Setup Privileges window, click OK.
18 In the End of Installation window, click Exit and confirm by clicking Yes.
Installing the Oracle Database 10g Software
1 Log in as root on the first node.
2 Mount the Oracle Database 10g CD 1.
3 Start the Oracle Universal Installer as the user oracle:
If you are using a CD, type:
/media/cdrom/runInstaller
If you are using a DVD, type:
/media/cdrecorder/runInstaller
4 In the Welcome window, click Next.
5 In the Specify File Locations window, verify that the complete Oracle home path is /opt/oracle/product/10.1.0/db_1 and click Next.
NOTE: The Oracle home in this step must be different from the Oracle home name that you identified during the CRS installation. You cannot install the Oracle 10g Enterprise Edition with RAC into the same home that you used for CRS.
6 In the Specify Hardware Cluster Installation Mode window, click Select All and click Next.
7 In the Select Installation Type window, select Enterprise Edition and click Next.
The status of various prerequisite checks being performed is displayed. When the checks are completed, you may receive a warning for a version mismatch of the openmotif package. Check the Warning option and click Next.
8 In the Select Database Configuration window, select Do not create a starter database and click Next.
9 In the Summary window, click Install.
10 When prompted, open a new terminal window.
11 Run root.sh on the first node.
a Press <Enter> to accept the default value for the local bin directory.
The Virtual Internet Protocol Configuration Assistant (VIPCA) starts.
b On the first VIPCA window, click Next.
c In the List of Available Network Interfaces window, select your public NIC or, if you have four NIC ports, the port reserved for the virtual IP address (see "Configuring the Public and Private Networks"), and click Next.
NOTE: The public and private NIC assignments that you select in this step must be identical and available on all nodes.
d In the Virtual IPs for Cluster Nodes window, enter an unused public virtual IP address and subnet mask for each node displayed and click Next.
The virtual IP address must be the same as you entered in the /etc/hosts.equiv file, and the subnet mask must be the same as the public mask.
e Click Finish in the summary window.
A progress window appears.
f When the configuration is completed, click OK and click Exit to exit the VIPCA.
g Run root.sh on each of the other nodes in your cluster.
Wait for root.sh to finish running on each node before you run it on the next node.
12 Click OK in the Setup Privileges window.
13 Click Exit in the End of Installation window and confirm by clicking Yes.
Applying the 10.1.0.5 Patchset
1 Download the 10.1.0.5 patchset (p4505133_10105_LINUX.ZIP) from the Oracle MetaLink website.
2 Copy the patchset to the folder /oracle_cds/10.1.0.5 on the first node.
3 Unzip the patchset by typing:
unzip p4505133_10105_LINUX.ZIP
4 Change the ownership of the 10.1.0.5 directory by typing:
chown -R oracle.dba /oracle_cds/10.1.0.5
5 Run the installer from the first node only.
It patches all the nodes that are a part of the RAC cluster. The 10.1.0.5 patchset patches the CRS as well as the database home.
NOTE: The 10.1.0.5 patchset supports rolling upgrades for the CRS of all the member nodes.
Patching CRS to 10.1.0.5
1 Log in as oracle on the first node.
2 Start the Oracle installer by typing:
/oracle_cds/10.1.0.5/Disk1/runInstaller
3 In the Welcome window, click Next.
4 In the Specify File Locations window, ensure that the source path points to the products.xml file of the 10.1.0.5 staging area.
5 In the Destination section, select the CRS home name from the drop-down menu. Ensure that the path points to the CRS home and click Next.
6 In the Selected Nodes window, ensure that all the member nodes of the 10.1.0.3 installation are displayed and click Next.
7 In the Summary window, click Install.
The installer will prompt you to stop the CRS services and run the root10105.sh script.
8 Log in as root on each node and run the root10105.sh script from the CRS home location.
9 Exit the installer after you run this script from all the nodes.
10 On all the nodes, perform the following steps:
a Verify the CRS installation by typing the following command from the /opt/oracle/product/10.1.0/crs_1/bin directory:
olsnodes -n -v
A list of the public node names of all nodes in the cluster appears.
b List all the services that are running by typing:
crs_stat
Patching the Database to 10.1.0.5 Patchset
1 Log in as oracle on the first node.
2 Stop the Oracle Notification Services (ONS) before upgrading the patchset by typing:
onsctl stop
3 Start the Oracle installer by typing:
/oracle_cds/10.1.0.5/Disk1/runInstaller
4 In the Welcome window, click Next.
5 In the Specify File Locations window, ensure that the source path points to the products.xml file of the 10.1.0.5 staging area.
6 In the Destination section, select the database home name from the drop-down menu. Make sure that the path points to the database home of the 10.1.0.3 installation and click Next.
7 In the Selected Nodes window, ensure that all the member nodes of the 10.1.0.3 installation are displayed and click Next.
8 In the Summary window, click Install.
The installer prompts you to run the root.sh script on all the nodes after the process is completed.
9 Log in as root on each node and run the root.sh script from the database home location.
10 Exit the installer after running this script from all the nodes.
Configuring the Listener
This section describes the steps to configure the listener, which is required for remote client connection
to a database.
On any one node, perform the following procedure:
1 Log in as root.
2 Start the X Window System by typing:
startx
3 Open a terminal window and type:
xhost +
4 As the user oracle, run:
source /home/oracle/.bash_profile
5 Start the Net Configuration Assistant by typing:
netca
6 Select Cluster Configuration and click Next.
7 On the TOPSNodes window, click Select All Nodes and click Next.
8 On the Welcome window, select Listener Configuration and click Next.
9 On the Listener Configuration, Listener window, select Add and click Next.
10 On the Listener Configuration, Listener Name window, type LISTENER in the Listener Name field and click Next.
11 On the Listener Configuration, Select Protocols window, select TCP and click Next.
12 On the Listener Configuration, TCP/IP Protocol window, select Use the standard port number of 1521 and click Next.
13 On the Listener Configuration, More Listeners? window, select No and click Next.
14 On the Listener Configuration Done window, click Next.
15 Click Finish.
Creating the Seed Database
This section contains procedures for creating the seed database using either OCFS2 or ASM and for
verifying the seed database.
Creating the Seed Database Using OCFS2
1 On the first node, as the user oracle, type dbca -datafileDestination /u01 to start the Database Configuration Assistant (DBCA).
2 In the Welcome window, select Oracle Real Application Cluster Database and click Next.
3 In the Operations window, click Create a Database and click Next.
4 In the Node Selection window, click Select All and click Next.
5 In the Database Templates window, click Custom Database and click Next.
6 In the Database Identification window, enter a Global Database Name such as racdb and click Next.
7 In the Management Options window, click Next.
8 In the Database Credentials window, click Use the Same Password for All Accounts, complete password selections and entries, and click Next.
9 In the Storage Options window, select Cluster File System and click Next.
10 In the Database File Locations window, click Next.
11 In the Recovery Configuration window, click Specify flash recovery area, click Browse and select /u02, specify the flash recovery size, and then click Next.
12 In the Database Content window, click Next.
13 In the Database Services window, click Next.
14 In the Initialization Parameters window, if your cluster has more than four nodes, change the Shared Pool value to 500 MB, and click Next.
15 In the Database Storage window, click Next.
16 In the Creation Options window, check Create Database and click Finish.
17 In the Summary window, click OK to create the database.
NOTE: The creation of the seed database may take more than an hour.
NOTE: If you receive an Enterprise Manager Configuration Error during the seed database creation, click OK
to ignore the error.
When the database creation is completed, the Password Management window appears.
18 Click Exit.
A message appears indicating that the cluster database is being started on all nodes.
19 On each node, perform the following steps:
a Determine which database instance exists on that node by typing:
srvctl status database -d <database name>
b Add the ORACLE_SID environment variable entry to the oracle user profile; the value is the database instance identifier assigned to the node (a sketch appears after this step). This example assumes that racdb is the global database name that you defined in DBCA.
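The exact command was not reproduced here; a minimal sketch, assuming the instance assigned to the node is racdb1 (a placeholder), is:
echo "export ORACLE_SID=racdb1" >> /home/oracle/.bash_profile
source /home/oracle/.bash_profile
Substitute the instance identifier reported by srvctl for your node.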
27 On any one node, type:
srvctl status database -d dbname
where dbname is the global identifier name that you defined for the database in DBCA.
If the database instances are running, confirmation appears on the screen.
If the database instances are not running, type:
srvctl start database -d dbname
where dbname is the global identifier name that you defined for the database in DBCA.
RAC Post Deployment Fixes and Patches
This section provides the required fixes and patch information for deploying Oracle RAC 10g.
Reconfiguring the CSS Miscount for Proper EMC PowerPath Failover
When an HBA, switch, or EMC storage processor (SP) failure occurs, the total PowerPath failover time
to an alternate device may exceed 105 seconds. The default cluster synchronization service (CSS) disk
time-out for Oracle 10g R1 version 10.1.0.3 is 45 seconds. To ensure that the PowerPath failover procedure
functions correctly, increase the CSS time-out to 120 seconds.
To increase the CSS time-out:
1 Shut down the database and CRS on all the nodes except one node.
2 On the running node, log in as the user root and type:
/opt/oracle/product/10.1.0/crs_1/bin/crsctl set css misscount 120
3 Reboot all the nodes for the CSS setting to take effect.
For more information, see Oracle MetaLink Note 294430.1 on the Oracle MetaLink website at
metalink.oracle.com.
Setting the Password for the User oracle
Dell strongly recommends that you set a password for the user oracle to protect your system. Complete
the following steps to create the password for the user oracle:
1 Log in as root.
2 Create the password for the user oracle by typing the following and performing the instructions on the screen:
passwd oracle
Configuring and Deploying Oracle Database 10g (Single Node)
This section provides information about completing the initial setup or completing the reinstallation
procedures as described in "Installing and Configuring Red Hat Enterprise Linux." This section covers
the following topics:
•Configuring the Public Network
•Configuring Database Storage
•Installing Oracle Database 10g
•Configuring the Listener
•Creating the Seed Database
•Setting the Password for the User oracle
Configuring the Public Network
Ensure that your public network is functioning and that an IP address and host name are assigned
to your system.
Configuring Database Storage
Configuring Database Storage Using ext3 File System
If you have additional storage, perform the following steps:
1 Log in as root.
2 Type:
cd /opt/oracle
3 Type:
mkdir oradata recovery
4 Using fdisk, create a partition where you want to store your database files (for example, sdb1 if your storage device is sdb).
5 Using fdisk, create a partition where you want to store your recovery files (for example, sdc1 if your storage device is sdc).
6 Verify the new partitions by typing:
cat /proc/partitions
If you do not see the new partitions, type:
sfdisk -R /dev/sdb
sfdisk -R /dev/sdc
7 Type:
mke2fs -j /dev/sdb1
mke2fs -j /dev/sdc1
8 Modify the /etc/fstab file by adding entries for the newly created file systems (sample entries appear after this procedure).
9 Type:
mount /dev/sdb1 /opt/oracle/oradata
mount /dev/sdc1 /opt/oracle/recovery
10 Type:
chown oracle.dba oradata recovery
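The fstab entries were not reproduced here; a minimal sketch for step 8, assuming the devices and mount points above, is:

/dev/sdb1    /opt/oracle/oradata     ext3    defaults    1 2
/dev/sdc1    /opt/oracle/recovery    ext3    defaults    1 2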
Configuring Shared Storage Using ASM
The partitions can be configured as raw devices or can be configured using the ASMLib software. It is
assumed that you have two storage devices (sdb and sdc) available to create a disk group for the database
files, and a disk group to be used for flashback recovery and archive log files, respectively.
Configuring Shared Storage Using ASMLib
1 To configure your cluster using ASM, perform the following steps on all the nodes:
a Log in as root.
b Configure the ASM kernel module by typing:
/etc/init.d/oracleasm configure
The following message appears on the screen:
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM
library driver. The following questions will determine whether
the driver is loaded on boot and what permissions it will have.
The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value.
Ctrl-C will abort.
A message appears prompting you to enter the default user owning the driver interface. Type oracle as shown below:
Default user to own the driver interface []: oracle
A message appears prompting you to enter the default group owning the driver interface. Type dba as shown below:
Default group to own the driver interface []: dba
A message appears prompting you to load the oracleasm driver on boot. To load the driver, type y as shown below:
Start Oracle ASM library driver on boot (y/n) [n]: y
A message appears prompting you to fix permissions of Oracle ASM disks on boot. Type y as shown below:
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
The following messages appear on the screen:
Writing Oracle ASM library driver configuration: [ OK ]
Creating /dev/oracleasm mount point: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
c Label the partitions created earlier as ASM disks:
# /etc/init.d/oracleasm createdisk ASM1 /dev/emcpowerb1
Marking disk "/dev/emcpowerb1" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk ASM2 /dev/emcpowerc1
Marking disk "/dev/emcpowerc1" as an ASM disk: [ OK ]
2 Scan the ASM disks on all the other nodes:
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
3 On all the nodes, verify that all the ASM disks are visible by typing:
# /etc/init.d/oracleasm listdisks
A list of all the configured ASM disks appears.
Configuring Shared Storage Using Raw Devices
1 Log in as root.
2 Type the following commands to change the names of the raw character devices to make them

Creating the Seed Database Using ASM
When creating the seed database with DBCA on ASM storage:
•In the window listing the available disk groups, check the disk group that you would like to use for database storage and click Next.
•In the disk group creation window for the flashback recovery files, select External Redundancy, select the disk /dev/raw/ASM2, type flashbackDG as the disk group name, and click OK.
•In the Database File Locations window, select Use Common Location for All Database Files and click Next.
•In the Recovery Configuration window, click Browse, select the flashback group that you created (flashbackDG), and click Next.
•In the Creation Options window, select Create Database and click Finish.
•In the Summary window, click OK to create the database.
When the database creation is completed, the Password Management window appears.
As the user oracle, type:
source /home/oracle/.bash_profile
This example assumes that oradb is the global database name that you defined in DBCA.
Setting the Password for the User oracle
Dell strongly recommends that you set a password for the user oracle to protect your system. Complete
the following steps to create the password for the user oracle:
1 Log in as root.
2 Create the password for the user oracle by typing the following and performing the instructions that appear on the screen:
passwd oracle
Adding and Removing Nodes
This section describes the steps to add a node to an existing cluster and the steps to remove a node from
a cluster.
To add a node to an existing cluster:
•Add the node to the network layer.
•Configure shared storage.
•Add the node to the clusterware, database, and database instance layers.
To remove a node from an existing cluster, reverse the process by removing the node from the database
instance, the database, and the clusterware layers.
For more information about adding an additional node to an existing cluster, see the document titled
Oracle Real Application Clusters 10g Administration located on the Oracle website at www.oracle.com.
Adding a New Node to the Network Layer
To add a new node to the network layer:
1 Install the Red Hat Enterprise Linux operating system on the new node. See "Installing and Configuring Red Hat Enterprise Linux."
2
Configure the public and private networks on the new node. See "Configuring the Public and Private
Networks."
3
Verify that each node can detect the storage LUNs or logical disks. See "Verifying the Storage
Configuration."
Configuring Shared Storage on the New Node
To extend an existing RAC database to your new nodes, configure storage for the new nodes so that the
storage is the same as on the existing nodes. This section provides the appropriate procedures for either
ASM or OCFS2.
Configuring Shared Storage With ASM
Configuring Shared Storage for CRS
To configure shared storage with ASM, perform the following steps on the new node:
1 Verify the new partitions by typing:
more /proc/partitions
If the new partitions do not appear in the /proc/partitions file, type:
sfdisk -R /dev/<device name>
2 Start the raw devices by typing:
udevstart
3 Edit the /etc/sysconfig/rawdevices file and add the following lines for a Fibre Channel cluster:
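The raw device bindings were not reproduced here; a hedged sketch, assuming the CRS partitions reside on an emcpowera pseudo device (the device name, partition numbers, and binding names are placeholders), is:

/dev/raw/votingdisk /dev/emcpowera1
/dev/raw/ocr.dbf /dev/emcpowera2
/dev/raw/spfile+ASM.ora /dev/emcpowera3

Adapt the bindings to match the partitions used by the existing nodes, then restart the raw devices with udevstart.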
The shared database partitions can either be configured as raw devices or can be configured using
the ASMLib software.
Configuring Shared Storage Using ASMLib
To configure your cluster using ASM, perform the following steps on the new node:
1 Log in as root.
2 Configure the ASM kernel module by typing:
/etc/init.d/oracleasm configure
The following message appears on the screen:
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
A message appears prompting you to enter the default user owning the driver interface. Type oracle as shown below:
Default user to own the driver interface []: oracle
A message appears prompting you to enter the default group owning the driver interface. Type dba as shown below:
Default group to own the driver interface []: dba
A message appears prompting you to load the oracleasm driver on boot. To load the driver, type y as shown below:
Start Oracle ASM library driver on boot (y/n) [n]: y
A message appears prompting you to fix permissions of Oracle ASM disks on boot. Type y
as mentioned below:
Fix permissions of Oracle ASM disks on boot (y/n) [y]:y
The following messages appear on the screen:
Writing Oracle ASM library driver configuration: [ OK ]
Creating /dev/oracleasm mount point: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
3 Scan the ASM disks by typing:
/etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
4 Verify that all the ASM disks are visible by typing:
/etc/init.d/oracleasm listdisks
A list of all the configured ASM disks appears.
Configuring Shared Storage Using Raw Devices
Log in as root on the new node and perform the following procedure:
1 Edit the /etc/sysconfig/rawdevices file and add the entries for a Fibre Channel cluster exactly as they appear on the existing nodes (see the sample bindings in "Configuring Shared Storage for CRS").
Configuring Shared Storage With OCFS2
If you are using OCFS2 for either CRS, quorum, or database files, ensure that the new node can access the cluster file systems in the same way as the existing nodes.
1 Edit the /etc/fstab file on the new node and add the OCFS2 volume information exactly as it appears on the existing nodes.
2 Create OCFS2 mount points on the new node as they exist on the existing nodes (for example, /u01, /u02, and /u03).
3 Stop all the database instances by typing the following command as the user oracle on one of the existing nodes:
srvctl stop database -d <database name>
4 Stop CRS and unmount all the OCFS2 partitions by typing the following commands on all the nodes:
/etc/init.d/init.crs stop
umount -a -t ocfs2
5 To add the new node to the OCFS2 configuration file /etc/ocfs2/cluster.conf, perform the following steps on one of the existing nodes:
a Start the X Window System by typing:
startx
b Generate the OCFS2 configuration file (/etc/ocfs2/cluster.conf) with a default cluster name of ocfs2 by typing the following in a terminal window:
ocfs2console
c From the menu, click Cluster→ Configure Nodes.
If the cluster is offline, the console will start it. A message window appears displaying that information. Close the message window.
The Node Configuration window appears.
d To add a node to the cluster, click Add. Enter the new node name (same as the host name) and the private IP. Retain the default value of the port number. After entering all the details mentioned, click OK.
e Click Apply and then click Close in the Node Configuration window.
f From the menu, click Cluster→ Propagate Configuration.
The Propagate Cluster Configuration window appears. Wait until the message Finished appears on the window and then click Close.
g Select File→ Quit.
6 On the new node, enable the cluster stack on startup by typing:
/etc/init.d/o2cb enable
7 Change the O2CB_HEARTBEAT_THRESHOLD value on the new node using the following steps:
a Stop the O2CB service on all the nodes by typing:
/etc/init.d/o2cb stop
b Edit the O2CB_HEARTBEAT_THRESHOLD value to 61 in /etc/sysconfig/o2cb on all the nodes.
c Start the O2CB service on all the nodes by typing:
/etc/init.d/o2cb start
8 Restart the O2CB service on all the existing nodes by typing:
/etc/init.d/o2cb stop
/etc/init.d/o2cb start
9 On all the nodes, mount all the volumes listed in the /etc/fstab file by typing:
mount -a -t ocfs2
10 On the new node, add the following command to the /etc/rc.local file:
mount -a -t ocfs2
11 On all the nodes other than the newly added one, start CRS and the database by performing the following steps:
a As the user root, type:
/etc/init.d/init.crs start
b As the user oracle, type:
srvctl start database -d <database_name>
Adding a New Node to the Clusterware Layer
1 Log in as oracle on one of the existing nodes.
2 Start the Oracle Universal Installer from the /opt/oracle/product/10.1.0/crs_1/oui/bin directory by typing:
addNode.sh
3 In the Welcome window, click Next.
4 In the Specify Cluster Nodes for Node Addition window, enter the public and private node names for the new node and click Next.
If all the network and storage verification checks pass, the Node Addition Summary window appears.
5 Click Next.
The Cluster Node Addition Progress window displays the status of the cluster node addition process.
6 When prompted, run rootaddnode.sh on the local node.
When rootaddnode.sh finishes running, click OK.
7 When prompted, run root.sh on the new node.
When root.sh finishes running, click OK.
8 In the End of Cluster Node Addition window, click Exit.
9 From the /opt/oracle/product/10.1.0/crs_1/oui/bin directory on one of the existing nodes, type (for example) the following line:
racgons add_config node3-pub:4948
In this example, node3 is being added to an existing two-node cluster.
Adding a New Node to the Database Layer
1 Log in as oracle on one of the existing nodes.
2 Start the Oracle Universal Installer from the /opt/oracle/product/10.1.0/db_1/oui/bin directory by typing:
addNode.sh
3 In the Welcome window, click Next.
4 In the Specify Cluster Nodes for Node Addition window, click the new node and click Next.
If all the verification checks pass, the Node Addition Summary window appears.
5 Click Next.
The Cluster Node Addition Progress window displays the status of the cluster node addition process.
6 When prompted, run root.sh on the new node.
When root.sh finishes running, click OK.
7 In the End of Cluster Node Addition window, click Exit.
8 From the /opt/oracle/product/10.1.0/db_1/bin directory on one of the existing nodes, type the following command as the user root:
./vipca -nodelist node1-pub,node2-pub,node3-pub
In this example, node3 is being added to an existing two-node cluster.
VIPCA starts.
a On the first VIPCA window, click Next.
b In the List of Available Network Interfaces window, select your public NIC and click Next.
NOTE: The public and private NIC assignments that you select in this step must be identical and available on all nodes.
c In the IP Address window, enter an unused public virtual IP address and subnet mask for the new node and click Next.
d Click Finish in the summary window.
A progress window appears.
e When the configuration is completed, click OK and click Exit to exit the VIPCA.
Adding a New Node to the Database Instance Layer
1 On one of the existing nodes, start DBCA as the user oracle by typing:
dbca
2 In the Welcome window, select Oracle Real Application Cluster Database and click Next.
3 In the Operations window, click Instance Management and click Next.
4 In the Instance Management window, click Add Instance and click Next.
5 In the List of Cluster Databases window, select the existing database and click Next.
If your user name is not operating-system authenticated, the DBCA prompts you for a user name and password for a database user with SYSDBA privileges.
6 Enter the user name sys and the password, and click Next.
The List of Cluster Database Instances window appears, showing the instances associated with the RAC database that you selected and the status of each instance.
7 Click Next.
8 In the Adding an Instance window, enter the instance name at the top of the window, select the new node name, and click Next.
9 In the Services window, click Next.
10 In the Instance Storage window, click Next.
11 In the Summary window, click Finish. When prompted, click OK to add the database instance.
A progress bar appears, followed by a message asking if you want to perform another operation.
12 Click No to exit DBCA.
13 On any one node, type the following to determine if the database instance has been successfully added:
srvctl status database -d <database name>
Removing a Node From the Cluster
Deleting a Node From the Database Instance Layer
Log in as oracle on the first node and perform the following procedure:
1
Ty p e :
dbca
In the
2
3
4
Welc om e
In the
Operations
In the
Instance Management
window, click
window, click
Next
.
Instance Management
window, click
Delete Instance
and click
and click
Next
.
Next
.
Deployment Guide47
Page 48
5 In the List of Cluster Databases window, select a RAC database from which to delete an instance.
If your user name is not operating-system authenticated, DBCA prompts you for a user name and password for a database user with SYSDBA privileges.
6 Enter the user name sys and the password, and click Next.
The List of Cluster Database Instances window appears, showing the instances associated with the RAC database that you selected and the status of each instance.
7 Select the instance to delete and click Finish.
This instance cannot be the local instance from where you are running DBCA. If you select the local instance, the DBCA displays an Error dialog. If this occurs, click OK, select another instance, and click Finish.
If services are assigned to this instance, the DBCA Services Management window appears. Use this window to reassign services to other instances in the cluster database.
8 Verify the information about the instance deletion operation and click OK.
A progress bar appears while DBCA removes the instance and its Oracle Net configuration. When the operation is completed, a dialog asks whether you want to perform another operation.
9 Click No to exit.
10 Verify that the node was removed by typing:
srvctl config database -d <database name>
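With the same illustrative racdb database, the remaining instances and their Oracle homes are listed after the deletion; the exact layout of the output varies with the Oracle release:
srvctl config database -d racdb
node1-pub racdb1 /opt/oracle/product/10.1.0/db_1
node2-pub racdb2 /opt/oracle/product/10.1.0/db_1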
Deleting a Node From the Database Layer
1 On the node being deleted, log in as oracle.
2 Type the following command, using the public name of the node you are deleting (node3-pub, for example):
srvctl stop nodeapps -n node3-pub
3 On the node being deleted, log in as root.
4 Type the following command, using the public name of the node you are deleting.
Reinstalling the Software
NOTICE: Reinstalling the software erases all data on the hard drives.
NOTICE: You must disconnect all external storage devices from the system before you reinstall the software.
NOTICE: Dell recommends that you perform regular backups of your database and individual nodes so that you do
not lose valuable data. Reinstall the node software only if you have no other options.
Installing the software using the Dell Deployment CD created a redeployment partition on your hard drive
that contains all of the software images that were installed on your system. The redeployment partition
allows for quick redeployment of the Oracle software.
Reinstalling the software through the redeployment partition requires that you boot the system to
the partition. When the system boots to this partition, it automatically reinstalls the Red Hat Linux
operating system.
To reinstall software from the redeployment partition, perform the following steps:
1 Disconnect the external storage.
2 Log in as root on the system on which you want to reinstall the software.
3 Edit the GRand Unified Bootloader (GRUB) configuration file by typing vi /etc/grub.conf and pressing <Enter>.
4 In the file, change the value of default to 3.
5 Save the file and restart your system.
For information about configuring the system for use, see "Configuring Hugemem Kernel" and continue through the remaining sections to reconfigure your system.
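For reference, the top of /etc/grub.conf after the edit might look like the following; everything other than the default=3 line is illustrative, and the redeployment entry's index on your system may differ, so count the title entries in your own file (indexing starts at 0):
default=3
timeout=10
title Red Hat Enterprise Linux AS (2.6.9-34.ELhugemem)
        kernel /vmlinuz-2.6.9-34.ELhugemem ro root=LABEL=/
# (additional title entries omitted; with default=3, GRUB boots
# the fourth title entry in the file)
title Redeployment Partition
        rootnoverify (hd0,3)
        chainloader +1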
Additional Information
Supported Software Versions
NOTE: For this release of Dell supported configurations for Oracle, Emulex HBAs are not supported.
Table 1-6 lists the supported software at the time of release. For the latest supported hardware and
software, see the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g and
download the Oracle Database 10g EM64T x86 Version 1.2 Solution Deliverable List for the latest
supported versions.
Table 1-6. Supported Software Versions
Software Component: Supported Versions
Red Hat Enterprise Linux AS (Version 4) for Intel x86 operating system: Quarterly Update 3
Intel PRO/1000 MT NIC drivers (e1000): 6.1.16-k3-NAPI
Broadcom NetXtreme BCM5704 NIC drivers (5703, 5701) (tg3): 3.43-rh
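To compare the NIC driver versions loaded on a running system against this table, you can query the module information; this check is a general suggestion rather than part of the validated procedure:
/sbin/modinfo -F version e1000
/sbin/modinfo -F version tg3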
Configuring Automatic Reboot for a Hung Operating System
Install managed system software for Red Hat Enterprise Linux by performing the following steps:
1 Log in with administrator privileges to the system where you want to install the managed system components.
2
Exit any open application programs and disable any virus-scanning software.
3
Start the X Window System by typing:
startx
4
Open a terminal window and type:
xhost +
5 Insert the Dell PowerEdge Installation and Server Management CD into the CD drive on the system.
6 Mount the CD by typing:
mount /dev/cdrom
7 Click start.sh located in the root directory of the CD to start the setup program.
8 Click Next on the Welcome to Dell OpenManage Systems Management Installation window.
9 Read and accept the software license agreement to continue.
The setup program provides both an Express Setup option and a Custom Setup option. The Express Setup option (recommended) automatically installs all of the software components necessary to manage your system. The Custom Setup option allows you to select which software components you want to install.
The rest of this procedure is based on the Express Setup option. See the Dell OpenManage™ Server Administrator User's Guide for information about the Custom Setup option.
10 Click Express Setup.
11 Read the information on the Installation Summary screen, and then click Next.
The setup program automatically installs all of the managed system software for your hardware configuration.
12 When the installation is completed, click Finish.
See the Dell OpenManage Server Administrator User's Guide for instructions about uninstalling
the managed system software.
To configure the automatic reboot option, perform the following steps:
1 Type:
omconfig system recovery action=reboot
This command sets the automatic reboot timer to a default setting of 480 seconds—the time delay
before the timer automatically reboots an unresponsive system.
2
To change the timer setting to a different value, type:
omconfig system recovery timer=<seconds>
3 To verify the system reboot timer settings, type:
omreport system recovery
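For example, the following sequence enables automatic reboot, shortens the delay to 300 seconds, and then confirms the settings; the 300-second value is illustrative, and the omreport output format depends on your Server Administrator version:
omconfig system recovery action=reboot
omconfig system recovery timer=300
omreport system recovery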
Determining the Private Network Interface
To determine which interface device name is assigned to each network interface, perform the following steps:
1 Determine which types of NICs are in your system.
See Table 1-7 to identify the integrated NICs that are present in your system. For add-in NICs,
you may have Intel PRO/100 family or PRO/1000 family cards or Broadcom NetXtreme Gigabit cards.
You may have to open your system and view the add-in cards to determine which you have.
Table 1-7. Integrated NICs
System              Integrated NICs
PowerEdge 1750      Broadcom NetXtreme Gigabit (2)
PowerEdge 1850      Intel PRO/1000 (2)
PowerEdge 2600      Intel PRO/1000
PowerEdge 2650      Broadcom NetXtreme Gigabit (2)
PowerEdge 2800      Intel PRO/1000 (2)
PowerEdge 2850      Intel PRO/1000 (2)
PowerEdge 4600      Broadcom NetXtreme Gigabit (2)
PowerEdge 6600      Broadcom NetXtreme Gigabit (2)
PowerEdge 6650      Broadcom NetXtreme Gigabit (2)
PowerEdge 6800      Broadcom NetXtreme Gigabit (2)
PowerEdge 6850      Broadcom NetXtreme Gigabit (2)
2
Verify that a Broadcom NetXtreme Gigabit or Intel PRO/1000 family NIC is connected with a Cat 5e
cable to the Gigabit Ethernet switch. This is your private NIC.
3
Determine which driver module your private NIC uses.
The Broadcom NetXtreme Gigabit uses tg3, and the Intel PRO/1000 family uses e1000.
4 View the /etc/modprobe.conf file by typing:
more /etc/modprobe.conf
Several lines appear with the format alias ethX driver-module, where X is the Ethernet interface number and driver-module is the module you determined in step 3. For example, the line alias eth1 tg3 appears if your operating system assigned eth1 to a Broadcom NetXtreme Gigabit NIC. (A quick way to list these alias lines is sketched after this procedure.)
5
Determine which Ethernet interfaces (ethX) have been assigned to the type of Gigabit NIC that is
connected to the Gigabit switch.
If there is only one entry in /etc/modprobe.conf for your driver module type, then you have successfully identified the private network interface.
6
If you have more than one of the same type of NIC in your system, experiment to determine which
Ethernet interface is assigned to each NIC.
For each Ethernet interface, follow the steps in "Configuring the Private Network Using Bonding"
for the correct driver module until you have identified the correct Ethernet interface.
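As referenced in step 4, a quick way to list only the interface-to-module mappings is to filter the file; the sample output assumes one Intel and one Broadcom NIC:
grep '^alias eth' /etc/modprobe.conf
alias eth0 e1000
alias eth1 tg3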
Troubleshooting
Table 1-8 provides recommended actions for problems that you may encounter while deploying and
using your Red Hat Enterprise Linux and the Oracle software.
Table 1-8. Troubleshooting

Problem: Red Hat Enterprise Linux exhibiting poor performance and instability. Excessive use of swap space.
Cause: The Oracle System Global Area (SGA) exceeds the recommended size.
Action:
• Ensure that the SGA size does not exceed 65% of total system RAM.
• Type free at a command prompt to determine total RAM, and reduce the values of the db_cache_size and shared_pool_size parameters in the Oracle parameter file accordingly.

Problem: Unknown interface type warning appears in Oracle alert file. Poor system performance.
Cause: The public interface is configured as cluster communications (private interface).
Action: Force cluster communications to the private interface by performing the following steps on one node:
1 Log in as oracle.
2 Type sqlplus "/ as sysdba" at the command prompt. The SQL> prompt appears.
3 Enter the following lines at the SQL> prompt:
alter system set cluster_interconnects='<private IP address node1>' scope=spfile sid='<SID1>'
alter system set cluster_interconnects='<private IP address node2>' scope=spfile sid='<SID2>'
Continue entering lines for each node in the cluster.
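A minimal sketch of that SQL*Plus session for a two-node cluster follows; the private addresses 192.168.0.1 and 192.168.0.2 and the SIDs racdb1 and racdb2 are placeholders for your own values:
sqlplus "/ as sysdba"
SQL> alter system set cluster_interconnects='192.168.0.1' scope=spfile sid='racdb1';
SQL> alter system set cluster_interconnects='192.168.0.2' scope=spfile sid='racdb2';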
Problem: NETCA fails, resulting in database creation errors.
Cause: The public network, host name, or virtual IP is not listed in the /etc/hosts.equiv file.
Action: Before launching netca, ensure that a host name is assigned to the public network and that the public and virtual IP addresses are listed in the /etc/hosts.equiv file.

Problem: NETCA fails to configure remote nodes, or a raw device validation error occurs while running DBCA.
Cause: The /etc/hosts.equiv file either does not exist or does not include the assigned public or virtual IP addresses.
Action: Verify that the /etc/hosts.equiv file on each node contains the correct public and virtual IP addresses. Try to rsh to other public names and virtual IP addresses as the user oracle.

Problem: When you run root.sh, CRS fails to start.
Cause: Check and make sure you have public and private node names defined and that you can ping the node names.
Action: Attempt to start the service again by rebooting the node or by running root.sh from /opt/oracle/product/10.1.0/crs_1/ after correcting the networking issues.

Problem: When you run root.sh, CRS fails to start.
Cause: The OCR file and Voting Disk are inaccessible.
Action: Correct the I/O problem and attempt to start the service again by rebooting the node or by running root.sh from /opt/oracle/product/10.1.0/crs_1/.

Problem: CRS fails to start when you reboot the nodes or type /etc/init.d/init.crs start.
Cause: The Cluster Ready Services CSS daemon cannot write to the quorum disk.
Action:
• Attempt to start the service again by rebooting the node or by running root.sh from /opt/oracle/product/10.1.0/crs_1/.
• Verify that each node has access to the quorum disk and that the user root can write to the disk.
• Check the last line in the file $ORA_CRS_HOME/css/log/ocssd.log.
• If you see clssnmvWriteBlocks: Failed to flush writes to (votingdisk), verify the following:
– The /etc/hosts file on each node contains correct IP addresses for the host names of all the nodes, including the virtual IP addresses.
– You can ping the public and private host names.
– The quorum disk is writable.

Problem: CRS fails to start when you reboot the nodes or type /etc/init.d/init.crs start.
Cause: The node does not have access to the quorum disk on shared storage.
Action:
1 Start Linux in single user mode.
2 Type: /etc/init.d/init.crs disable
3 Verify that the quorum disk is available for read and write. If it is not available, check hardware connections and ensure that OCFS volumes are mounted.
4 Reboot and type /etc/init.d/init.crs enable
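The following commands, run as root on the affected node, correspond to the checks above; node1-prv and node1-pub are placeholders for your cluster's actual private and public host names:
ping -c 3 node1-prv
ping -c 3 node1-pub
tail -1 $ORA_CRS_HOME/css/log/ocssd.log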
Problem: DBCA: There is no response when you click OK in the DBCA Summary window.
Cause: Java Runtime Environment timing issue.
Action: Click again. If there is still no response, restart DBCA.

Problem: DBCA: While creating the seed database using DBCA on OCFS volumes, you get error ORA-60, ORA-06512, or ORA-34740.
Cause: Known intermittent issue.
Action: Click Ignore; the seed database is created normally.
Problem: Software installation: You receive dd failure error messages while installing the software using Dell Deployment CD 1.
Cause: Using copies, rather than the original Red Hat CDs.
Action: Use the original Red Hat CDs included with your system.

Problem: Software installation: When connecting to the database as a user other than oracle, you receive the error messages ORA01034: ORACLE not available and Linux Error 13: Permission denied.
Cause: Required permissions are not set on the remote node.
Action: On all remote nodes, as the user root, type:
chmod 6751 $ORACLE_HOME
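To confirm the resulting mode on each remote node, list the Oracle home directory; this verification is a general suggestion rather than part of the documented fix. With mode 6751, the permissions column should read drwsr-s--x:
ls -ld $ORACLE_HOME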
Problem: You receive I/O errors and warnings when you load the Fibre Channel HBA driver module.
Cause: The HBA driver, BIOS, or firmware needs to be updated.
Action: Check the Solution Deliverable List for the supported versions on the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g. Update the driver, BIOS, and firmware for the Fibre Channel HBAs as required.

Problem: You receive the error message ORA-04031 unable to allocate 4180 bytes of shared memory.
Cause: The default memory allocation for an 8-node cluster is too small.
Action: In the Initialization Parameters window, change the value of the Shared Pool to 500 MB from the default value of 95 MB and click Next.

Problem: The message mount.ocfs2: Transport endpoint is not connected appears while mounting /dev/emcpowera1 on /u01/.
Cause: The private interconnect is not up at the mount time.
Action: Ignore the error message. The mount problem is handled in the deployment procedure.
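If you prefer to apply the shared pool change outside DBCA, an equivalent setting can be made in SQL*Plus; this alternative is a sketch rather than the documented procedure, assumes the database uses an spfile, and requires a restart of the instances to take effect:
sqlplus "/ as sysdba"
SQL> alter system set shared_pool_size=500M scope=spfile;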
Getting Help
Dell Support
For detailed information on the use of your system, see the documentation that came with your system
components.
For white papers, Dell supported configurations, and general information, visit the Dell and Oracle
website at www.dell.com/oracle.
For Dell technical support for your hardware and operating system software and to download the latest
updates for your system, visit the Dell Support website at support.dell.com. Information about
contacting Dell is provided in your system’s Installation and Troubleshooting Guide.
Dell Enterprise Training and Certification is now available; see www.dell.com/training for more
information. This training service may not be offered in all locations.
Oracle Support
For training information for your Oracle software and application clusterware, see the Oracle website at
www.oracle.com or see your Oracle documentation for information on contacting Oracle.
Technical support, downloads, and other technical information are available on the Oracle MetaLink website at metalink.oracle.com.
Obtaining and Using Open Source Files
The software contained on the Dell Deployment CD is an aggregate of third-party programs as well as
Dell programs. Use of the software is subject to designated license terms. All software that is designated
as "under the terms of the GNU GPL" may be copied, distributed, and/or modified in accordance with
the terms and conditions of the GNU General Public License, Version 2, June 1991. All software that is
designated as "under the terms of the GNU LGPL" (or "Lesser GPL") may be copied, distributed, and/or
modified in accordance with the terms and conditions of the GNU Lesser General Public License,
Version 2.1, February 1999. Under these GNU licenses, you are also entitled to obtain the corresponding
source files by contacting Dell at 1-800-WWW-DELL. Please refer to SKU 420-4534 when making such a request. You may be charged a nominal fee for the physical act of transferring a copy.
Index
A
adding and removing
nodes, 41
additional configuration
options
adding and removing
nodes, 41
additional information, 51
configuring automatic
reboot, 52
determining the private
network interface, 53
ASM
configuring database
storage, 34
ASM configuration, 21
B
bonding, 15
C
cluster
Fibre Channel hardware
connections, example, 11
cluster setup
Fibre Channel, 10
configuring
ASM, 21
database storage
(single node), 33
database storage (single node)
using ASM, 34
database storage (single node)
using ext3, 33
OCFS, 19
Oracle Database 10g
(single node), 33
Oracle RAC 10g, 13
Red Hat Enterprise Linux, 9
shared storage using ASM, 21
shared storage using OCFS, 19
1 Configure the shared storage using the ASMLib software. The following output appears:
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library driver. The following questions will determine whether the driver is loaded on boot and what permissions it will have. The current values will be shown in brackets ('[]'). Hitting <ENTER> without typing an answer will keep that current value. Ctrl-C will abort.
A message prompts you to enter the default user to own the driver interface. Type oracle as shown:
Default user to own the driver interface []: oracle
A message prompts you to enter the default group to own the driver interface. Type dba as shown:
Default group to own the driver interface []: dba
A message prompts you to load the oracleasm driver on boot. To load the driver, type y as shown:
Start Oracle ASM library driver on boot (y/n) [n]: y
A message prompts you to fix the permissions of the Oracle ASM disks on boot. Type y as shown:
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Creating /dev/oracleasm mount point: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
2 On any one node, mark the partitions that you created earlier as ASM disks:
# /etc/init.d/oracleasm createdisk ASM1 /dev/emcpowerb1
Marking disk "/dev/emcpowerb1" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk ASM2 /dev/emcpowerc1
Marking disk "/dev/emcpowerc1" as an ASM disk: [ OK ]
3 Scan the ASM disks on all nodes:
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
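To confirm that the labeled disks are visible on each node, the listdisks subcommand of the same script can be used; this verification is a suggestion beyond the documented steps:
# /etc/init.d/oracleasm listdisks
ASM1
ASM2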