Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.
Trademarks used in this text: Dell, the DELL logo, OpenManage, and PowerEdge are trademarks of Dell Inc.; EMC, PowerPath, and Navisphere
are registered trademarks of EMC Corporation; Intel and Xeon are registered trademarks of Intel Corporation; Red Hat is a registered trademark
of Red Hat, Inc.
Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products.
Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.
Configuring the Public Network
Configuring Database Storage
Configuring Shared Storage Using ASM
Installing Oracle Database 10g
Applying the 10.1.0.5 Patchset
Configuring the Listener
Creating the Seed Database
Setting the Password for the User oracle
Adding a New Node to the Network Layer
Configuring Shared Storage on the New Node
Configuring Shared Storage With ASM
Adding a New Node to the Clusterware Layer
Adding a New Node to the Database Layer
Removing a Node From the Cluster
Reinstalling the Software
Additional Information
Supported Software Versions
Configuring Automatic Reboot for a Hung Operating System
Determining the Private Network Interface
This document provides information about installing, configuring, reinstalling, and using Oracle Database 10g Enterprise Edition with Real Application Clusters (RAC) software on your Dell|Oracle supported configuration.
NOTE: Use this document in conjunction with the Dell™ Deployment CD to install your software. If you install your operating system using only the operating system CDs, the instructions in this document may not be applicable.
The following topics are covered:
•Software and hardware requirements
•Installing and configuring Red Hat® Enterprise Linux
•Verifying cluster hardware and software configurations
•Configuring networking and storage for Oracle RAC 10g
•Deploying Oracle RAC 10g database and patchsets on multiple nodes and creating a seed database
•Configuring and deploying Oracle Database 10g (single node)
•Adding and removing nodes
•Reinstalling the software
•Additional information
•Troubleshooting
•Getting help
•Obtaining and using open source files
For more information on Dell's supported configurations for Oracle Database 10g, see the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g.
Oracle RAC 10g Deployment Service
If you purchased the Oracle RAC 10g Deployment Service, your Dell Professional Services representative
will assist you with the following:
•Verifying the cluster hardware and software configurations
•Configuring networking and storage
•Installing Oracle RAC 10g Release 1
Software and Hardware Requirements
Before you install the Oracle RAC software on your system, follow the instructions in the Deploying Dell-Tested and Validated Configurations for Oracle Database document shipped with your kit to:
•Download the Red Hat CDs from the Red Hat Network website located at rhn.redhat.com.
•Locate your Oracle CD kit, or download the Oracle software from Oracle's website located at www.oracle.com.
•Download the Dell Deployment CD images from the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g, and burn the Dell Deployment CDs using the CD images.
Table 1-1 lists basic software requirements for Dell’s supported configurations for Oracle. Table 1-2 and
Table 1-3 list the hardware requirements. For detailed information on the minimum software versions
for drivers and applications, see "Supported Software Versions."
Table 1-1. Software Requirements
Software Component | Configuration
Red Hat Enterprise Linux AS (Version 4) operating system for Intel® 32-bit technology (x86) | Quarterly Update 3
Oracle 10g Release 1 for 32-bit Linux | Version 10.1.0.5; Enterprise Edition, including the RAC option for clusters; Enterprise Edition for single-node configuration
EMC® PowerPath® (Fibre Channel clusters only) | Version 4.5.1
NOTE: Depending on the number of users, the applications you use, your batch processes, and other factors,
you may need a system that exceeds the minimum hardware requirements in order to achieve the desired
performance.
NOTE: The hardware configuration of all the cluster nodes must be identical.
Table 1-2. Hardware Requirements (Fibre Channel Cluster)
System: Dell PowerEdge™ 1750, 1850, 2600, 2650, 2800, 2850, 4600, 6600, 6650, 6800, and 6850 systems [two to eight nodes using Oracle Cluster File System (OCFS2) or Automatic Storage Management (ASM)]
Processor: 3-GHz Intel Xeon processor
Memory: 1 GB of random-access memory (RAM)
Internal storage: PowerEdge Expandable RAID Controller (PERC) for internal hard drives; two 36-GB hard drives (RAID 1) connected to a PERC
Network: Three Gigabit network interface controller (NIC) ports
Fibre Channel storage and switches: See the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g for information on supported configurations; Fibre Channel switches require 16 ports for seven or eight nodes

Table 1-3. Hardware Requirements (Single Node)
Processor: 3-GHz Intel Xeon processor
Memory: 1 GB of RAM
Internal storage: Two 36-GB hard drives (RAID 1) connected to a PERC
Network: Two NIC ports
Storage: See the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g for information on supported configurations
License Agreements
NOTE: Your Dell configuration includes a 30-day trial license of the Oracle software. If you do not have a license
for this product, contact your Dell sales representative.
Important Documentation
For more information on specific hardware components, see the documentation that came with your system.
For Oracle product information, see the How to Get Started guide in the Oracle CD kit.
Installing and Configuring Red Hat Enterprise Linux
NOTICE: To ensure that the operating system is installed correctly, disconnect all external storage devices
from the system before you install the operating system.
This section describes the installation of the Red Hat Enterprise Linux AS operating system and
the configuration of the operating system for Oracle deployment.
Installing Red Hat Enterprise Linux Using the Deployment CDs
1 Disconnect all external storage devices from the system.
2 Locate your Dell Deployment CDs and original Red Hat Enterprise Linux AS 4 with Update 3 CDs.
3 Insert Dell Deployment CD 1 into the CD drive and reboot the system.
The system boots to the Dell Deployment CD.
4 When prompted for Tested and Validated Configurations, type 4 and press <Enter> to select Oracle 10g R1 EE on Red Hat Enterprise Linux 4 32bit Update 3.
5 When prompted for Solution Deployment Image source, type 1 to select Copy solution by Deployment CD.
6 When prompted, insert Dell Deployment CD 2 and subsequently the Red Hat Installation CDs into the CD drive.
A deployment partition is created and the contents of the CDs are copied to it. When the copy operation is completed, the system automatically ejects the last CD and boots to the deployment partition.
When the installation is completed, the system automatically reboots and the Red Hat Setup Agent appears.
7 In the Red Hat Setup Agent Welcome window, click Next to configure your operating system settings.
8 When prompted, specify a root password.
9 When the Network Setup window appears, click Next. You will configure network settings later as you cannot configure the network bonding in this window.
10 When the Security Level window appears, disable the firewall. You may enable the firewall after completing the Oracle deployment.
11 Log in as root.
Configuring Hugemem Kernel
The Red Hat Enterprise Linux 4 hugemem kernel is required to configure the Oracle relational database
management system (RDBMS) to increase the size of the buffer cache above the default 1.7 GB value.
Using Dell Deployment CD 1, the Red Hat Enterprise Linux 4 hugemem kernel is installed by default.
Change the default boot parameters in the bootloader configuration file /etc/grub.conf to enable this option.
NOTE: Dell recommends that the hugemem kernel be used only on systems with more than 16 GB of RAM.
This kernel has some overhead which may degrade the performance on systems with less memory.
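For reference, a minimal /etc/grub.conf entry for the hugemem kernel might resemble the following sketch; the kernel version, device names, and entry index shown here are examples only and will differ on your system:
default=0
timeout=5
title Red Hat Enterprise Linux AS (2.6.9-34.ELhugemem)
        root (hd0,0)
        kernel /vmlinuz-2.6.9-34.ELhugemem ro root=/dev/sda2
        initrd /initrd-2.6.9-34.ELhugemem.img
Set default= to the index (counting from 0) of the hugemem title entry so that it is selected automatically at boot.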
Configuring Red Hat Enterprise Linux
Log in as root on all the nodes and perform the following procedure:
1 Insert the Dell Deployment CD 2 into the CD drive.
If you are using a CD, type:
/media/cdrom/install.sh
If you are using a DVD, type:
/media/cdrecorder/install.sh
The contents of the CD are copied to the /usr/lib/dell/dell-deploy-cd directory. When the copy procedure is completed, remove the CD from the CD drive by typing:
umount /dev/cdrom
2 Navigate to the directory containing the scripts installed from the Dell Deployment CD by typing:
cd /dell-oracle-deployment/scripts/standard
NOTE: Scripts discover and validate installed component versions and, when required, update components to supported levels.
3 Configure Red Hat Enterprise Linux for the Oracle installation by typing:
./005-oraclesetup.py
4 Set up the environment variables by typing:
source /root/.bash_profile
5 Verify that the processor, RAM, and disk sizes meet the minimum Oracle installation requirements by typing:
./010-hwCheck.py
If the script reports that a parameter failed, update your hardware configuration and run the script again.
6 If you are deploying the cluster using OCFS2, perform the following steps:
a Install the OCFS2 Red Hat Package Managers (RPMs) by typing:
./340-rpms_ocfs.py
b To ensure smooth mounting of OCFS2, type:
./350-ocfs_networkwait.py
7 Connect the external storage.
Updating Your System Packages Using Red Hat Network
Red Hat periodically releases software updates to fix bugs, address security issues, and add new features.
You can download these updates through the Red Hat Network (RHN) service. See the Dell|Oracle
Tested and Validated Configurations website at www.dell.com/10g for the latest supported
configurations before you use RHN to update your system software to the latest revisions.
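For example, assuming the system is already registered with RHN, you can typically download and apply the available updates with the up2date utility:
up2date -u
Review the package list that up2date proposes against the supported versions listed at www.dell.com/10g before applying it.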
NOTE: If you are deploying Oracle Database 10g on a single node, skip the following sections and see "Configuring
and Deploying Oracle Database 10g (Single Node)."
Verifying Cluster Hardware and Software Configurations
Before you begin the cluster setup, verify the hardware installation, communication interconnections,
and node software configuration for the entire cluster. The following sections provide setup information
for hardware and software Fibre Channel cluster configurations.
Fibre Channel Cluster Setup
Your Dell Professional Services representative completed the setup of your Fibre Channel cluster. Verify
the hardware connections, and the hardware and software configurations as described in this section.
Figure 1-1 shows an overview of the connections required for the cluster, and Table 1-4 summarizes
the cluster connections.
Figure 1-1. Hardware Connections for a Fibre Channel Cluster
[The figure shows each PowerEdge system (Oracle database) connected to the LAN/WAN over Cat 5e cables (integrated and copper gigabit NICs), to Gb Ethernet switches (private network), and through HBA 0 and HBA 1 over fiber optic cables to Dell|EMC Fibre Channel switch 0 and switch 1 (SAN), which connect through additional fiber optic cables to SP-A and SP-B on the Dell|EMC Fibre Channel storage systems.]
NOTE: The arrangement of storage processors, HBAs, and Fibre Channel switches shown
above is used for illustrative purposes and may vary for different network configurations.
Table 1-4. Fibre Channel Hardware Interconnections
Each PowerEdge system node:
• One enhanced category 5 (Cat 5e) cable from the public NIC to the local area network (LAN)
• One Cat 5e cable from the private Gigabit NIC to the Gigabit Ethernet switch
• One Cat 5e cable from a redundant private Gigabit NIC to a redundant Gigabit Ethernet switch
• One fiber optic cable from HBA 0 to Fibre Channel switch 0
• One fiber optic cable from HBA 1 to switch 1
Each Dell|EMC Fibre Channel storage system:
• Two Cat 5e cables connected to the LAN
• One to four optical connections to each Fibre Channel switch; for example, for a four-port configuration:
  – One optical cable from SPA port 0 to Fibre Channel switch 0
  – One optical cable from SPA port 1 to Fibre Channel switch 1
  – One optical cable from SPB port 0 to Fibre Channel switch 1
  – One optical cable from SPB port 1 to Fibre Channel switch 0
Each Dell|EMC Fibre Channel switch:
• One to four optical connections to the Dell|EMC Fibre Channel storage system
• One optical connection to each PowerEdge system's HBA
Each Gigabit Ethernet switch:
• One Cat 5e connection to the private Gigabit NIC on each PowerEdge system
• One Cat 5e connection to the remaining Gigabit Ethernet switch
Verify that the following tasks have been completed for your cluster:
•All hardware is installed in the rack.
•All hardware interconnections are set up as shown in Figure 1-1 and listed in Table 1-4.
•All logical unit numbers (LUNs), redundant array of independent disks (RAID) groups, and storage
groups are created on the Dell|EMC Fibre Channel storage system.
•Storage groups are assigned to the nodes in the cluster.
NOTICE: Before you perform the procedures in the following sections, ensure that the system hardware and
cable connections are installed correctly.
Fibre Channel Hardware and Software Configurations
•Each node must include the following minimum hardware peripheral components:
–One or two hard drives (36-GB minimum) in the internal hard-drive bay
–Three Gigabit NIC ports
–Two Fibre Channel HBAs
•Each node must have the following software installed:
–Red Hat Enterprise Linux software (see Table 1-1)
–Fibre Channel HBA driver
–OCFS2 module for the kernel and the configuration tools for OCFS2
NOTE: OCFS2 provides modules for two kinds of kernel, namely hugemem and Symmetric MultiProcessing (SMP).
Choose the OCFS2 module type according to your kernel.
•The Fibre Channel storage must be configured with the following:
–A minimum of three LUNs created and assigned to the cluster
–A minimum LUN size of 5 GB
Configuring Networking and Storage for Oracle RAC 10g
This section provides information on setting up a Fibre Channel cluster running a seed database
and includes the following procedures:
•Configuring the Public and Private Networks
•Securing Your System
•Verifying the Storage Configuration
•Configuring Shared Storage Using OCFS2
•Configuring Shared Storage With ASM
Configuring an Oracle RAC 10g database is complex and requires an ordered series of procedures.
To configure networking and storage in a minimal amount of time, perform the following procedures
in sequence.
Configuring the Public and Private Networks
This section presents steps to configure the public and private cluster networks.
NOTE: Each node requires a unique public and private Internet Protocol (IP) address and an additional public
IP address to serve as the virtual IP address for the client connections and connection failover. The virtual IP
address must belong to the same subnet as the public IP. All public IP addresses, including the virtual IP address,
must be registered with DNS.
Depending on the number of NIC ports available, configure the network interfaces as shown in Table 1-5.
Table 1-5. NIC Port Assignments
NIC Port | Three Ports Available | Four Ports Available
1 | Public IP and virtual IP | Public IP
2 | Private IP (bonded) | Private IP (bonded)
3 | Private IP (bonded) | Private IP (bonded)
4 | NA | Virtual IP
NOTE: The Oracle installer requires that the public interface name and the bond name for the private interface be
the same on all the cluster nodes. If the public interfaces are different, a workaround is to use bonding to abstract
the network interfaces and use this for Oracle installation.
Configuring the Public Network
If you have not already configured your public network, configure it by performing the following
procedure on each node:
1 Log in as root.
2 Edit the network device file /etc/sysconfig/network-scripts/ifcfg-eth#, where # is the number of the network device, and configure the file as follows:
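The interface settings themselves are not reproduced in this copy of the document; a minimal static configuration for the public interface typically resembles the following sketch, where every value shown is a placeholder to be replaced with your own addresses:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=155.16.10.101
NETMASK=255.255.255.0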
3 Edit the /etc/sysconfig/network file and, if necessary, replace localhost.localdomain with the fully qualified public node name.
For example, the line for the first node would be as follows:
HOSTNAME=node1.domain.com
4 Type:
service network restart
5 Verify that the IP addresses are set correctly by typing:
ifconfig
6 Check your network configuration by pinging each public IP address from a client on the LAN outside the cluster.
7 Connect to each node to verify that the public network is functioning and verify that the secure shell (ssh) is working by typing:
ssh <public IP>
Configuring the Private Network Using Bonding
Before you deploy the cluster, configure the private cluster network to allow the nodes to communicate
with each other. This involves configuring network bonding and assigning a private IP address and host
name to each node in the cluster. To set up network bonding for Broadcom or Intel NICs and to
configure the private network, perform the following procedure on each node:
1 Log in as root.
2 Add the following line to the /etc/modprobe.conf file:
alias bond0 bonding
3 For high availability, edit the /etc/modprobe.conf file and set the option for link monitoring.
The default value for miimon is 0, which disables link monitoring. Change the value to 100 milliseconds initially, and adjust it as needed to improve performance, as shown in the following example. Type:
options bonding miimon=100 mode=1
4 In the /etc/sysconfig/network-scripts/ directory, create or edit the ifcfg-bond0 configuration file.
For example, using sample network parameters, the file would appear as follows:
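The sample parameters themselves are not reproduced in this copy of the document; a minimal ifcfg-bond0 sketch, with placeholder private-subnet values that you must replace with your own, might look like this:
DEVICE=bond0
IPADDR=192.168.0.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
Each physical NIC that participates in the bond also needs its own ifcfg-eth# file pointed at the bond, for example (again, illustrative only):
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=yes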
5 After creating the bond member interface files, restart the network service by typing:
service network restart
and ignore any warnings.
6 Verify that the private interface is functioning by typing:
ifconfig
The private IP address for the node should be assigned to the private interface bond0.
7 When the private IP addresses are set up on every node, ping each IP address from one node to ensure that the private network is functioning.
8 Connect to each node and verify that the private network and ssh are functioning correctly by typing:
ssh <private IP>
9 On each node, modify the /etc/hosts file by adding the following lines (a filled-in example follows the NOTE below):
127.0.0.1 localhost.localdomain localhost
<private IP node1> <private hostname node1>
<private IP node2> <private hostname node2>
<public IP node1> <public hostname node1>
<public IP node2> <public hostname node2>
<virtual IP node1> <virtual hostname node1>
<virtual IP node2> <virtual hostname node2>
NOTE: The examples in this and the following step are for a two-node configuration; add lines for each additional cluster node.
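For illustration, a two-node /etc/hosts might contain entries like the following; all of the addresses and host names here are placeholders:
127.0.0.1 localhost.localdomain localhost
192.168.0.1 node1-priv
192.168.0.2 node2-priv
155.16.10.101 node1
155.16.10.102 node2
155.16.10.201 node1-vip
155.16.10.202 node2-vip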
10 On each node, create or modify the /etc/hosts.equiv file by listing all of your public IP addresses or host names. For example, if you have one public host name, one virtual IP address, and one virtual host name for each node, add the following lines:
<virtual IP or hostname node1> oracle
<virtual IP or hostname node2> oracle
11 Log in as oracle, and connect to each node to verify that remote shell (rsh) is working by typing:
rsh <public hostname nodex>
where x is the node number.
Securing Your System
To prevent unauthorized users from accessing your system, Dell recommends that you disable rsh after
you install the Oracle software. Disable rsh by typing:
chkconfig rsh off
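On Red Hat Enterprise Linux 4, rsh normally runs as an xinetd service, so (assuming that layout) you can apply and confirm the change with, for example:
chkconfig rsh off
service xinetd reload
chkconfig --list rsh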
Verifying the Storage Configuration
While configuring the clusters, create partitions on your Fibre Channel storage. In order to create the
partitions, all cluster nodes must be able to detect the external storage devices. To verify that each node
can detect each storage LUN or logical disk, perform the following steps:
1
For Dell|EMC Fibre Channel storage, verify that the EMC Navisphere® agent and the correct version
of PowerPath (see Table 1-6) are installed on each node, and that each node is assigned to the correct
storage group in your Navisphere agent software. See the documentation that came with your
Dell|EMC Fibre Channel storage for instructions.
NOTE: The Dell Professional Services representative who installed your cluster performed this step. If you
reinstall the software on a node, you must complete this step.
2
Visually verify that the storage devices and cluster nodes are connected correctly to the Fibre Channel
switch (see Figure 1-1 and Table 1-4).
3 Verify that you are logged in as root.
4 On each node, type:
more /proc/partitions
A list of the LUNs or logical disks that are detected by the node is displayed, as well as the partitions that have been created on those external devices. PowerPath pseudo devices appear in the list, such as /dev/emcpowera, /dev/emcpowerb, and /dev/emcpowerc.
NOTE: The listed devices vary depending on how your storage is configured.
5 In the /proc/partitions file, ensure that:
•All PowerPath pseudo devices appear in the file with similar device paths. For example, /dev/emcpowera, /dev/emcpowerb, and /dev/emcpowerc.
•The Fibre Channel LUNs appear as small computer system interface (SCSI) devices, and each cluster node is configured with the same number of LUNs.
For example, if the node is configured with a SCSI drive or RAID container attached to a Fibre Channel storage device with three logical disks, sda identifies the node's RAID container or internal drive, and emcpowera, emcpowerb, and emcpowerc identify the LUNs (or PowerPath pseudo devices).
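As a rough illustration of what to expect, the output on a node with one internal RAID container and three PowerPath pseudo devices might look something like the excerpt below; the major numbers, block counts, and device names are illustrative only:
major minor  #blocks  name
   8     0  35843686  sda
   8     1    104391  sda1
 120     0  52428800  emcpowera
 120    16  52428800  emcpowerb
 120    32  52428800  emcpowerc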
If the external storage devices do not appear in the /proc/partitions file:
1 On all the nodes, stop the PowerPath service by typing:
service naviagent stop
service PowerPath stop
2 On all the nodes, reload the HBA driver to synchronize the kernel's partition tables by typing:
•For QLogic HBAs:
rmmod qla2300
modprobe qla2300
•For Emulex HBAs:
rmmod lpfc
modprobe lpfc
3 On all the nodes, restart the PowerPath service by typing:
service PowerPath start
service naviagent start
4 Confirm that all the nodes detect the external storage devices by typing:
more /proc/partitions
Configuring Shared Storage Using OCFS2
Shared storage can be configured using either OCFS2 or ASM. This section provides procedures
for configuring shared storage using OCFS2.
1 Log in as root on the first node.
2 Perform the following steps:
a Start the X Window System by typing:
startx
b Generate the OCFS2 configuration file (/etc/ocfs2/cluster.conf) with a default cluster name of ocfs2 by typing the following in a terminal:
ocfs2console
c From the menu, click Cluster→ Configure Nodes.
If the cluster is offline, the console will start it. A message window appears displaying that information. Close the message window.
The Node Configuration window appears.
d To add nodes to the cluster, click Add. Enter the node name (same as the host name) and the private IP. Retain the default value of the port number. After entering all the details mentioned, click OK. Repeat this step to add all the nodes to the cluster.
e When all the nodes are added, click Apply and then click Close in the Node Configuration window.
f From the menu, click Cluster→ Propagate Configuration.
The Propagate Cluster Configuration window appears. Wait until the message Finished appears on the window and then click Close.
g Select File→ Quit.
3 On all the nodes, enable the cluster stack on startup by typing:
/etc/init.d/o2cb enable
4 Change the O2CB_HEARTBEAT_THRESHOLD value on all the nodes using the following steps:
a Stop the O2CB service on all the nodes by typing:
/etc/init.d/o2cb stop
b Edit the O2CB_HEARTBEAT_THRESHOLD value in /etc/sysconfig/o2cb to 61 on all the nodes.
c Start the O2CB service on all the nodes by typing:
/etc/init.d/o2cb start
5 On the first node, for a Fibre Channel cluster, create one partition on each of the other two external storage devices with fdisk:
a Create a primary partition for the entire device by typing:
fdisk /dev/emcpowerx
Type h for help within the fdisk utility.
b Verify that the new partition exists by typing:
cat /proc/partitions
If you do not see the new partition, type:
sfdisk -R /dev/<device name>
NOTE: The following steps use the sample values /u01 and /u02 for mount points and u01 and u02 as labels.
6 On any one node, format the external storage devices with 4 K block size, 128 K cluster size, and 4 node slots (node slots refer to the number of cluster nodes) using the command line utility mkfs.ocfs2, as shown in the sketch that follows.
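The exact command lines are not reproduced in this copy of the document; a typical mkfs.ocfs2 invocation matching the parameters above would look like the following, where the PowerPath partition names and labels are placeholders for your own devices:
mkfs.ocfs2 -b 4K -C 128K -N 4 -L u01 /dev/emcpowera1
mkfs.ocfs2 -b 4K -C 128K -N 4 -L u02 /dev/emcpowerb1
After formatting, the volumes are mounted on every node at the mount points referenced in the NOTE above (for example, mount -t ocfs2 -o datavolume,nointr /dev/emcpowera1 /u01, typically made persistent through /etc/fstab); see the Dell|Oracle Tested and Validated Configurations documentation for the exact options used in your release.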
Configuring Shared Storage Using ASMLib
The shared database partitions can either be configured as raw devices or can be configured using the ASMLib software.
1 To configure your cluster using ASM, perform the following steps on all the nodes:
a Log in as root.
b Configure the ASM kernel module by typing:
/etc/init.d/oracleasm configure
The following message appears on the screen:
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM
library driver. The following questions will determine whether the
driver is loaded on boot and what permissions it will have. The
current values will be shown in brackets ('[]'). Hitting <ENTER>
without typing an answer will keep that current value. Ctrl-C will
abort.
A message appears prompting you to enter the default user owning the driver interface. Type oracle as mentioned below:
Default user to own the driver interface []: oracle
A message appears prompting you to enter the default group owning the driver interface. Type dba as mentioned below:
Default group to own the driver interface []: dba
A message appears prompting you to load the oracleasm driver on boot. To load the driver, type y
as mentioned below:
Start Oracle ASM library driver on boot (y/n) [n]: y
A message appears prompting you to fix permissions of Oracle ASM disks on boot. Type y as
mentioned below:
Fix permissions of Oracle ASM disks on boot (y/n) [y]:y
The following messages appear on the screen:
Writing Oracle ASM library driver configuration: [ OK ]
Creating /dev/oracleasm mount point: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
2 Label the partitions created earlier as ASM disks on any one node:
# /etc/init.d/oracleasm createdisk ASM1 /dev/emcpowerb1
Marking disk "/dev/emcpowerb1" as an ASM disk: [ OK ]
# /etc/init.d/oracleasm createdisk ASM2 /dev/emcpowerc1
Marking disk "/dev/emcpowerc1" as an ASM disk: [ OK ]
3 Scan the ASM disks on all the other nodes:
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
4 On all the nodes, verify that all the ASM disks are visible by typing:
# /etc/init.d/oracleasm listdisks
A list of all the configured ASM disks appears.
5 To add an additional ASM disk (for example, ASM3), edit the
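The rest of this step is not reproduced in this copy of the document; in practice, bringing an additional ASM disk online generally follows the same pattern as steps 2 and 3, for example (the device name below is a placeholder):
# /etc/init.d/oracleasm createdisk ASM3 /dev/emcpowerd1
Run the createdisk command on any one node, and then run /etc/init.d/oracleasm scandisks on all the other nodes so that the new disk is visible everywhere.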
Installing Oracle RAC 10g
This section describes the steps required to install Oracle RAC 10g version 10.1.0.3, which involves
installing CRS and installing the Oracle Database 10g software. Dell recommends that you create a seed
database to verify that the cluster works correctly before you deploy it in a production environment.
Installing CRS
1 Log in as root on the first node.
2 Start the X Window System by typing:
startx
3 Open a terminal window and type:
xhost +
4 Mount the Oracle Cluster Ready Services CD.
5 Type:
su - oracle
6 Start the Oracle Universal Installer by typing:
unset ORACLE_HOME
If you are using a CD, type:
/media/cdrom/runInstaller
If you are using a DVD, type:
/media/cdrecorder/runInstaller
7 In the Welcome window, click Next.
8 In the Specify File Locations window, verify that the Oracle home path is /opt/oracle/product/10.1.0/crs_1 and click Next.
9 In the Language Selection window, select a language and click Next.
10 In the Cluster Configuration window, enter a global cluster name or accept the default name crs, enter the public and private node names for each node, and click Next.
The cluster name must be unique throughout the enterprise.
11 In the Specify Network Interface Usage window, click each interface type and select public, private, or Do not use, and then click Next.
NOTE: The public and private NIC assignments that you select in this step must be identical and available on all the nodes.
12 In the Oracle Cluster Registry window, enter the complete path of the OCR disk location (/dev/raw/ocr.dbf) and click Next.
NOTE: If you have used a shared OCFS2 partition for the OCR and the Voting Disk, enter the appropriate path.
13 In the Voting Disk window, enter a complete path for the partition to use for storing the Voting Disk (/dev/raw/votingdisk) and click Next.
14 In the Summary window, click Install.
When the installation is completed, a message appears indicating that you must run the root.sh script on all the nodes. The root.sh script automatically configures the cluster.
15 When prompted, open a new terminal window.
16 From the same terminal window in step 15, as the user root, run the root.sh script on each node, beginning with the local node.
Wait for root.sh to finish running on each node before you run it on the next node.
17 In the Setup Privileges window, click OK.
18 In the End of Installation window, click Exit and confirm by clicking Yes.
Installing the Oracle Database 10g Software
1 Log in as root on the first node.
2 Mount the Oracle Database 10g CD 1.
3 Start the Oracle Universal Installer as the user oracle:
If you are using a CD, type:
/media/cdrom/runInstaller
If you are using a DVD, type:
/media/cdrecorder/runInstaller
4 In the Welcome window, click Next.
5 In the Specify File Locations window, verify that the complete Oracle home path is /opt/oracle/product/10.1.0/db_1 and click Next.
NOTE: The Oracle home in this step must be different from the Oracle home name that you identified during the CRS installation. You cannot install the Oracle 10g Enterprise Edition with RAC into the same home that you used for CRS.
6 In the Specify Hardware Cluster Installation Mode window, click Select All and click Next.
7 In the Select Installation Type window, select Enterprise Edition and click Next.
The status of various prerequisite checks being performed is displayed. When the checks are completed, you may receive a warning for a version mismatch of the openmotif package. Check the Warning option and click Next.
8 In the Select Database Configuration window, select Do not create a starter database and click Next.
9 In the Summary window, click Install.
10 When prompted, open a new terminal window.
11 Run root.sh on the first node.
a Press <Enter> to accept the default value for the local bin directory.
The Virtual Internet Protocol Configuration Assistant (VIPCA) starts.
b On the first VIPCA window, click Next.
c In the List of Available Network Interfaces window, select your public NIC or, if you have four NIC ports, the port reserved for the virtual IP address (see "Configuring the Public and Private Networks"), and click Next.
NOTE: The public and private NIC assignments that you select in this step must be identical and available on all nodes.
d In the Virtual IPs for Cluster Nodes window, enter an unused public virtual IP address and subnet mask for each node displayed and click Next.
The virtual IP address must be the same as you entered in the /etc/hosts.equiv file, and the subnet mask must be the same as the public mask.
e Click Finish in the summary window.
A progress window appears.
f When the configuration is completed, click OK and click Exit to exit the VIPCA.
g Run root.sh on each of the other nodes in your cluster.
Wait for root.sh to finish running on each node before you run it on the next node.
12 Click OK in the Setup Privileges window.
13 Click Exit in the End of Installation window and confirm by clicking Yes.
Applying the 10.1.0.5 Patchset
1 Download the 10.1.0.5 patchset (p4505133_10105_LINUX.ZIP) from the Oracle MetaLink website.
2 Copy the patchset to the folder /oracle_cds/10.1.0.5 on the first node.
3 Unzip the patchset by typing:
unzip p4505133_10105_LINUX.ZIP
4 Change the ownership of the 10.1.0.5 directory by typing:
chown -R oracle.dba /oracle_cds/10.1.0.5
5 Run the installer from the first node only.
It patches all the nodes that are a part of the RAC cluster. The 10.1.0.5 patchset patches the CRS as well as the database home.
NOTE: The 10.1.0.5 patchset supports rolling upgrades for the CRS of all the member nodes.
Patching CRS to 10.1.0.5
1 Log in as oracle on the first node.
2 Start the Oracle installer by typing:
/oracle_cds/10.1.0.5/Disk1/runInstaller
3 In the Welcome window, click Next.
4 In the Specify File Locations window, ensure that the source path points to the products.xml file of the 10.1.0.5 staging area.
5 In the Destination section, select the CRS home name from the drop-down menu. Ensure that the path points to the CRS home and click Next.
6 In the Selected Nodes window, ensure that all the member nodes of the 10.1.0.3 installation are displayed and click Next.
7 In the Summary window, click Install.
The installer will prompt you to stop the CRS services and run the root10105.sh script.
8 Log in as root on each node and run the root10105.sh script from the CRS home location.
9 Exit the installer after you run this script from all the nodes.
10 On all the nodes, perform the following steps:
a Verify the CRS installation by typing the following command from the /opt/oracle/product/10.1.0/crs_1/bin directory:
olsnodes -n -v
A list of the public node names of all nodes in the cluster appears.
b List all the services that are running by typing:
crs_stat
Patching the Database to 10.1.0.5 Patchset
1 Log in as oracle on the first node.
2 Stop the Oracle Notification Services (ONS) before upgrading the patchset by typing:
onsctl stop
3 Start the Oracle installer by typing:
/oracle_cds/10.1.0.5/Disk1/runInstaller
4 In the Welcome window, click Next.
5 In the Specify File Locations window, ensure that the source path points to the products.xml file of the 10.1.0.5 staging area.
6 In the Destination section, select the database home name from the drop-down menu. Make sure that the path points to the database home of the 10.1.0.3 installation and click Next.
7 In the Selected Nodes window, ensure that all the member nodes of the 10.1.0.3 installation are displayed and click Next.
8 In the Summary window, click Install.
The installer prompts you to run the root.sh script on all the nodes after the process is completed.
9 Log in as root on each node and run the root.sh script from the database home location.
10 Exit the installer after running this script from all the nodes.
Configuring the Listener
This section describes the steps to configure the listener, which is required for remote client connection
to a database.
On any one node, perform the following procedure:
1 Log in as root.
2 Start the X Window System by typing:
startx
3 Open a terminal window and type:
xhost +
4 As the user oracle, run:
source /home/oracle/.bash_profile
5 Start the Net Configuration Assistant by typing:
netca
6 Select Cluster Configuration and click Next.
7 On the TOPSNodes window, click Select All Nodes and click Next.
8 On the Welcome window, select Listener Configuration and click Next.
9 On the Listener Configuration, Listener window, select Add and click Next.
10 On the Listener Configuration, Listener Name window, type LISTENER in the Listener Name field and click Next.
11 On the Listener Configuration, Select Protocols window, select TCP and click Next.
12 On the Listener Configuration, TCP/IP Protocol window, select Use the standard port number of 1521 and click Next.
13 On the Listener Configuration, More Listeners? window, select No and click Next.
14 On the Listener Configuration Done window, click Next.
15 Click Finish.
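Once the assistant finishes, you can typically confirm that the listener is up by running the standard Oracle listener control utility as the oracle user, for example:
lsnrctl status
The output should show the LISTENER configured above listening on TCP port 1521.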
Creating the Seed Database
This section contains procedures for creating the seed database using either OCFS2 or ASM and for
verifying the seed database.
Creating the Seed Database Using OCFS2
1 On the first node, as the user oracle, type dbca -datafileDestination /u01 to start the Database Configuration Assistant (DBCA).
2 In the Welcome window, select Oracle Real Application Cluster Database and click Next.
3 In the Operations window, click Create a Database and click Next.
4 In the Node Selection window, click Select All and click Next.
5 In the Database Templates window, click Custom Database and click Next.
6 In the Database Identification window, enter a Global Database Name such as racdb and click Next.
7 In the Management Options window, click Next.
8 In the Database Credentials window, click Use the Same Password for All Accounts, complete password selections and entries, and click Next.
9 In the Storage Options window, select Cluster File System and click Next.
10 In the Database File Locations window, click Next.
11 In the Recovery Configuration window, click Specify flash recovery area, click Browse and select /u02, specify the flash recovery size, and then click Next.
12 In the Database Content window, click Next.
13 In the Database Services window, click Next.
14 In the Initialization Parameters window, if your cluster has more than four nodes, change the Shared Pool value to 500 MB, and click Next.
15 In the Database Storage window, click Next.
16 In the Creation Options window, check Create Database and click Finish.
17 In the Summary window, click OK to create the database.
NOTE: The creation of the seed database may take more than an hour.
NOTE: If you receive an Enterprise Manager Configuration Error during the seed database creation, click OK
to ignore the error.
When the database creation is completed, the Password Management window appears.
18 Click Exit.
A message appears indicating that the cluster database is being started on all nodes.
19 On each node, perform the following steps:
a Determine which database instance exists on that node by typing:
srvctl status database -d <database name>
b Add the ORACLE_SID environment variable entry in the
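The profile entry referred to in step 19b typically takes the following form; the instance name shown assumes the global database name racdb used earlier and instance number 1 on the first node, so substitute the instance reported by srvctl for each node:
echo "export ORACLE_SID=racdb1" >> /home/oracle/.bash_profile
source /home/oracle/.bash_profile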