Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.
Trademarks used in this text: Dell, the DELL logo, and PowerEdge are trademarks of Dell Inc.; EMC, PowerPath, and Navisphere are registered
trademarks of EMC Corporation; Intel and Xeon are registered trademarks of Intel Corporation; Red Hat is a registered trademark of Red Hat, Inc.
Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products.
Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.
Configuring Shared Storage for Oracle Clusterware and the
Database Using ASM
Installing Oracle RAC 10g
Before You Begin
Installing Oracle Clusterware
Installing the Oracle Database 10g Software
RAC Post Deployment Fixes and Patches
Configuring the Listener
Creating the Seed Database Using OCFS2
Creating the Seed Database Using ASM
Configuring the Public Network
Configuring Database Storage
Configuring Database Storage Using the Oracle ASM Library Driver
Installing Oracle Database 10g
Installing the Oracle Database 10g 10.2.0.2 Patchset
Configuring the Listener
Creating the Seed Database
Adding a New Node to the Network Layer
Configuring Shared Storage on the New Node
Adding a New Node to the Oracle Clusterware Layer
Adding a New Node to the Database Layer
Reconfiguring the Listener
Adding a New Node to the Database Instance Layer
Removing a Node From the Cluster
Reinstalling the Software
Additional Information
Supported Software Versions
Determining the Private Network Interface
This document provides information about installing, configuring, reinstalling, and using
Oracle Database 10g Enterprise Edition with the Oracle Real Application Clusters (RAC) software on
your Dell|Oracle supported configuration. Use this document in conjunction with the Dell Deployment,
Red Hat Enterprise Linux, and Oracle RAC 10g software CDs to install your software.
NOTE: If you install your operating system using only the operating system CDs, the steps in this document may not
be applicable.
This document covers the following topics:
•Software and hardware requirements
•Installing and configuring Red Hat® Enterprise Linux
•Verifying cluster hardware and software configurations
•Configuring storage and networking for Oracle RAC
•Installing Oracle RAC
•Configuring and installing Oracle Database 10g (single node)
•Adding and removing nodes
•Reinstalling the software
•Additional information
•Troubleshooting
•Getting help
•Obtaining and using open source files
For more information on Dell supported configurations for Oracle, see the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g.
Oracle RAC 10g Deployment Service
If you purchased the Oracle RAC 10g Deployment Service, your Dell Professional Services representative
will assist you with the following:
•Verifying cluster hardware and software configurations
•Configuring storage and networking
•Installing Oracle RAC 10g Release 2
Software and Hardware Requirements
Before you install the Oracle RAC software on your system:
•Download the Red Hat CD images from the Red Hat website at rhn.redhat.com.
•Locate your Oracle CD kit.
•Download the Dell Deployment CD images that are appropriate for the solution being installed from the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g. Burn all the downloaded CD images to CDs.
Table 1-1 lists basic software requirements for Dell supported configurations for Oracle. Table 1-2 through Table 1-3 list the hardware requirements. For more information on the minimum software versions for drivers and applications, see "Supported Software Versions."
Table 1-1. Software Requirements
Software Component                              Configuration
Red Hat Enterprise Linux AS EM64T (Version 4)   Update 3
Oracle Database 10g                             Version 10.2
                                                • Enterprise Edition, including the RAC option for clusters
                                                • Enterprise Edition for single-node configuration
EMC® PowerPath®                                 Version 4.5.1
NOTE: Depending on the number of users, the applications you use, your batch processes, and other factors, you
may need a system that exceeds the minimum hardware requirements in order to achieve desired performance.
NOTE: The hardware configuration of all the nodes must be identical.
See the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g for information on supported configurations.
License Agreements
NOTE: Your Dell configuration includes a 30-day trial license of Oracle software. If you do not have a license for
this product, contact your Dell sales representative.
Important Documentation
For more information on specific hardware components, see the documentation included with
your system.
For Oracle product information, see the How to Get Started guide in the Oracle CD kit.
Before You Begin
Before you install the Red Hat Enterprise Linux operating system, download the Red Hat Enterprise
Linux Quarterly Update ISO images from the Red Hat Network website at rhn.redhat.com and burn
these images to CDs.
To download the ISO images, perform the following steps:
1 Navigate to the Red Hat Network website at rhn.redhat.com.
2 Click Channels.
3 In the left menu, click Easy ISOs.
4 In the Easy ISOs page left menu, click All.
The ISO images for all Red Hat products appear.
5 In the Channel Name menu, click the appropriate ISO image for your Red Hat Enterprise Linux software.
6 Download the ISOs for your Red Hat Enterprise Linux software as listed in your Solution Deliverable List (SDL) from the Dell|Oracle Tested and Validated Configurations website at www.dell.com/10g.
7 Burn the ISO images to CDs.
Installing and Configuring Red Hat Enterprise Linux
NOTICE: To ensure that the operating system is installed correctly, disconnect all external storage devices from
the system before you install the operating system.
This section describes the installation of the Red Hat Enterprise Linux AS operating system and the
configuration of the operating system for Oracle Database deployment.
Installing Red Hat Enterprise Linux Using the Deployment CDs
1 Disconnect all external storage devices from the system.
2 Locate your Dell Deployment CD and the Red Hat Enterprise Linux AS EM64T CDs.
3 Insert the Dell Deployment CD 1 into the CD drive and reboot the system.
The system boots to the Dell Deployment CD.
4 When the deployment menu appears, type 1 to select Oracle 10g R2 EE on Red Hat Enterprise Linux 4 U3 (x86_64).
5 When another menu asking deployment image source appears, type 1 to select Copy solution by Deployment CD.
NOTE: This procedure may take several minutes to complete.
6 When prompted, insert Dell Deployment CD 2 and each Red Hat installation CD into the CD drive.
A deployment partition is created and the contents of the CDs are copied to it. When the copy operation is completed, the system automatically ejects the last CD and boots to the deployment partition.
When the installation is completed, the system automatically reboots and the Red Hat Setup Agent appears.
7 In the Red Hat Setup Agent Welcome window, click Next to configure your operating system settings.
8 When prompted, specify a root password.
Do not create any operating system users at this time.
9 When the Network Setup window appears, click Next. You will configure network settings later.
10 When the Security Level window appears, disable the firewall. You may enable the firewall after completing the Oracle deployment.
11 Log in as root.
Configuring Red Hat Enterprise Linux
1 Log in as root.
2 Insert the Dell Deployment CD 2 into the CD drive and type the following commands:
mount /dev/cdrom
/media/cdrom/install.sh
The contents of the CD are copied to the /usr/lib/dell/dell-deploy-cd directory. When the copy procedure is completed, type umount /dev/cdrom and remove the CD from the CD drive.
3 Type cd /dell-oracle-deployment/scripts/standard to navigate to the directory containing the scripts installed from the Dell Deployment CD.
NOTE: Scripts discover and validate installed component versions and, when required, update components to supported levels.
4 Type ./005-oraclesetup.py to configure the Red Hat Enterprise Linux for Oracle installation.
5 Type source /root/.bash_profile to start the environment variables.
6 Type ./010-hwCheck.py to verify that the CPU, RAM, and disk sizes meet the minimum Oracle Database installation requirements.
If the script reports that a parameter failed, update your hardware configuration and run the script again (see Table 1-2 and Table 1-3 for updating your hardware configuration).
7 Connect the external storage device.
8 Reload the HBA driver(s) using the rmmod and modprobe commands. For instance, for Emulex HBAs, reload the lpfc driver by issuing rmmod lpfc and modprobe lpfc. For QLA HBAs, identify the drivers that are loaded (lsmod | grep qla), and reload these drivers.
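As a sketch of step 8, assuming an Emulex lpfc driver and a QLogic qla2xxx/qla2300 driver pair (the exact QLogic module names depend on your HBA model, so check the lsmod output first):
# Emulex HBAs: unload and reload the lpfc driver
rmmod lpfc
modprobe lpfc
# QLogic HBAs: list the loaded qla modules, then reload them
lsmod | grep qla
rmmod qla2300 qla2xxx
modprobe qla2300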
Updating Your System Packages Using Red Hat Network
Red Hat periodically releases software updates to fix bugs, address security issues, and add new features.
You can download these updates through the Red Hat Network (RHN) service. See the Dell|Oracle
Tested and Validated Configurations website at www.dell.com/10g for the latest supported
configurations before you use RHN to update your system software to the latest revisions.
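On Red Hat Enterprise Linux 4, the RHN client is up2date; a minimal update pass, assuming the system is already registered with RHN, looks like this:
# Refresh headers and apply all available package updates from RHN
up2date -u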
NOTE: If you are deploying Oracle Database on a single node, skip the following sections and see "Configuring and
Deploying Oracle Database 10g (Single Node)."
Verifying Cluster Hardware and Software Configurations
Before you begin cluster setup, verify the hardware installation, communication interconnections, and
node software configuration for the entire cluster. The following sections provide setup information for
hardware and software Fibre Channel cluster configurations.
Fibre Channel Cluster Setup
Your Dell Professional Services representative completed the setup of your Fibre Channel cluster. Verify
the hardware connections and the hardware and software configurations as described in this section.
Figure 1-1 and Figure 1-3 show an overview of the connections required for the cluster, and Table 1-4
summarizes the cluster connections.
Figure 1-1. Hardware Connections for a Fibre Channel Cluster
Table 1-4. Fibre Channel Hardware Interconnections
Each PowerEdge system node:
•One Category 5 enhanced (CAT 5e) or CAT 6 cable from public NIC to local area network (LAN)
•One CAT 5e or CAT 6 cable from private Gigabit NIC to Gigabit Ethernet switch
•One CAT 5e or CAT 6 cable from a redundant private Gigabit NIC to a redundant Gigabit Ethernet switch
•One fiber optic cable from optical HBA 0 to Fibre Channel switch 0
•One fiber optic cable from HBA 1 to Fibre Channel switch 1
Each Dell|EMC Fibre Channel storage system:
•Two CAT 5e or CAT 6 cables connected to the LAN
•One to four fiber optic cable connections to each Fibre Channel switch; for example, for a four-port configuration:
–One fiber optic cable from SPA port 0 to Fibre Channel switch 0
–One fiber optic cable from SPA port 1 to Fibre Channel switch 1
–One fiber optic cable from SPB port 0 to Fibre Channel switch 1
–One fiber optic cable from SPB port 1 to Fibre Channel switch 0
Each Dell|EMC Fibre Channel switch:
•One to four fiber optic cable connections to the Dell|EMC Fibre Channel storage system
•One fiber optic cable connection to each PowerEdge system’s HBA
Each Gigabit Ethernet switch:
•One CAT 5e or CAT 6 connection to the private Gigabit NIC on each PowerEdge system
•One CAT 5e or CAT 6 connection to the remaining Gigabit Ethernet switch
Verify that the following tasks are completed for your cluster:
•All hardware is installed in the rack.
•All hardware interconnections are set up as shown in Figure 1-1 and
Figure 1-3, and
listed in Table 1-4.
•All logical unit numbers (LUNs), redundant array of independent disk (RAID) groups, and storage
groups are created on the Dell|EMC Fibre Channel storage system.
•Storage groups are assigned to the nodes in the cluster.
Before continuing with the following sections, visually inspect all hardware and interconnections for
correct installation.
Fibre Channel Hardware and Software Configurations
•Each node must include the minimum hardware peripheral components as described in Table 1-2.
•Each node must have the following software installed:
–Red Hat Enterprise Linux software (see Table 1-1)
–Fibre Channel HBA driver
•The Fibre Channel storage system must be configured with the following:
–A minimum of three LUNs created and assigned to the cluster storage group (see Table 1-5)
–A minimum LUN size of 5 GB
Table 1-5. LUNs for the cluster storage group
First LUN: minimum size 512 MB; three partitions of 128 MB each; used for the voting disk, Oracle Cluster Registry (OCR), and storage processor (SP) file.
Second LUN: minimum size larger than the size of your database; one partition; used for the database.
Third LUN: minimum size at least twice the size of your second LUN; one partition; used for the Flash Recovery Area.
Cabling Your Storage System
You can configure your Oracle cluster storage system in a direct-attached configuration or a four-port
SAN-attached configuration, depending on your needs. See the following procedures for both
configurations.
Figure 1-2. Cabling in a Direct-Attached Fibre Channel Cluster
Direct-Attached Configuration
To configure your nodes in a direct-attached configuration (see Figure 1-2), perform the following steps:
1 Connect one optical cable from HBA0 on node 1 to port 0 of SP-A.
2 Connect one optical cable from HBA1 on node 1 to port 0 of SP-B.
3 Connect one optical cable from HBA0 on node 2 to port 1 of SP-A.
4 Connect one optical cable from HBA1 on node 2 to port 1 of SP-B.
Figure 1-3. Cabling in a SAN-Attached Fibre Channel Cluster
SAN-Attached Configuration
To configure your nodes in a four-port SAN-attached configuration (see Figure 1-3), perform the
following steps:
1 Connect one optical cable from SP-A port 0 to Fibre Channel switch 0.
2 Connect one optical cable from SP-A port 1 to Fibre Channel switch 1.
3 Connect one optical cable from SP-A port 2 to Fibre Channel switch 0.
4 Connect one optical cable from SP-A port 3 to Fibre Channel switch 1.
5 Connect one optical cable from SP-B port 0 to Fibre Channel switch 1.
6 Connect one optical cable from SP-B port 1 to Fibre Channel switch 0.
7 Connect one optical cable from SP-B port 2 to Fibre Channel switch 1.
8 Connect one optical cable from SP-B port 3 to Fibre Channel switch 0.
9 Connect one optical cable from HBA0 on node 1 to Fibre Channel switch 0.
10 Connect one optical cable from HBA1 on node 1 to Fibre Channel switch 1.
11 Connect one optical cable from HBA0 on node 2 to Fibre Channel switch 0.
12 Connect one optical cable from HBA1 on node 2 to Fibre Channel switch 1.
Configuring Storage and Networking for Oracle RAC 10g
This section provides information and procedures for setting up a Fibre Channel cluster running a
seed database:
•Configuring the public and private networks
•Securing your system
•Verifying the storage configuration
•Configuring shared storage for Cluster Ready Services (CRS) and Oracle Database
Oracle RAC 10g is a complex database configuration that requires an ordered list of procedures.
To configure networks and storage in a minimal amount of time, perform the following procedures in order.
Configuring the Public and Private Networks
This section presents steps to configure the public and private cluster networks.
NOTE: Each node requires a unique public and private internet protocol (IP) address and an additional public
IP address to serve as the virtual IP address for the client connections and connection failover. The virtual
IP address must belong to the same subnet as the public IP. All public IP addresses, including the virtual IP address,
should be registered with Domain Naming Service and routable.
Depending on the number of NIC ports available, configure the interfaces as shown in Table 1-6.
Table 1-6. NIC Port Assignments
NIC Port   Three Ports Available      Four Ports Available
1          Public IP and virtual IP   Public IP
2          Private IP (bonded)        Private IP (bonded)
3          Private IP (bonded)        Private IP (bonded)
4          NA                         Virtual IP
Configuring the Public Network
NOTE: Ensure that your public IP address is a valid, routable IP address.
If you have not already configured the public network, do so by performing the following steps on
each node:
1 Log in as root.
2 Edit the network device file /etc/sysconfig/network-scripts/ifcfg-eth#, where # is the number of the network device, and configure the public IP address, netmask, and boot settings in that file (a sample file is shown after this procedure).
3 Edit the /etc/sysconfig/network file and, if necessary, replace localhost.localdomain with the fully qualified public node name.
For example, the line for node 1 would be as follows:
HOSTNAME=node1.domain.com
4 Type:
service network restart
5 Type ifconfig to verify that the IP addresses are set correctly.
6 To check your network configuration, ping each public IP address from a client on the LAN outside the cluster.
7 Connect to each node to verify that the public network is functioning and type ssh <public IP> to verify that the secure shell (ssh) command is working.
command is working.
Before you deploy the cluster, configure the private cluster network to allow the nodes to communicate
with each other. This involves configuring network bonding and assigning a private IP address and
hostname to each node in the cluster.
To set up network bonding for Broadcom or Intel NICs and configure the private network, perform the
following steps on each node:
1 Log in as root.
2 Add the following line to the /etc/modprobe.conf file:
alias bond0 bonding
3 For high availability, edit the /etc/modprobe.conf file and set the option for link monitoring.
The default value for miimon is 0, which disables link monitoring. Change the value to 100 milliseconds initially, and adjust it as needed to improve performance as shown in the following example. Type:
options bonding miimon=100 mode=1
4 In the /etc/sysconfig/network-scripts/ directory, create or edit the ifcfg-bond0 configuration file, using your private network parameters (a sample file is shown after this procedure).
5 Type service network restart and ignore any warnings.
6 Type ifconfig to verify that the private interface is functioning.
The private IP address for the node should be assigned to the private interface bond0.
7 When the private IP addresses are set up on every node, ping each IP address from one node to ensure that the private network is functioning.
8 Connect to each node and verify that the private network and ssh are functioning correctly by typing:
ssh <private IP>
9 On each node, modify the /etc/hosts file by adding the following lines:
127.0.0.1 localhost.localdomain localhost
<private IP node1> <private hostname node1>
<private IP node2> <private hostname node2>
<public IP node1> <public hostname node1>
<public IP node2> <public hostname node2>
<virtual IP node1> <virtual hostname node1>
<virtual IP node2> <virtual hostname node2>
NOTE: The examples in this and the following step are for a two-node configuration; add lines for each additional node.
10 On each node, create or modify the /etc/hosts.equiv file by listing all of your public IP addresses or host names. For example, if you have one public hostname, one virtual IP address, and one virtual hostname for each node, add the following lines:
<virtual IP or hostname node1> oracle
<virtual IP or hostname node2> oracle
11 Log in as oracle, and connect to each node to verify that the remote shell (rsh) command is working by typing:
rsh <public hostname nodex>
where x is the node number.
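As referenced in step 4, a sketch of the ifcfg-bond0 file and one enslaved private NIC (the device names and addresses are placeholders; use the private network values planned for your cluster):
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=<private IP of this node>
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no

# /etc/sysconfig/network-scripts/ifcfg-eth1 (repeat for each private NIC enslaved to the bond)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
USERCTL=no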
Verifying the Storage Configuration
While configuring the clusters, create partitions on your Fibre Channel storage system. In order to create
the partitions, all the nodes must be able to detect the external storage devices. To verify that each node
can detect each storage LUN or logical disk, perform the following steps:
1
For Dell|EMC Fibre Channel storage system, verify that the EMC Navisphere® agent and the correct
version of PowerPath (see Table 1-7) are installed on each node, and that each node is assigned to the
correct storage group in your EMC Navisphere software. See the documentation that came with your
Dell|EMC Fibre Channel storage system for instructions.
NOTE: The Dell Professional Services representative who installed your cluster performed this step. If you
reinstall the software on a node, you must perform this step.
2
Visually verify that the storage devices and the nodes are connected correctly to the Fibre Channel
switch (see Figure 1-1 and Table 1-4).
3 Verify that you are logged in as root.
4 On each node, type:
more /proc/partitions
The node displays a list of the LUNs or logical disks that it detects, as well as the partitions created on those external devices. PowerPath pseudo devices appear in the list, such as /dev/emcpowera, /dev/emcpowerb, and /dev/emcpowerc.
NOTE: The listed devices vary depending on how your storage system is configured.
5 In the /proc/partitions file, ensure that:
•All PowerPath pseudo devices appear in the file with similar device names across all nodes.
For example, /dev/emcpowera, /dev/emcpowerb, and /dev/emcpowerc.
•The Fibre Channel LUNs appear as SCSI devices, and each node is configured with the same number of LUNs.
For example, if the node is configured with a SCSI drive or RAID container attached to a Fibre Channel storage device with three logical disks, sda identifies the node’s RAID container or internal drive, and emcpowera, emcpowerb, and emcpowerc identify the LUNs (or PowerPath pseudo devices).
If the external storage devices do not appear in the /proc/partitions file, reboot the node.
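One way to confirm that the PowerPath pseudo devices carry similar names on every node is to compare the device lists side by side; a sketch, assuming two nodes named node1 and node2 and working ssh access as root:
# Compare the PowerPath devices reported by each node
for n in node1 node2; do
    echo "=== $n ==="
    ssh $n "ls /dev/emcpower*; grep emcpower /proc/partitions"
done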
Disable SELinux
To run the Oracle database, you must disable SELinux.
To temporarily disable SELinux, perform the following steps:
1 Log in as root.
2 At the command prompt, type:
setenforce 0
To permanently disable SELinux, perform the following steps on all the nodes:
1 Open your grub.conf file.
2 Locate the kernel command line and append the following option:
selinux=0
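For reference, the resulting kernel line in grub.conf would look something like the following; the kernel version and root device are placeholders from a typical Red Hat Enterprise Linux 4 installation, and only the appended selinux=0 option matters:
kernel /vmlinuz-2.6.9-34.ELsmp ro root=LABEL=/ selinux=0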
Configuring Shared Storage Using the ASM Library Driver
1 Log in as root.
2 Open a terminal window and perform the following steps on all nodes:
a Type service oracleasm configure
b Type the following inputs for all the nodes:
Default user to own the driver interface [ ]: oracle
Default group to own the driver interface [ ]: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
3 On the first node, in the terminal window, type the following and press <Enter>:
service oracleasm createdisk ASM1 /dev/emcpowerb1
service oracleasm createdisk ASM2 /dev/emcpowerc1
4 Repeat step 3 for any additional ASM disks that need to be created.
5 Verify that the ASM disks are created and marked for ASM usage.
In the terminal window, type the following and press <Enter>:
service oracleasm listdisks
The disks that you created in step 3 appear.
For example:
ASM1
ASM2
6 Ensure that the remaining nodes are able to access the ASM disks that you created in step 3.
On each remaining node, open a terminal, type the following, and press <Enter>:
service oracleasm scandisks
The disks that you created in step 3 appear. If they do not appear, type udevstart on all the nodes and repeat step 4.
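As a quick spot check after step 5 or step 6, the oracleasm service script also accepts a querydisk action; a sketch run on any node, using the ASM1 label created in step 3:
# Report whether the ASM1 label is a marked ASM disk on this node
service oracleasm querydisk ASM1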
Installing Oracle RAC 10g
This section describes the steps required to install Oracle RAC 10g, which involves installing CRS and
installing the Oracle Database 10g software. Dell recommends that you create a seed database to verify
that the cluster works correctly before you deploy it in a production environment.
Before You Begin
To prevent failures during the installation procedure, configure all the nodes with identical system
clock settings.
Synchronize your node system clock with a Network Time Protocol (NTP) server. If you cannot access an
NTP server, perform one of the following procedures:
•Ensure that the system clock on the Oracle Database software installation node is set to a later time
than the remaining nodes.
•Configure one of your nodes as an NTP server to synchronize the remaining nodes in the cluster.
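If an NTP server is reachable, pointing each node at it amounts to a one-line change; a sketch, with the server address as a placeholder:
# Add your time source to /etc/ntp.conf, then restart and enable the service
echo "server <NTP server IP>" >> /etc/ntp.conf
service ntpd restart
chkconfig ntpd on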
where node1 and node2 are the public host names.
If your system is not configured correctly, see "Troubleshooting" for more information.
If your system is configured correctly, the following message appears:
Pre-check for database installation was successful.
2 As user root, type:
xhost +
3 As user root, mount the Oracle Database 10g CD.
4 Log in as oracle, and type:
<CD_mountpoint>/runInstaller
The Oracle Universal Installer starts.
5 In the Welcome window, click Next.
6 In the Select Installation Type window, select Enterprise Edition and click Next.
7 In the Specify Home Details window in the Path field, verify that the complete Oracle home path is /opt/oracle/product/10.2.0/db_1 and click Next.
NOTE: The Oracle home name in this step must be different from the Oracle home name that you identified during the CRS installation. You cannot install the Oracle 10g Enterprise Edition with RAC into the same home name that you used for CRS.
8 In the Specify Hardware Cluster Installation Mode window, click Select All and click Next.
9 In the Product-Specific Prerequisite Checks window, ensure that Succeeded appears in the Status column for each system check, and then click Next.
NOTE: In some cases, a warning may appear regarding swap size. Ignore the warning and click Yes to proceed.
10 In the Select Configuration Option window, select Install database Software only and click Next.
11 In the Summary window, click Install.
The Oracle Database software is installed on your cluster.
Next, the Execute Configuration Scripts window appears.
12 Follow the instructions in the window and click OK.
13 In the End of Installation window, click Exit.
RAC Post Deployment Fixes and Patches
This section provides the required fixes and patch information for deploying Oracle RAC 10g.
Reconfiguring the CSS Miscount for Proper EMC PowerPath Failover
When an HBA, switch, or EMC Storage Processor (SP) failure occurs, the total PowerPath failover time
to an alternate device may exceed 105 seconds. The default CSS disk time-out for Oracle 10g R2 version
10.2.0.1 is 60 seconds. To ensure that the PowerPath failover procedure functions correctly, increase the
CSS time-out to 120 seconds.
For more information, see Oracle Metalink Note 294430.1 on the Oracle Metalink website at
metalink.oracle.com.
To increase the CSS time-out:
1 Shut down the database and CRS on all nodes except on one node.
2 On the running node, log in as user root and type:
crsctl set css misscount 120
3 Reboot all nodes for the CSS setting to take effect.
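After the reboot, you can confirm the new value with crsctl; a quick check, run as root on any node:
# Should report 120 once the change has taken effect
crsctl get css misscount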
Installing the Oracle Database 10g 10.2.0.2 Patchset
Downloading and Extracting the Installation Software
1 On the first node, log in as oracle.
2 Create a folder for the patches and utilities at /opt/oracle/patches.
3 Open a web browser and navigate to the Oracle Support website at metalink.oracle.com.
4 Log in to your Oracle Metalink account.
5 Search for the patch number 4547817 with Linux x86-64 (AMD64/EM64T) as the platform.
6 Download the patch to the /opt/oracle/patches directory.
7 To unzip the downloaded zip file, type the following in a terminal window and press <Enter>:
unzip p4547817_10202_LINUX-x86-64.zip
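Steps 2, 6, and 7 reduce to the following terminal session on the first node; a sketch that assumes the patch zip from step 6 has been saved into the directory created in step 2:
# As the oracle user on the first node
mkdir -p /opt/oracle/patches
cd /opt/oracle/patches
unzip p4547817_10202_LINUX-x86-64.zip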
Upgrading Oracle Clusterware Installation
1 On the first node, log in as root.
2 Shut down Oracle Clusterware. To do so, type the following in the terminal window and press <Enter>:
crsctl stop crs
3 On the remaining nodes, open a terminal window and repeat step 1 and step 2.
4 On the first node, log in as oracle.
5 In the terminal window, type the following and press <Enter>:
export ORACLE_HOME=/crs/oracle/product/10.2.0/crs
6 Start the Oracle Universal Installer. To do so, type the following in the terminal window and press <Enter>:
cd /opt/oracle/patches/Disk1/
./runInstaller
The Welcome screen appears.
7 Click Next.
8 In the Specify Home Details screen, click Next.
9 In the Specify Hardware Cluster Installation Mode screen, click Next.
10 In the Summary screen, click Install.
The Oracle Universal Installer scans your system, displays all the patches that are required to be installed, and installs them on your system. When the installation is completed, the End of Installation screen appears.
NOTE: This procedure may take several minutes to complete.