
Red Hat Enterprise Linux 5
Cluster Administration
Configuring and Managing a Red Hat Cluster
Edition 4
Copyright © 2009 Red Hat, Inc. This material may only be distributed subject to the terms and conditions set forth in the Open Publication License, V1.0 or later (the latest version of the OPL is presently available at http://www.opencontent.org/openpub/).
Red Hat and the Red Hat "Shadow Man" logo are registered trademarks of Red Hat, Inc. in the United States and other countries.
All other trademarks referenced herein are the property of their respective owners.
1801 Varsity Drive Raleigh, NC 27606-2072 USA Phone: +1 919 754 3700 Phone: 888 733 4281 Fax: +1 919 754 3701 PO Box 13588 Research Triangle Park, NC 27709 USA
Configuring and Managing a Red Hat Cluster describes the configuration and management of Red Hat cluster systems for Red Hat Enterprise Linux 5. It does not include information about Red Hat Linux Virtual Servers (LVS). Information about installing and configuring LVS is in a separate document.
Introduction
1. Document Conventions
1.1. Typographic Conventions
1.2. Pull-quote Conventions
1.3. Notes and Warnings
2. Feedback
1. Red Hat Cluster Configuration and Management Overview
1.1. Configuration Basics
1.1.1. Setting Up Hardware
1.1.2. Installing Red Hat Cluster software
1.1.3. Configuring Red Hat Cluster Software
1.2. Conga
1.3. system-config-cluster Cluster Administration GUI
1.3.1. Cluster Configuration Tool
1.3.2. Cluster Status Tool
1.4. Command Line Administration Tools
2. Before Configuring a Red Hat Cluster
2.1. Compatible Hardware
2.2. Enabling IP Ports
2.2.1. Enabling IP Ports on Cluster Nodes
2.2.2. Enabling IP Ports on Computers That Run luci
2.3. Configuring ACPI For Use with Integrated Fence Devices
2.3.1. Disabling ACPI Soft-Off with chkconfig Management
2.3.2. Disabling ACPI Soft-Off with the BIOS
2.3.3. Disabling ACPI Completely in the grub.conf File
2.4. Considerations for Configuring HA Services
2.5. Configuring max_luns
2.6. Considerations for Using Quorum Disk
2.7. Red Hat Cluster Suite and SELinux
2.8. Multicast Addresses
2.9. Considerations for Using Conga
2.10. General Configuration Considerations
3. Configuring Red Hat Cluster With Conga
3.1. Configuration Tasks
3.2. Starting luci and ricci
3.3. Creating A Cluster
3.4. Global Cluster Properties
3.5. Configuring Fence Devices
3.5.1. Creating a Shared Fence Device
3.5.2. Modifying or Deleting a Fence Device
3.6. Configuring Cluster Members
3.6.1. Initially Configuring Members
3.6.2. Adding a Member to a Running Cluster
3.6.3. Deleting a Member from a Cluster
3.7. Configuring a Failover Domain
3.7.1. Adding a Failover Domain
3.7.2. Modifying a Failover Domain
3.8. Adding Cluster Resources
3.9. Adding a Cluster Service to the Cluster
3.10. Configuring Cluster Storage
4. Managing Red Hat Cluster With Conga
4.1. Starting, Stopping, and Deleting Clusters
4.2. Managing Cluster Nodes
4.3. Managing High-Availability Services
4.4. Diagnosing and Correcting Problems in a Cluster
5. Configuring Red Hat Cluster With system-config-cluster
5.1. Configuration Tasks
5.2. Starting the Cluster Configuration Tool
5.3. Configuring Cluster Properties
5.4. Configuring Fence Devices
5.5. Adding and Deleting Members
5.5.1. Adding a Member to a Cluster
5.5.2. Adding a Member to a Running Cluster
5.5.3. Deleting a Member from a Cluster
5.6. Configuring a Failover Domain
5.6.1. Adding a Failover Domain
5.6.2. Removing a Failover Domain
5.6.3. Removing a Member from a Failover Domain
5.7. Adding Cluster Resources
5.8. Adding a Cluster Service to the Cluster
5.9. Propagating The Configuration File: New Cluster
5.10. Starting the Cluster Software
6. Managing Red Hat Cluster With system-config-cluster
6.1. Starting and Stopping the Cluster Software
6.2. Managing High-Availability Services
6.3. Modifying the Cluster Configuration
6.4. Backing Up and Restoring the Cluster Database
6.5. Disabling the Cluster Software
6.6. Diagnosing and Correcting Problems in a Cluster
A. Example of Setting Up Apache HTTP Server
A.1. Apache HTTP Server Setup Overview
A.2. Configuring Shared Storage
A.3. Installing and Configuring the Apache HTTP Server
B. Fence Device Parameters
C. HA Resource Parameters
D. HA Resource Behavior
D.1. Parent, Child, and Sibling Relationships Among Resources
D.2. Sibling Start Ordering and Resource Child Ordering
D.2.1. Typed Child Resource Start and Stop Ordering
D.2.2. Non-typed Child Resource Start and Stop Ordering
D.3. Inheritance, the <resources> Block, and Reusing Resources
D.4. Failure Recovery and Independent Subtrees
D.5. Debugging and Testing Services and Resource Ordering
E. Upgrading A Red Hat Cluster from RHEL 4 to RHEL 5
F. Revision History
Index
Introduction
This document provides information about installing, configuring and managing Red Hat Cluster components. Red Hat Cluster components are part of Red Hat Cluster Suite and allow you to connect a group of computers (called nodes or members) to work together as a cluster. This document does not include information about installing, configuring, and managing Linux Virtual Server (LVS) software. Information about that is in a separate document.
The audience of this document should have advanced working knowledge of Red Hat Enterprise Linux and understand the concepts of clusters, storage, and server computing.
This document is organized as follows:
Chapter 1, Red Hat Cluster Configuration and Management Overview
Chapter 2, Before Configuring a Red Hat Cluster
Chapter 3, Configuring Red Hat Cluster With Conga
Chapter 4, Managing Red Hat Cluster With Conga
Chapter 5, Configuring Red Hat Cluster With system-config-cluster
Chapter 6, Managing Red Hat Cluster With system-config-cluster
Appendix A, Example of Setting Up Apache HTTP Server
Appendix B, Fence Device Parameters
Appendix C, HA Resource Parameters
Appendix D, HA Resource Behavior
Appendix E, Upgrading A Red Hat Cluster from RHEL 4 to RHEL 5
Appendix F, Revision History
For more information about Red Hat Enterprise Linux 5, refer to the following resources:
Red Hat Enterprise Linux Installation Guide — Provides information regarding installation of Red Hat Enterprise Linux 5.
Red Hat Enterprise Linux Deployment Guide — Provides information regarding the deployment, configuration and administration of Red Hat Enterprise Linux 5.
For more information about Red Hat Cluster Suite for Red Hat Enterprise Linux 5, refer to the following resources:
Red Hat Cluster Suite Overview — Provides a high level overview of the Red Hat Cluster Suite.
Logical Volume Manager Administration — Provides a description of the Logical Volume Manager (LVM), including information on running LVM in a clustered environment.
Global File System: Configuration and Administration — Provides information about installing, configuring, and maintaining Red Hat GFS (Red Hat Global File System).
Global File System 2: Configuration and Administration — Provides information about installing, configuring, and maintaining Red Hat GFS2 (Red Hat Global File System 2).
Using Device-Mapper Multipath — Provides information about using the Device-Mapper Multipath feature of Red Hat Enterprise Linux 5.
Using GNBD with Global File System — Provides an overview on using Global Network Block Device (GNBD) with Red Hat GFS.
Linux Virtual Server Administration — Provides information on configuring high-performance systems and services with the Linux Virtual Server (LVS).
Red Hat Cluster Suite Release Notes — Provides information about the current release of Red Hat Cluster Suite.
Red Hat Cluster Suite documentation and other Red Hat documents are available in HTML, PDF, and RPM versions on the Red Hat Enterprise Linux Documentation CD and online at http://www.redhat.com/docs/.
1. Document Conventions
This manual uses several conventions to highlight certain words and phrases and draw attention to specific pieces of information.
In PDF and paper editions, this manual uses typefaces drawn from the Liberation Fonts set (https://fedorahosted.org/liberation-fonts/). The Liberation Fonts set is also used in HTML editions if the set is installed on your system. If not, alternative but equivalent typefaces are displayed. Note: Red Hat Enterprise Linux 5 and later includes the Liberation Fonts set by default.
1.1. Typographic Conventions
Four typographic conventions are used to call attention to specific words and phrases. These conventions, and the circumstances they apply to, are as follows.
Mono-spaced Bold
Used to highlight system input, including shell commands, file names and paths. Also used to highlight key caps and key-combinations. For example:
To see the contents of the file my_next_bestselling_novel in your current working directory, enter the cat my_next_bestselling_novel command at the shell prompt and press Enter to execute the command.
The above includes a file name, a shell command and a key cap, all presented in Mono-spaced Bold and all distinguishable thanks to context.
Key-combinations can be distinguished from key caps by the hyphen connecting each part of a key-combination. For example:
Press Enter to execute the command.
Press Ctrl-Alt-F1 to switch to the first virtual terminal. Press Ctrl-Alt-F7 to return to your X-Windows session.
The first sentence highlights the particular key cap to press. The second highlights two sets of three key caps, each set pressed simultaneously.
If source code is discussed, class names, methods, functions, variable names and returned values mentioned within a paragraph will be presented as above, in Mono-spaced Bold. For example:
File-related classes include filesystem for file systems, file for files, and dir for directories. Each class has its own associated set of permissions.
Proportional Bold
This denotes words or phrases encountered on a system, including application names; dialogue box text; labelled buttons; check-box and radio button labels; menu titles and sub-menu titles. For example:
Choose System > Preferences > Mouse from the main menu bar to launch Mouse Preferences. In the Buttons tab, click the Left-handed mouse check box and click Close to switch the primary mouse button from the left to the right (making the mouse suitable for use in the left hand).
To insert a special character into a gedit file, choose Applications > Accessories > Character Map from the main menu bar. Next, choose Search > Find… from the Character Map menu bar, type the name of the character in the Search field and click Next. The character you sought will be highlighted in the Character Table. Double-click this highlighted character to place it in the Text to copy field and then click the Copy button. Now switch back to your document and choose Edit > Paste from the gedit menu bar.
The above text includes application names; system-wide menu names and items; application-specific menu names; and buttons and text found within a GUI interface, all presented in Proportional Bold and all distinguishable by context.
Note the > shorthand used to indicate traversal through a menu and its sub-menus. This is to avoid the difficult-to-follow 'Select Mouse from the Preferences sub-menu in the System menu of the main menu bar' approach.
Mono-spaced Bold Italic or Proportional Bold Italic
Whether Mono-spaced Bold or Proportional Bold, the addition of Italics indicates replaceable or variable text. Italics denotes text you do not input literally or displayed text that changes depending on circumstance. For example:
To connect to a remote machine using ssh, type ssh username@domain.name at a shell prompt. If the remote machine is example.com and your username on that machine is john, type ssh john@example.com.
The mount -o remount file-system command remounts the named file system. For example, to remount the /home file system, the command is mount -o remount /home.
To see the version of a currently installed package, use the rpm -q package command. It will return a result as follows: package-version-release.
Note the words in bold italics above — username, domain.name, file-system, package, version and release. Each word is a placeholder, either for text you enter when issuing a command or for text displayed by the system.
Aside from standard usage for presenting the title of a work, italics denotes the first use of a new and important term. For example:
When the Apache HTTP Server accepts requests, it dispatches child processes or threads to handle them. This group of child processes or threads is known as a server-pool. Under Apache HTTP Server 2.0, the responsibility for creating and maintaining these server-pools has been abstracted to a group of modules called Multi-Processing Modules (MPMs). Unlike other modules, only one module from the MPM group can be loaded by the Apache HTTP Server.
1.2. Pull-quote Conventions
Two, commonly multi-line, data types are set off visually from the surrounding text.
Output sent to a terminal is set in Mono-spaced Roman and presented thus:
books        Desktop    documentation  drafts  mss    photos   stuff  svn
books_tests  Desktop1   downloads      images  notes  scripts  svgs
Source-code listings are also set in Mono-spaced Roman but are presented and highlighted as follows:
package org.jboss.book.jca.ex1;

import javax.naming.InitialContext;

public class ExClient
{
   public static void main(String args[]) throws Exception
   {
      InitialContext iniCtx = new InitialContext();
      Object ref = iniCtx.lookup("EchoBean");
      EchoHome home = (EchoHome) ref;
      Echo echo = home.create();

      System.out.println("Created Echo");
      System.out.println("Echo.echo('Hello') = " + echo.echo("Hello"));
   }
}
1.3. Notes and Warnings
Finally, we use three visual styles to draw attention to information that might otherwise be overlooked.
Note
A Note is a tip or shortcut or alternative approach to the task at hand. Ignoring a note should have no negative consequences, but you might miss out on a trick that makes your life easier.
Important
Important boxes detail things that are easily missed: configuration changes that only apply to the current session, or services that need restarting before an update will apply. Ignoring Important boxes won't cause data loss but may cause irritation and frustration.
Warning
A Warning should not be ignored. Ignoring warnings will most likely cause data loss.
2. Feedback
If you spot a typo, or if you have thought of a way to make this manual better, we would love to hear from you. Please submit a report in Bugzilla (http://bugzilla.redhat.com/bugzilla/) against the component Documentation-cluster.
Be sure to mention the manual identifier:
Cluster_Administration(EN)-5 (2009-08-18T16:22)
By mentioning this manual's identifier, we know exactly which version of the guide you have.
If you have a suggestion for improving the documentation, try to be as specific as possible. If you have found an error, please include the section number and some of the surrounding text so we can find it easily.
Chapter 1. Red Hat Cluster Configuration and Management Overview
Red Hat Cluster allows you to connect a group of computers (called nodes or members) to work together as a cluster. You can use Red Hat Cluster to suit your clustering needs (for example, setting up a cluster for sharing files on a GFS file system or setting up service failover).
1.1. Configuration Basics
To set up a cluster, you must connect the nodes to certain cluster hardware and configure the nodes into the cluster environment. This chapter provides an overview of cluster configuration and management, and tools available for configuring and managing a Red Hat Cluster.
Configuring and managing a Red Hat Cluster consists of the following basic steps:
1. Setting up hardware. Refer to Section 1.1.1, “Setting Up Hardware”.
2. Installing Red Hat Cluster software. Refer to Section 1.1.2, “Installing Red Hat Cluster software”.
3. Configuring Red Hat Cluster Software. Refer to Section 1.1.3, “Configuring Red Hat Cluster Software”.
1.1.1. Setting Up Hardware
Setting up hardware consists of connecting cluster nodes to other hardware required to run a Red Hat Cluster. The amount and type of hardware varies according to the purpose and availability requirements of the cluster. Typically, an enterprise-level cluster requires the following type of hardware (refer to Figure 1.1, “Red Hat Cluster Hardware Overview”). For considerations about hardware and other cluster configuration concerns, refer to Chapter 2, Before Configuring a Red Hat Cluster, or check with an authorized Red Hat representative.
• Cluster nodes — Computers that are capable of running Red Hat Enterprise Linux 5 software, with at least 1GB of RAM.
• Ethernet switch or hub for public network — This is required for client access to the cluster.
• Ethernet switch or hub for private network — This is required for communication among the cluster nodes and other cluster hardware such as network power switches and Fibre Channel switches.
• Network power switch — A network power switch is recommended to perform fencing in an enterprise-level cluster.
• Fibre Channel switch — A Fibre Channel switch provides access to Fibre Channel storage. Other options are available for storage according to the type of storage interface; for example, iSCSI or GNBD. A Fibre Channel switch can be configured to perform fencing.
• Storage — Some type of storage is required for a cluster. The type required depends on the purpose of the cluster.
Figure 1.1. Red Hat Cluster Hardware Overview
1.1.2. Installing Red Hat Cluster software
To install Red Hat Cluster software, you must have entitlements for the software. If you are using the Conga configuration GUI, you can let it install the cluster software. If you are using other tools to configure the cluster, secure and install the software as you would with Red Hat Enterprise Linux software.
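For example, on a node that is already registered with the appropriate Red Hat Network channels, the core packages can be pulled in with yum. The package list below is only a sketch; the exact set depends on the features you plan to use (GFS, GNBD, and so on):

    yum install cman rgmanager system-config-cluster   # cluster infrastructure and HA service manager
    yum install ricci                                   # on nodes that will be managed through Conga
    yum install luci                                    # only on the computer that will run the Conga server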
1.1.3. Configuring Red Hat Cluster Software
Configuring Red Hat Cluster software consists of using configuration tools to specify the relationship among the cluster components. Figure 1.2, “Cluster Configuration Structure” shows an example of the hierarchical relationship among cluster nodes, high-availability services, and resources. The cluster nodes are connected to one or more fencing devices. Nodes can be grouped into a failover domain for a cluster service. The services comprise resources such as NFS exports, IP addresses, and shared GFS partitions.
Figure 1.2. Cluster Configuration Structure
The following cluster configuration tools are available with Red Hat Cluster:
• Conga — This is a comprehensive user interface for installing, configuring, and managing Red Hat clusters, computers, and storage attached to clusters and computers.
• system-config-cluster — This is a user interface for configuring and managing a Red Hat cluster.
• Command line tools — This is a set of command line tools for configuring and managing a Red Hat cluster.
A brief overview of each configuration tool is provided in the following sections:
Section 1.2, “Conga”
Section 1.3, “system-config-cluster Cluster Administration GUI”
Section 1.4, “Command Line Administration Tools”
In addition, information about using Conga and system-config-cluster is provided in subsequent chapters of this document. Information about the command line tools is available in the man pages for the tools.
1.2. Conga
Conga is an integrated set of software components that provides centralized configuration and management of Red Hat clusters and storage. Conga provides the following major features:
• One Web interface for managing cluster and storage
• Automated Deployment of Cluster Data and Supporting Packages
• Easy Integration with Existing Clusters
• No Need to Re-Authenticate
• Integration of Cluster Status and Logs
• Fine-Grained Control over User Permissions
The primary components in Conga are luci and ricci, which are separately installable. luci is a server that runs on one computer and communicates with multiple clusters and computers via ricci. ricci is an agent that runs on each computer (either a cluster member or a standalone computer) managed by Conga.
luci is accessible through a Web browser and provides three major functions that are accessible through the following tabs:
• homebase — Provides tools for adding and deleting computers, adding and deleting users, and configuring user privileges. Only a system administrator is allowed to access this tab.
• cluster — Provides tools for creating and configuring clusters. Each instance of luci lists clusters that have been set up with that luci. A system administrator can administer all clusters listed on this tab. Other users can administer only clusters that the user has permission to manage (granted by an administrator).
• storage — Provides tools for remote administration of storage. With the tools on this tab, you can manage storage on computers whether they belong to a cluster or not.
To administer a cluster or storage, an administrator adds (or registers) a cluster or a computer to a luci server. When a cluster or a computer is registered with luci, the FQDN hostname or IP address of each computer is stored in a luci database.
You can populate the database of one luci instance from another luci instance. That capability provides a means of replicating a luci server instance and provides an efficient upgrade and testing path. When you install an instance of luci, its database is empty. However, you can import part or all of a luci database from an existing luci server when deploying a new luci server.
Each luci instance has one user at initial installation — admin. Only the admin user may add systems to a luci server. Also, the admin user can create additional user accounts and determine which users are allowed to access clusters and computers registered in the luci database. It is possible to import users as a batch operation in a new luci server, just as it is possible to import clusters and computers.
When a computer is added to a luci server to be administered, authentication is done once. No authentication is necessary from then on (unless the certificate used is revoked by a CA). After that, you can remotely configure and manage clusters and storage through the luci user interface. luci and ricci communicate with each other via XML.
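As a minimal sketch of that initial setup (covered in detail in Chapter 3, Configuring Red Hat Cluster With Conga), ricci is started on each managed computer and luci is initialized and started on the management computer; the hostname in the URL is a placeholder:

    service ricci start && chkconfig ricci on    # on each cluster node or managed computer
    luci_admin init                              # on the luci server: creates the admin account
    service luci start && chkconfig luci on      # then browse to https://luci-server.example.com:8084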
The following figures show sample displays of the three major luci tabs: homebase, cluster, and storage.
For more information about Conga, refer to Chapter 3, Configuring Red Hat Cluster With Conga, Chapter 4, Managing Red Hat Cluster With Conga, and the online help available with the luci server.
Figure 1.3. luci homebase Tab
Figure 1.4. luci cluster Tab
Figure 1.5. luci storage Tab
1.3. system-config-cluster Cluster Administration GUI
This section provides an overview of the cluster administration graphical user interface (GUI) available with Red Hat Cluster Suite — system-config-cluster. It is for use with the cluster infrastructure and the high-availability service management components. system-config-cluster consists of two major functions: the Cluster Configuration Tool and the Cluster Status Tool. The Cluster Configuration Tool provides the capability to create, edit, and propagate the cluster configuration file (/etc/cluster/cluster.conf). The Cluster Status Tool provides the capability to manage high-availability services. The following sections summarize those functions.
Note
While system-config-cluster provides several convenient tools for configuring and managing a Red Hat Cluster, the newer, more comprehensive tool, Conga, provides more convenience and flexibility than system-config-cluster.
1.3.1. Cluster Configuration Tool
You can access the Cluster Configuration Tool (Figure 1.6, “Cluster Configuration Tool”) through the Cluster Configuration tab in the Cluster Administration GUI.
Figure 1.6. Cluster Configuration Tool
The Cluster Configuration Tool represents cluster configuration components in the configuration file (/etc/cluster/cluster.conf) with a hierarchical graphical display in the left panel. A triangle icon to the left of a component name indicates that the component has one or more subordinate components assigned to it. Clicking the triangle icon expands and collapses the portion of the tree below a component. The components displayed in the GUI are summarized as follows:
Cluster Nodes — Displays cluster nodes. Nodes are represented by name as subordinate elements under Cluster Nodes. Using configuration buttons at the bottom of the right frame (below Properties), you can add nodes, delete nodes, edit node properties, and configure fencing methods for each node.
Fence Devices — Displays fence devices. Fence devices are represented as subordinate elements under Fence Devices. Using configuration buttons at the bottom of the right frame (below Properties), you can add fence devices, delete fence devices, and edit fence-device properties. Fence devices must be defined before you can configure fencing (with the Manage Fencing For This Node button) for each node.
Managed Resources — Displays failover domains, resources, and services.
Failover Domains — For configuring one or more subsets of cluster nodes used to run a high-availability service in the event of a node failure. Failover domains are represented as subordinate elements under Failover Domains. Using configuration buttons at the bottom of the right frame (below Properties), you can create failover domains (when Failover Domains is selected) or edit failover domain properties (when a failover domain is selected).
Resources — For configuring shared resources to be used by high-availability services. Shared resources consist of file systems, IP addresses, NFS mounts and exports, and user-created scripts that are available to any high-availability service in the cluster. Resources are represented as subordinate elements under Resources. Using configuration buttons at the bottom of the right frame (below Properties), you can create resources (when Resources is selected) or edit resource properties (when a resource is selected).
Note
The Cluster Configuration Tool provides the capability to configure private resources, also. A private resource is a resource that is configured for use with only one service. You can configure a private resource within a Service component in the GUI.
Services — For creating and configuring high-availability services. A service is configured by assigning resources (shared or private), assigning a failover domain, and defining a recovery policy for the service. Services are represented as subordinate elements under Services. Using configuration buttons at the bottom of the right frame (below Properties), you can create services (when Services is selected) or edit service properties (when a service is selected).
1.3.2. Cluster Status Tool
You can access the Cluster Status Tool (Figure 1.7, “Cluster Status Tool”) through the Cluster Management tab in the Cluster Administration GUI.
Figure 1.7. Cluster Status Tool
The nodes and services displayed in the Cluster Status Tool are determined by the cluster configuration file (/etc/cluster/cluster.conf). You can use the Cluster Status Tool to enable, disable, restart, or relocate a high-availability service.
1.4. Command Line Administration Tools
In addition to Conga and the system-config-cluster Cluster Administration GUI, command line tools are available for administering the cluster infrastructure and the high-availability service management components. The command line tools are used by the Cluster Administration GUI and init scripts supplied by Red Hat. Table 1.1, “Command Line Tools” summarizes the command line tools.
ccs_tool — Cluster Configuration System Tool
Used With: Cluster Infrastructure
Purpose: ccs_tool is a program for making online updates to the cluster configuration file. It provides the capability to create and modify cluster infrastructure components (for example, creating a cluster, adding and removing a node). For more information about this tool, refer to the ccs_tool(8) man page.

cman_tool — Cluster Management Tool
Used With: Cluster Infrastructure
Purpose: cman_tool is a program that manages the CMAN cluster manager. It provides the capability to join a cluster, leave a cluster, kill a node, or change the expected quorum votes of a node in a cluster. For more information about this tool, refer to the cman_tool(8) man page.

fence_tool — Fence Tool
Used With: Cluster Infrastructure
Purpose: fence_tool is a program used to join or leave the default fence domain. Specifically, it starts the fence daemon (fenced) to join the domain and kills fenced to leave the domain. For more information about this tool, refer to the fence_tool(8) man page.

clustat — Cluster Status Utility
Used With: High-availability Service Management Components
Purpose: The clustat command displays the status of the cluster. It shows membership information, quorum view, and the state of all configured user services. For more information about this tool, refer to the clustat(8) man page.

clusvcadm — Cluster User Service Administration Utility
Used With: High-availability Service Management Components
Purpose: The clusvcadm command allows you to enable, disable, relocate, and restart high-availability services in a cluster. For more information about this tool, refer to the clusvcadm(8) man page.

Table 1.1. Command Line Tools
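For example, a quick status check and a few service operations run from a cluster node might look like the following; the service and member names are placeholders:

    clustat                                     # membership information, quorum view, and service states
    clusvcadm -r content-webserver -m node-D    # relocate the service to member node-D
    clusvcadm -d content-webserver              # disable (stop) the service
    clusvcadm -e content-webserver              # enable (start) the service again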
Chapter 2. Before Configuring a Red Hat Cluster
This chapter describes tasks to perform and considerations to make before installing and configuring a Red Hat Cluster, and consists of the following sections:
Section 2.1, “Compatible Hardware”
Section 2.2, “Enabling IP Ports”
Section 2.3, “Configuring ACPI For Use with Integrated Fence Devices”
Section 2.4, “Considerations for Configuring HA Services”
Section 2.5, “Configuring max_luns”
Section 2.6, “Considerations for Using Quorum Disk”
Section 2.7, “Red Hat Cluster Suite and SELinux”
Section 2.8, “Multicast Addresses”
Section 2.9, “Considerations for Using Conga”
Section 2.10, “General Configuration Considerations”
2.1. Compatible Hardware
Before configuring Red Hat Cluster software, make sure that your cluster uses appropriate hardware (for example, supported fence devices, storage devices, and Fibre Channel switches). Refer to the hardware configuration guidelines at http://www.redhat.com/cluster_suite/hardware/ for the most current hardware compatibility information.
2.2. Enabling IP Ports
Before deploying a Red Hat Cluster, you must enable certain IP ports on the cluster nodes and on computers that run luci (the Conga user interface server). The following sections identify the IP ports to be enabled:
Section 2.2.1, “Enabling IP Ports on Cluster Nodes”
Section 2.2.2, “Enabling IP Ports on Computers That Run luci”
2.2.1. Enabling IP Ports on Cluster Nodes
To allow Red Hat Cluster nodes to communicate with each other, you must enable the IP ports assigned to certain Red Hat Cluster components. Table 2.1, “Enabled IP Ports on Red Hat Cluster Nodes” lists the IP port numbers, their respective protocols, and the components to which the port numbers are assigned. At each cluster node, enable IP ports according to Table 2.1, “Enabled IP Ports on Red Hat Cluster Nodes”.
Note
IPv6 is not supported for Cluster Suite in Red Hat Enterprise Linux 5.
IP Port Number         Protocol   Component
5404, 5405             UDP        cman (Cluster Manager)
11111                  TCP        ricci (part of Conga remote agent)
14567                  TCP        gnbd (Global Network Block Device)
16851                  TCP        modclusterd (part of Conga remote agent)
21064                  TCP        dlm (Distributed Lock Manager)
50006, 50008, 50009    TCP        ccsd (Cluster Configuration System daemon)
50007                  UDP        ccsd (Cluster Configuration System daemon)
Table 2.1. Enabled IP Ports on Red Hat Cluster Nodes
Note
Table 2.1, “Enabled IP Ports on Red Hat Cluster Nodes” shows no IP ports to enable for rgmanager. For RHEL 5.1 and later, rgmanager does not use TCP or UDP sockets.
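How you enable the ports depends on your firewall configuration. As one hedged example, on a node whose firewall is managed with iptables, rules along the following lines open the ports listed in Table 2.1; adapt them to your existing rule set before saving:

    iptables -I INPUT -p udp --dport 5404:5405 -j ACCEPT                        # cman
    iptables -I INPUT -p tcp --dport 11111 -j ACCEPT                            # ricci
    iptables -I INPUT -p tcp --dport 14567 -j ACCEPT                            # gnbd
    iptables -I INPUT -p tcp --dport 16851 -j ACCEPT                            # modclusterd
    iptables -I INPUT -p tcp --dport 21064 -j ACCEPT                            # dlm
    iptables -I INPUT -p tcp -m multiport --dports 50006,50008,50009 -j ACCEPT  # ccsd (TCP)
    iptables -I INPUT -p udp --dport 50007 -j ACCEPT                            # ccsd (UDP)
    service iptables save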
2.2.2. Enabling IP Ports on Computers That Run luci
To allow client computers to communicate with a computer that runs luci (the Conga user interface server), and to allow a computer that runs luci to communicate with ricci in the cluster nodes, you must enable the IP ports assigned to luci and ricci. Table 2.2, “Enabled IP Ports on a Computer That Runs luci” lists the IP port numbers, their respective protocols, and the components to which the port numbers are assigned. At each computer that runs luci, enable IP ports according to Table 2.2, “Enabled IP Ports on a Computer That Runs luci”.
Note
If a cluster node is running luci, port 11111 should already have been enabled.
IP Port Number   Protocol   Component
8084             TCP        luci (Conga user interface server)
11111            TCP        ricci (Conga remote agent)
Table 2.2. Enabled IP Ports on a Computer That Runs luci
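Assuming the same iptables-based firewall, a minimal sketch for the computer running luci would be:

    iptables -I INPUT -p tcp --dport 8084 -j ACCEPT    # luci Web interface
    iptables -I INPUT -p tcp --dport 11111 -j ACCEPT   # ricci, only if this computer is also a managed node
    service iptables save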
2.3. Configuring ACPI For Use with Integrated Fence Devices
If your cluster uses integrated fence devices, you must configure ACPI (Advanced Configuration and Power Interface) to ensure immediate and complete fencing.
Note
For the most current information about integrated fence devices supported by Red Hat Cluster Suite, refer to http://www.redhat.com/cluster_suite/hardware/.
If a cluster node is configured to be fenced by an integrated fence device, disable ACPI Soft-Off for that node. Disabling ACPI Soft-Off allows an integrated fence device to turn off a node immediately and completely rather than attempting a clean shutdown (for example, shutdown -h now). Otherwise, if ACPI Soft-Off is enabled, an integrated fence device can take four or more seconds to turn off a node (refer to the note that follows). In addition, if ACPI Soft-Off is enabled and a node panics or freezes during shutdown, an integrated fence device may not be able to turn off the node. Under those circumstances, fencing is delayed or unsuccessful. Consequently, when a node is fenced with an integrated fence device and ACPI Soft-Off is enabled, a cluster recovers slowly or requires administrative intervention to recover.
Note
The amount of time required to fence a node depends on the integrated fence device used. Some integrated fence devices perform the equivalent of pressing and holding the power button; therefore, the fence device turns off the node in four to five seconds. Other integrated fence devices perform the equivalent of pressing the power button momentarily, relying on the operating system to turn off the node; therefore, the fence device turns off the node in a time span much longer than four to five seconds.
To disable ACPI Soft-Off, use chkconfig management and verify that the node turns off immediately when fenced. The preferred way to disable ACPI Soft-Off is with chkconfig management; however, if that method is not satisfactory for your cluster, you can disable ACPI Soft-Off with one of the following alternate methods:
• Changing the BIOS setting to "instant-off" or an equivalent setting that turns off the node without delay
Note
Disabling ACPI Soft-Off with the BIOS may not be possible with some computers.
• Appending acpi=off to the kernel boot command line of the /boot/grub/grub.conf file
Important
This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods are not effective for your cluster.
The following sections provide procedures for the preferred method and alternate methods of disabling ACPI Soft-Off:
Section 2.3.1, “Disabling ACPI Soft-Off with chkconfig Management” — Preferred method
Section 2.3.2, “Disabling ACPI Soft-Off with the BIOS” — First alternate method
Section 2.3.3, “Disabling ACPI Completely in the grub.conf File” — Second alternate method
2.3.1. Disabling ACPI Soft-Off with chkconfig Management
You can use chkconfig management to disable ACPI Soft-Off either by removing the ACPI daemon (acpid) from chkconfig management or by turning off acpid.
Note
This is the preferred method of disabling ACPI Soft-Off.
Disable ACPI Soft-Off with chkconfig management at each cluster node as follows:
1. Run either of the following commands:
chkconfig --del acpid — This command removes acpid from chkconfig management.
— OR —
chkconfig --level 2345 acpid off — This command turns off acpid.
2. Reboot the node.
3. When the cluster is configured and running, verify that the node turns off immediately when fenced.
Note
You can fence the node with the fence_node command or Conga.
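The following is a condensed sketch of the procedure; the node name is an example:

    chkconfig --del acpid            # or: chkconfig --level 2345 acpid off
    reboot
    # Once the cluster is configured and running, fence the node from another member:
    fence_node node1.example.com     # the fenced node should power off immediately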
2.3.2. Disabling ACPI Soft-Off with the BIOS
The preferred method of disabling ACPI Soft-Off is with chkconfig management (Section 2.3.1, “Disabling ACPI Soft-Off with chkconfig Management”). However, if the preferred method is not effective for your cluster, follow the procedure in this section.
Note
Disabling ACPI Soft-Off with the BIOS may not be possible with some computers.
You can disable ACPI Soft-Off by configuring the BIOS of each cluster node as follows:
1. Reboot the node and start the BIOS CMOS Setup Utility program.
2. Navigate to the Power menu (or equivalent power management menu).
3. At the Power menu, set the Soft-Off by PWR-BTTN function (or equivalent) to Instant-Off (or the equivalent setting that turns off the node via the power button without delay). Example 2.1, “BIOS CMOS Setup Utility: Soft-Off by PWR-BTTN set to Instant-Off” shows a Power menu with ACPI Function set to Enabled and Soft-Off by PWR-BTTN set to Instant-Off.
Note
The equivalents to ACPI Function, Soft-Off by PWR-BTTN, and Instant-Off may vary among computers. However, the objective of this procedure is to configure the BIOS so that the computer is turned off via the power button without delay.
4. Exit the BIOS CMOS Setup Utility program, saving the BIOS configuration.
5. When the cluster is configured and running, verify that the node turns off immediately when fenced.
Note
You can fence the node with the fence_node command or Conga.
+---------------------------------------------|-------------------+
|    ACPI Function               [Enabled]    |    Item Help      |
|    ACPI Suspend Type           [S1(POS)]    |-------------------|
|  x Run VGABIOS if S3 Resume    Auto         |   Menu Level   *  |
|    Suspend Mode                [Disabled]   |                   |
|    HDD Power Down              [Disabled]   |                   |
|    Soft-Off by PWR-BTTN        [Instant-Off |                   |
|    CPU THRM-Throttling         [50.0%]      |                   |
|    Wake-Up by PCI card         [Enabled]    |                   |
|    Power On by Ring            [Enabled]    |                   |
|    Wake Up On LAN              [Enabled]    |                   |
|  x USB KB Wake-Up From S3      Disabled     |                   |
|    Resume by Alarm             [Disabled]   |                   |
|  x Date(of Month) Alarm        0            |                   |
|  x Time(hh:mm:ss) Alarm        0 : 0 :      |                   |
|    POWER ON Function           [BUTTON ONLY |                   |
|  x KB Power ON Password        Enter        |                   |
|  x Hot Key Power ON            Ctrl-F1      |                   |
|                                             |                   |
|                                             |                   |
+---------------------------------------------|-------------------+
This example shows ACPI Function set to Enabled, and Soft-Off by PWR-BTTN set to Instant-Off.
Example 2.1. BIOS CMOS Setup Utility: Soft-Off by PWR-BTTN set to Instant-Off
2.3.3. Disabling ACPI Completely in the grub.conf File
The preferred method of disabling ACPI Soft-Off is with chkconfig management (Section 2.3.1, “Disabling ACPI Soft-Off with chkconfig Management”). If the preferred method is not effective for your cluster, you can disable ACPI Soft-Off with the BIOS power management (Section 2.3.2, “Disabling ACPI Soft-Off with the BIOS”). If neither of those methods is effective for your cluster, you can disable ACPI completely by appending acpi=off to the kernel boot command line in the grub.conf file.
Important
This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled. Use this method only if the other methods are not effective for your cluster.
You can disable ACPI completely by editing the grub.conf file of each cluster node as follows:
1. Open /boot/grub/grub.conf with a text editor.
2. Append acpi=off to the kernel boot command line in /boot/grub/grub.conf (refer to Example 2.2, “Kernel Boot Command Line with acpi=off Appended to It”).
3. Reboot the node.
4. When the cluster is configured and running, verify that the node turns off immediately when fenced.
Note
You can fence the node with the fence_node command or Conga.
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
#          initrd /initrd-version.img
#boot=/dev/hda
default=0
timeout=5
serial --unit=0 --speed=115200
terminal --timeout=5 serial console
title Red Hat Enterprise Linux Server (2.6.18-36.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-36.el5 ro root=/dev/VolGroup00/LogVol00 console=ttyS0,115200n8 acpi=off
        initrd /initrd-2.6.18-36.el5.img
In this example, acpi=off has been appended to the kernel boot command line — the line starting with "kernel /vmlinuz-2.6.18-36.el5".
Example 2.2. Kernel Boot Command Line with acpi=off Appended to It
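After the node reboots, one simple way to confirm that the running kernel was booted with the option is to check the kernel command line:

    grep acpi=off /proc/cmdline      # should print the boot line containing acpi=off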
2.4. Considerations for Configuring HA Services
You can create a cluster to suit your needs for high availability by configuring HA (high-availability) services. The key component for HA service management in a Red Hat cluster, rgmanager, implements cold failover for off-the-shelf applications. In a Red Hat cluster, an application is configured with other cluster resources to form an HA service that can fail over from one cluster node to another with no apparent interruption to cluster clients. HA-service failover can occur if a cluster node fails or if a cluster system administrator moves the service from one cluster node to another (for example, for a planned outage of a cluster node).
To create an HA service, you must configure it in the cluster configuration file. An HA service comprises cluster resources. Cluster resources are building blocks that you create and manage in the cluster configuration file — for example, an IP address, an application initialization script, or a Red Hat GFS shared partition.
An HA service can run on only one cluster node at a time to maintain data integrity. You can specify failover priority in a failover domain. Specifying failover priority consists of assigning a priority level to each node in a failover domain. The priority level determines the failover order — determining the node to which an HA service should fail over. If you do not specify failover priority, an HA service can fail over to any node in its failover domain. Also, you can specify if an HA service is restricted to run only on nodes of its associated failover domain. (When associated with an unrestricted failover domain, an HA service can start on any cluster node in the event no member of the failover domain is available.)
Figure 2.1, “Web Server Cluster Service Example” shows an example of an HA service that is a web server named "content-webserver". It is running in cluster node B and is in a failover domain that consists of nodes A, B, and D. In addition, the failover domain is configured with a failover priority to fail over to node D before node A and to restrict failover to nodes only in that failover domain. The HA service comprises these cluster resources:
• IP address resource — IP address 10.10.10.201.
• An application resource named "httpd-content" — a web server application init script /etc/init.d/httpd (specifying httpd).
• A file system resource — Red Hat GFS named "gfs-content-webserver".
Figure 2.1. Web Server Cluster Service Example
Clients access the HA service through the IP address 10.10.10.201, enabling interaction with the web server application, httpd-content. The httpd-content application uses the gfs-content-webserver file system. If node B were to fail, the content-webserver HA service would fail over to node D. If node D were not available or also failed, the service would fail over to node A. Failover would occur with minimal service interruption to the cluster clients. For example, in an HTTP service, certain state information may be lost (like session data). The HA service would be accessible from another cluster node via the same IP address as it was before failover.
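As an illustration only, the rgmanager portion of /etc/cluster/cluster.conf for a service like content-webserver might look roughly like the following. This is a sketch rather than the manual's own example: the device, mount point, and node names are placeholders, and rgmanager treats a lower priority number as a more preferred node (node B first, then D, then A):

    <rm>
      <failoverdomains>
        <failoverdomain name="content-domain" ordered="1" restricted="1">
          <failoverdomainnode name="node-A" priority="3"/>
          <failoverdomainnode name="node-B" priority="1"/>
          <failoverdomainnode name="node-D" priority="2"/>
        </failoverdomain>
      </failoverdomains>
      <resources>
        <ip address="10.10.10.201" monitor_link="1"/>
        <script name="httpd-content" file="/etc/init.d/httpd"/>
        <clusterfs name="gfs-content-webserver" fstype="gfs" device="/dev/vg01/lv_web" mountpoint="/var/www"/>
      </resources>
      <service name="content-webserver" domain="content-domain" autostart="1" recovery="relocate">
        <ip ref="10.10.10.201"/>
        <clusterfs ref="gfs-content-webserver"/>
        <script ref="httpd-content"/>
      </service>
    </rm>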
Note
For more information about HA services and failover domains, refer to Red Hat Cluster Suite Overview. For information about configuring failover domains, refer to Section 3.7, “Configuring a Failover Domain” (using Conga) or Section 5.6, “Configuring a Failover Domain” (using system-config-cluster).
An HA service is a group of cluster resources configured into a coherent entity that provides specialized services to clients. An HA service is represented as a resource tree in the cluster configuration file, /etc/cluster/cluster.conf (in each cluster node). In the cluster configuration file, each resource tree is an XML representation that specifies each resource, its attributes, and its relationship among other resources in the resource tree (parent, child, and sibling relationships).
Note
Because an HA service consists of resources organized into a hierarchical tree, a service is sometimes referred to as a resource tree or resource group. Both phrases are synonymous with HA service.
At the root of each resource tree is a special type of resource — a service resource. Other types of resources comprise the rest of a service, determining its characteristics. Configuring an HA service consists of creating a service resource, creating subordinate cluster resources, and organizing them into a coherent entity that conforms to hierarchical restrictions of the service.
Red Hat Cluster supports the following HA services:
• Apache
• Application (Script)
• LVM (HA LVM)
• MySQL
• NFS
• Open LDAP
• Oracle
• PostgreSQL 8
• Samba
• SAP
• Tomcat 5
There are two major considerations to take into account when configuring an HA service:
• The types of resources needed to create a service
• Parent, child, and sibling relationships among resources
The types of resources and the hierarchy of resources depend on the type of service you are configuring.
The types of cluster resources are listed in Appendix C, HA Resource Parameters. Information about parent, child, and sibling relationships among resources is described in Appendix D, HA Resource Behavior.
2.5. Configuring max_luns
It is not necessary to configure max_luns in Red Hat Enterprise Linux 5.
In Red Hat Enterprise Linux releases prior to Red Hat Enterprise Linux 5, if RAID storage in a cluster presents multiple LUNs, it is necessary to enable access to those LUNs by configuring max_luns (or max_scsi_luns for 2.4 kernels) in the /etc/modprobe.conf file of each node. In Red Hat Enterprise Linux 5, cluster nodes detect multiple LUNs without intervention required; it is not necessary to configure max_luns to detect multiple LUNs.
2.6. Considerations for Using Quorum Disk
Quorum Disk is a disk-based quorum daemon, qdiskd, that provides supplemental heuristics to determine node fitness. With heuristics you can determine factors that are important to the operation of the node in the event of a network partition. For example, in a four-node cluster with a 3:1 split, ordinarily, the three nodes automatically "win" because of the three-to-one majority. Under those circumstances, the one node is fenced. With qdiskd however, you can set up heuristics that allow the one node to win based on access to a critical resource (for example, a critical network path). If your cluster requires additional methods of determining node health, then you should configure qdiskd to meet those needs.
Note
Configuring qdiskd is not required unless you have special requirements for node health. An example of a special requirement is an "all-but-one" configuration. In an all-but-one configuration, qdiskd is configured to provide enough quorum votes to maintain quorum even though only one node is working.
Important
Overall, heuristics and other qdiskd parameters for your Red Hat Cluster depend on the site environment and special requirements needed. To understand the use of heuristics and other qdiskd parameters, refer to the qdisk(5) man page. If you require assistance understanding and using qdiskd for your site, contact an authorized Red Hat support representative.
If you need to use qdiskd, you should take into account the following considerations:
Cluster node votes
Each cluster node should have the same number of votes.
CMAN membership timeout value
The CMAN membership timeout value (the time a node needs to be unresponsive before CMAN considers that node to be dead, and not a member) should be at least two times that of the qdiskd membership timeout value. The reason is because the quorum daemon must detect failed nodes on its own, and can take much longer to do so than CMAN. The default value for CMAN membership timeout is 10 seconds. Other site-specific conditions may affect the relationship between the membership timeout values of CMAN and qdiskd. For assistance with adjusting the CMAN membership timeout value, contact an authorized Red Hat support representative.
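As a purely illustrative sketch, a quorumd definition in /etc/cluster/cluster.conf might resemble the following; the heuristic program, interval, tko, and votes values are assumptions that must be tuned for your site with the qdisk(5) man page and Red Hat support guidance:

    <quorumd interval="1" tko="10" votes="3" label="cluster_qdisk">
      <heuristic program="ping -c1 -w1 10.10.10.254" score="1" interval="2"/>
    </quorumd>

With interval="1" and tko="10", qdiskd declares a node dead after roughly 10 seconds, so the CMAN membership timeout would need to be raised to at least twice that, per the guidance above.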