HP Integrity rx5670 Cluster Installation and Configuration Guide

Cluster Installation and Configuration Guide
HP Integrity Servers with Microsoft® Windows® Server 2003
Manufacturing Part Number: 5991-3694
January 2007
© Copyright 2007 Hewlett-Packard Development Company, L.P. All rights reserved.
Microsoft and Windows are trademarks of Microsoft Corporation in the U.S. and other countries.
Hewlett-Packard Company shall not be liable for technical or editorial errors or omissions contained herein. The information in this document is provided “as is” without warranty of any kind and is subject to change without notice. The warranties for HP products are set forth in the express limited warranty statements accompanying such products. Nothing herein should be construed as constituting an additional warranty.
Contents
1. Introduction
Clustering overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Server Cluster vs. Network Load Balancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Server Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Network Load Balancing (NLB). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Cluster terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Cluster service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Shared disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Resource dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Quorums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Heartbeats. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Virtual servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Failback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18


2. Setup, configuration, validation, and maintenance of the cluster
Verifying minimum software and hardware requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Gathering all required installation information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Creating and configuring the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Configuring the public and private networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Preparing node 1 for clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Configuring the shared storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Preparing node 2+ for clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Creating the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Joining node 2+ to the cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Configuring private/public network role and priority settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Validating cluster operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Method 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Method 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Upgrading individual nodes in the future . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

1 Introduction

This document describes how to install and configure clustered computing solutions using HP Integrity servers running Microsoft® Windows® Server 2003.
Some of the clustering improvements for Microsoft Windows Server 2003, 64-bit Edition (over Microsoft Windows 2000) include:
Larger cluster sizes—64-bit Enterprise and Datacenter Editions now support up to 8 nodes.
Enhanced cluster installation wizard—Built-in validation and verification functionality helps ensure base components are ready to be clustered.
Installation—Clustering software is automatically copied during operating system installation.
Multi-node addition—Multiple nodes can now be added in a single operation instead of one by one.
Active Directory integration—Tighter integration, including a “virtual” computer object, Kerberos authentication, and a default location for services to publish service control points. Users can access the virtual server just like any other Windows server (see the example below).
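Because the cluster publishes an ordinary computer object, clients reach it with the same tools they would use for a physical server. The following is a minimal sketch from a Windows command prompt; the network name CLUSTER1 is a hypothetical placeholder, and dsquery is only present where the Windows Server 2003 administration tools are installed.

    REM Verify that the cluster's virtual network name resolves and responds
    ping CLUSTER1

    REM Browse shares published under the virtual server name
    net view \\CLUSTER1

    REM Optionally confirm the virtual computer object exists in Active Directory
    dsquery computer -name CLUSTER1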

Clustering overview
A cluster is a group of individual servers, or nodes, configured to appear as a single, virtual server. The nodes making up the cluster generally run a common set of applications. They are physically connected by cables, and programmatically connected by the clustering software. Together these nodes appear as a single system to both users and applications.
Clusters provide the following advantages over stand-alone servers:
High availability—Clusters are designed to avoid single points of failure. Applications can be distributed over more than one node, achieving a high degree of parallelism and failure recovery.
Manageability—Clusters appear as a single system to end users, applications, and the network, while providing a single point of control for administrators, either locally or remotely.
Scalability—You can increase the cluster's computing power by adding more processors or computers. Applications can also be scaled according to need as a company grows.
Because of the inherent redundancy of hardware and software in a cluster, businesses are protected from system downtime due to single points of failure, power outages, and natural disasters, as well as from downtime during routine system maintenance or upgrades. In addition, clusters help businesses eliminate penalties and other costs associated with not being able to meet the Service Level Agreements they are contracted to provide.
A cluster is similar to a general distributed system, except that it provides the following additional capabilities:
1. Every node has full connectivity and communication with the other nodes in the cluster through the following methods:
Hard disks on a shared bus—One or more shared buses are used for storage. Each shared bus attaches one or more disks that hold data used to manage the cluster. Cluster service provides a dual-access storage model whereby multiple systems in the cluster can access the same storage.
Private network—One or more private networks, or interconnects, carry internal cluster communication only (called “heartbeats”). At least one private network is required.
Public network—One or more public networks can be used as a backup for the private network and can be used both for internal cluster communication and to host client applications. Network adapters, known to the cluster as network interfaces, attach nodes to networks.
2. Each node tracks the cluster configuration. Every node in the cluster is aware when another system joins or leaves the cluster.
3. Every node in the cluster is aware of the resources that are running locally as well as the resources that are running on the other nodes. (One way to inspect this shared view from any node's command line is sketched in the example below.)
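As a hedged illustration of that shared view, the cluster.exe command-line tool installed with the Cluster service can report node, network, and resource state from any member node. Output details vary by service pack; this is only a sketch and assumes the node has already joined a cluster.

    REM List every node in the cluster and whether it is Up, Down, or Paused
    cluster node /status

    REM List the cluster networks (public and private/heartbeat) and their current states
    cluster network /status

    REM List all resources, the group each belongs to, and the node that currently owns it
    cluster resource /status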
Clustered systems can be created from nodes having different numbers of CPUs, from nodes whose CPUs run at different clock speeds, and even from different Integrity platforms; HP regularly tests, qualifies, and certifies such diverse configurations. The only restrictions are that each node must be an HP Integrity platform, and that every node must have the same Host Bus Adapters (HBAs), HBA drivers, and HBA firmware.
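One hedged way to compare HBA driver versions across nodes is the built-in driverquery tool; firmware levels are normally reported by the HBA vendor's own utility rather than by Windows, so treat this sketch as covering the driver check only. The "fibre" filter string is just an example and should be replaced with text from your HBA's device name.

    REM Run on each node and compare the output; driver names and versions should match
    driverquery /v | findstr /i "fibre"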

Server Cluster vs. Network Load Balancing
Windows Server 2003 provides two types of clustering services:
Server Cluster—Available only in Windows Server 2003, Enterprise Edition or Datacenter Edition, this service provides high availability and scalability for mission-critical applications such as databases, messaging systems, and file and print services. The servers (nodes) in the cluster remain in constant communication. If one of the nodes becomes unavailable as a result of failure or maintenance, another node immediately begins providing service, a process known as failover. Users accessing the service continue to access it, unaware that it is now being provided from a different node. Both Windows Server 2003, Enterprise Edition and Datacenter Edition support server cluster configurations of up to 8 nodes.
Network Load Balancing (NLB)—Available in all editions of Windows Server 2003, this service load-balances incoming Internet Protocol (IP) traffic across the nodes of a cluster. NLB enhances both the availability and scalability of Internet server-based programs such as Web servers, streaming media servers, and Terminal Services. By acting as the load-balancing infrastructure and providing control information to management applications built on top of Windows Management Instrumentation (WMI), NLB can seamlessly integrate into existing Web server farm infrastructures. NLB clusters can scale to 32 nodes.
Table 1-1 summarizes some of the differences between these two technologies. Additional differences and considerations are detailed in the following sections.
Table 1-1 Server Cluster vs. Network Load Balancing
Server Cluster: Used for databases, e-mail services, line of business (LOB) applications, and custom applications
NLB: Used for Web servers, firewalls, and Web services

Server Cluster: Included with Windows Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition
NLB: Included with all four versions of Windows Server 2003

Server Cluster: Provides high availability and server consolidation
NLB: Provides high availability and scalability

Server Cluster: Can be deployed on a single network or geographically distributed
NLB: Generally deployed on a single network, but can span multiple networks if properly configured

Server Cluster: Supports clusters of up to eight nodes
NLB: Supports clusters of up to 32 nodes

Server Cluster: Requires the use of shared or replicated storage
NLB: Doesn't require any special hardware or software; works “out of the box”

Server Cluster

Use Server Cluster to provide high availability for mission-critical applications through failover. It uses a “shared-nothing” architecture, which means that a resource can be active on only one node in the cluster at any given time. Because of this, it is well suited to applications that maintain some sort of fixed state (for example, a database). In addition to databases, services such as ERP, CRM, OLTP, file and print, e-mail, and custom applications are typically clustered using Server Cluster.
When you deploy Server Cluster, you first configure the two to eight servers that will act as nodes in the cluster. Then you configure the cluster resources required by the application you are clustering. These resources may include network names, IP addresses, applications, services, and disk drives. Finally, you bring the cluster online so that it can begin processing client requests.
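Resource configuration is normally done in Cluster Administrator (CluAdmin.exe), but the same steps can be scripted with cluster.exe. The sketch below creates a group, adds an IP Address resource, and brings the group online; the group name, network name, and address are hypothetical placeholders.

    REM Create a group to hold the application's resources
    cluster group "SQL Group" /create

    REM Create an IP Address resource inside that group
    cluster resource "SQL IP" /create /group:"SQL Group" /type:"IP Address"

    REM Set its private properties (placeholder values for your public network)
    cluster resource "SQL IP" /priv Address=192.168.1.50 SubnetMask=255.255.255.0 Network="Public"

    REM Bring the whole group online so it can start serving clients
    cluster group "SQL Group" /online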
Most clustered applications and their associated resources are assigned to one cluster node at a time. If Server Cluster detects the failure of the primary node for a clustered application, or if that node is taken offline for maintenance, the clustered application is started on a backup cluster node. Client requests are immediately redirected to the backup cluster node to minimize the impact of the failure.
NOTE Though most clustered services run on only one node at a time, a cluster can run many services simultaneously to optimize hardware utilization. Some clustered applications may run on multiple Server Cluster nodes simultaneously, including Microsoft SQL Server.
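A hedged way to exercise the failover behavior described above is to move a group between nodes by hand with cluster.exe; the group and node names below are placeholders.

    REM Show which node currently owns each group
    cluster group /status

    REM Move the group to another node, simulating a planned failover
    cluster group "SQL Group" /moveto:NODE2

    REM Confirm the group came back online on NODE2
    cluster group "SQL Group" /status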

Network Load Balancing (NLB)

Use NLB to provide high availability for applications that scale out horizontally, such as Web servers, proxy servers, and other services that need client requests distributed across nodes in a cluster. It uses a load balancing architecture, which means that a resource can be active on all nodes in the cluster at any given time. Because of this, it is well suited to applications that do not maintain a fixed state (for example, a Web server).
NLB clusters don't use a quorum, and so they don't impose storage or network requirements on the cluster nodes. If a node in the cluster fails, NLB automatically redirects incoming requests to the remaining nodes. If you take a node in the cluster offline for maintenance, you can use NLB to allow existing client sessions to finish before taking the node offline. This eliminates any end-user impact during planned downtime. NLB is also capable of weighting requests, which allows you to mix high-powered servers with legacy servers and ensure all hardware is efficiently utilized.
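On Windows Server 2003 the NLB control commands are exposed through nlb.exe (wlbs.exe is kept for backward compatibility). A minimal sketch of the drain-before-maintenance workflow described above, run on the node being serviced:

    REM Show the cluster's convergence state and member hosts
    nlb query

    REM Stop accepting new connections on this host, but let existing sessions finish
    nlb drainstop

    REM ...perform maintenance, then rejoin the host to the cluster
    nlb start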
Most often, NLB is used to build redundancy and scalability for firewalls, proxy servers, or Web servers, as illustrated in Figure 1-1. Other applications commonly clustered with NLB include virtual private network (VPN) endpoints, streaming media servers, and Terminal Services.
For a detailed discussion of the key features of this technology, as well as its internal architecture and performance characteristics, see http://www.microsoft.com/windows2000/docs/NLBtech2.doc.
Figure 1-1 Network Load Balancing example