Novell® Business Continuity Clustering 1.1 SP2
Administration Guide for NetWare® 6.5 SP8

AUTHORIZED DOCUMENTATION

August 14, 2009

www.novell.com
Legal Notices
Novell, Inc., makes no representations or warranties with respect to the contents or use of this documentation, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, Novell, Inc., reserves the right to revise this publication and to make changes to its content, at any time, without obligation to notify any person or entity of such revisions or changes.
Further, Novell, Inc., makes no representations or warranties with respect to any software, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. Further, Novell, Inc., reserves the right to make changes to any and all parts of Novell software, at any time, without any obligation to notify any person or entity of such changes.
Any products or technical information provided under this Agreement may be subject to U.S. export controls and the trade laws of other countries. You agree to comply with all export control regulations and to obtain any required licenses or classification to export, re-export, or import deliverables. You agree not to export or re-export to entities on the current U.S. export exclusion lists or to any embargoed or terrorist countries as specified in the U.S. export laws. You agree to not use deliverables for prohibited nuclear, missile, or chemical biological weaponry end uses. See the Novell International Trade Services Web page (http://www.novell.com/info/exports/) for more information on exporting Novell software. Novell assumes no responsibility for your failure to obtain any necessary export approvals.
Copyright © 2007–2009 Novell, Inc. All rights reserved. No part of this publication may be reproduced, photocopied, stored on a retrieval system, or transmitted without the express written consent of the publisher.
Novell, Inc., has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without limitation, these intellectual property rights may include one or more of the U.S. patents listed on the Novell Legal Patents Web page (http://www.novell.com/company/legal/patents/) and one or more additional patents or pending patent applications in the U.S. and in other countries.
Novell, Inc. 404 Wyman Street, Suite 500 Waltham, MA 02451 U.S.A. www.novell.com
Online Documentation: To access the latest online documentation for this and other Novell products, see the Novell Documentation Web page (http://www.novell.com/documentation).
Novell Trademarks
For Novell Trademarks, see the Novell Trademark and Service Mark list (http://www.novell.com/company/legal/trademarks/tmlist.html).
Third-Party Materials
All third-party trademarks are the property of their respective owners.
Contents
About This Guide 11
1 Overview 13
1.1 Disaster Recovery Implications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.2 Disaster Recovery Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.2.1 LAN-Based versus Internet-Based Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.2.2 Host-Based versus Storage-Based Data Mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.2.3 Stretch Clusters vs. Cluster of Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.3 Novell Business Continuity Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.4 BCC Deployment Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.4.1 Two-Site Business Continuity Cluster Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
1.4.2 Multiple-Site Business Continuity Cluster Solution . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.4.3 Low-Cost Business Continuity Cluster Solution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.5 Key Concepts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.5.1 Business Continuity Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.5.2 Cluster Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.5.3 Landing Zone. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.5.4 BCC Drivers for Identity Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2 What’s New for BCC 1.1 for NetWare 27
2.1 What’s New for BCC 1.1 SP2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.2 What’s New for BCC 1.1 SP1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.3 What’s New for BCC 1.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3 Planning a Business Continuity Cluster 29
3.1 Determining Design Criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.2 Best Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.3 LAN Connectivity Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.3.1 VLAN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.3.2 NIC Teaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.3.3 IP Addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.3.4 Name Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.3.5 IP Addresses for BCC-Enabled Cluster Resources. . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.4 SAN Connectivity Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.5 Storage Design Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.6 eDirectory Design Guidelines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.6.1 Object Location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.6.2 Cluster Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.6.3 Partitioning and Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.6.4 Objects Created by the BCC Drivers for Identity Manager . . . . . . . . . . . . . . . . . . . . 33
3.6.5 Landing Zone. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.6.6 Naming Conventions for BCC-Enabled Resources . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.7 Cluster Design Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4 Installing Business Continuity Clustering 37
4.1 Requirements for BCC 1.1 SP2 for NetWare. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.1.1 Business Continuity Clustering Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.1.2 Business Continuity Cluster Component Locations. . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.1.3 NetWare 6.5 SP8 (OES 2 SP1 NetWare) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.1.4 Novell Cluster Services 1.8.5 for NetWare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.1.5 Novell eDirectory 8.8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.1.6 SLP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.1.7 Identity Manager 3.5.1 Bundle Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.1.8 Novell iManager 2.7.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.1.9 Storage-Related Plug-Ins for iManager 2.7.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.1.10 Windows Workstation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.1.11 OpenWBEM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.1.12 Shared Disk Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.1.13 Mirroring Shared Disk Systems Between Peer Clusters . . . . . . . . . . . . . . . . . . . . . . 45
4.1.14 LUN Masking for Shared Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.1.15 Link Speeds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.1.16 Ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.1.17 Web Browser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.1.18 BASH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.1.19 LIBC. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.1.20 autoexec.ncf File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.2 Downloading the Business Continuity Clustering Software . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.3 Configuring a BCC Administrator User. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.3.1 Creating the BCC Administrator User . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.3.2 Assigning Trustee Rights for the BCC Administrator User to the Cluster Objects. . . 48
4.3.3 Assigning Trustee Rights for the BCC Administrator User to the _ADMIN Volume. . 48
4.3.4 Assigning Trustee Rights for the BCC Administrator User to the sys:\tmp Directory. 49
4.4 Installing and Configuring the Novell Business Continuity Clustering Software. . . . . . . . . . . . 50
4.4.1 Installing the BCC Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.4.2 Installing the Identity Manager Templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.5 What’s Next . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5 Upgrading Business Continuity Clustering for NetWare 55
5.1 Guidelines for Upgrading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.1.1 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.1.2 Performing a Rolling Cluster Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.2 Disabling BCC 1.0, Upgrading Servers to NetWare 6.5 SP8, then Enabling BCC 1.1 SP2. . . 56
5.3 Upgrading Clusters from BCC 1.0 to BCC 1.1 SP2 for NetWare. . . . . . . . . . . . . . . . . . . . . . . 58
5.3.1 Upgrading the BCC Cluster from 1.0 to 1.1 SP1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
5.3.2 Upgrading the BCC Cluster from 1.1 SP1 to 1.1 SP2 . . . . . . . . . . . . . . . . . . . . . . . . 60
5.4 Upgrading Clusters from BCC 1.1 SP1 to SP2 for NetWare . . . . . . . . . . . . . . . . . . . . . . . . . . 63
5.4.1 Upgrading NetWare and BCC on the Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
5.4.2 Authorizing the BCC Administrator User . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.4.3 Upgrading Identity Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5.4.4 Deleting and Re-Creating the BCC Driver Sets and Drivers . . . . . . . . . . . . . . . . . . . 65
5.4.5 Verifying the BCC Upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
6 Configuring Business Continuity Clustering Software 67
6.1 Configuring Identity Manager Drivers for the Business Continuity Cluster. . . . . . . . . . . . . . . . 67
6.1.1 Configuring the Identity Manager Drivers and Templates . . . . . . . . . . . . . . . . . . . . . 68
6.1.2 Creating SSL Certificates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
6.1.3 Synchronizing Identity Manager Drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
6.1.4 Preventing Identity Manager Synchronization Loops . . . . . . . . . . . . . . . . . . . . . . . . 71
6.2 Configuring Clusters for Business Continuity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
6.2.1 Enabling Clusters for Business Continuity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
6.2.2 Adding Cluster Peer Credentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
6.2.3 Adding Search-and-Replace Values to the Resource Replacement Script . . . . . . . . 74
6.2.4 Adding SAN Management Configuration Information . . . . . . . . . . . . . . . . . . . . . . . . 75
6.2.5 Verifying BCC Administrator User Trustee Rights and Credentials . . . . . . . . . . . . . . 78
6.3 BCC-Enabling Cluster Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
6.3.1 Enabling a Cluster Resource for Business Continuity . . . . . . . . . . . . . . . . . . . . . . . . 78
6.3.2 Adding Resource Script Search-and-Replace Values . . . . . . . . . . . . . . . . . . . . . . . . 79
6.3.3 Selecting Peer Clusters for the Resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
6.3.4 Adding SAN Array Mapping Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
7 Managing a Business Continuity Cluster 83
7.1 Migrating a Cluster Resource to a Peer Cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
7.1.1 Understanding BCC Resource Migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
7.1.2 Migrating Cluster Resources between Clusters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
7.2 Changing Cluster Peer Credentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
7.3 Viewing the Current Status of a Business Continuity Cluster. . . . . . . . . . . . . . . . . . . . . . . . . . 85
7.3.1 Using iManager to View the Cluster Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
7.3.2 Using Console Commands to View the Cluster Status . . . . . . . . . . . . . . . . . . . . . . . 86
7.4 Generating a Cluster Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
7.5 Disabling Business Continuity Cluster Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
7.6 Resolving Business Continuity Cluster Failures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
7.6.1 SAN-Based Mirroring Failure Types and Responses . . . . . . . . . . . . . . . . . . . . . . . . 88
7.6.2 Host-Based Mirroring Failure Types and Responses . . . . . . . . . . . . . . . . . . . . . . . . 89
8 Virtual IP Addresses 93
8.1 Virtual IP Address Definitions and Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
8.1.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
8.1.2 Characteristics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
8.2 Virtual IP Address Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
8.2.1 High Availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
8.2.2 Unlimited Mobility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
8.2.3 Support for Host Mask . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
8.2.4 Source Address Selection for Outbound Connections . . . . . . . . . . . . . . . . . . . . . . . 97
8.3 Reducing the Consumption of Additional IP Addresses. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
8.4 Configuring Virtual IP Addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
8.4.1 Displaying Bound Virtual IP Addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
9 Troubleshooting Business Continuity Clustering 1.1 101
9.1 Cluster Connection States . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
9.2 Driver Ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
9.3 Excluded Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
9.4 Security Equivalent User . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
9.5 Certificates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
9.6 Clusters Cannot Communicate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
9.7 BCC Startup Flags . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
9.8 Problems with Installing BCC on NetWare. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
9.9 Identity Manager Drivers for Cluster Synchronization Do Not Start . . . . . . . . . . . . . . . . . . . . 106
9.10 Identity Manager Drivers Do Not Synchronize Objects from One Cluster to Another . . . . . . 107
9.11 Tracing Identity Manager Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
9.12 Peer Cluster Communication Not Working. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
9.13 Administration of Peer Clusters Not Functional . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
9.14 A Resource Does Not Migrate to a Peer Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
9.15 A Resource Cannot Be Brought Online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
9.16 Dumping Syslog on NetWare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
9.17 Slow Failovers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
9.18 Resource Script Search-and-Replace Functions Do Not Work . . . . . . . . . . . . . . . . . . . . . . . 110
9.19 Virtual NCP Server IP Addresses Won’t Change. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
9.20 The IP Address, Virtual Server DN, or Pool Name Does Not Appear on the iManager Cluster Configuration Page . . . . . . . . . 111
9.21 Blank Error String iManager Error Appears While Bringing a Resource Online. . . . . . . . . . . 111
9.22 Mapping Drives in Login Scripts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
9.23 Mapping Drives to Home Directories by Using the %HOME_DIRECTORY Variable . . . . . . 112
9.24 BCC Error Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
10 Security Considerations 115
10.1 Security Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
10.2 Security Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
10.2.1 BCC Configuration Settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
10.2.2 Changing the NCS: BCC Settings Attribute in the BCC XML Configuration. . . . . . 117
10.2.3 Disabling SSL for Inter-Cluster Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
10.3 General Security Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
10.4 Security Information for Dependent Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
A Console Commands for BCC 123
B Implementing a Multiple-Tree BCC 127
B.1 Planning a Multiple-Tree Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
B.1.1 Cluster Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
B.1.2 User Synchronization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
B.1.3 SSL Certificates for Drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
B.2 Using Identity Manager to Copy User Objects to Another eDirectory Tree . . . . . . . . . . . . . . 128
B.3 Configuring User Object Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
B.4 Creating SSL Certificates. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
B.5 Synchronizing the BCC-Specific Identity Manager Drivers . . . . . . . . . . . . . . . . . . . . . . . . . . 130
B.6 Preventing Identity Manager Synchronization Loops. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
B.7 Migrating Resources to Another Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
C Setting Up Auto-Failover 133
C.1 Enabling Auto-Failover. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
C.2 Creating an Auto-Failover Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
C.3 Refining the Auto-Failover Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
C.4 Adding or Editing Monitor Configurations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
D Configuring Host-Based File System Mirroring for NSS Pools 137
D.1 Creating and Mirroring NSS Partitions on Shared Storage . . . . . . . . . . . . . . . . . . . . . . . . . . 138
D.2 Creating NSS Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
D.3 Novell Cluster Services Configuration and Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
D.4 Checking NSS Volume Mirror Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
E Documentation Updates 141
E.1 August 14, 2009 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
E.1.1 Console Commands for BCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
E.1.2 Installing Business Continuity Clustering. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
E.1.3 Upgrading Business Continuity Clustering for NetWare . . . . . . . . . . . . . . . . . . . . . 142
E.2 April 28, 2009 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
E.2.1 Converting BCC Clusters from NetWare to Linux . . . . . . . . . . . . . . . . . . . . . . . . . . 142
About This Guide
This guide describes how to install, configure, and manage Novell® Business Continuity Clustering 1.1 Support Pack 2 (SP2) for NetWare® 6.5 SP8 (same as Novell Open Enterprise Server 2 SP1 for NetWare) in combination with Novell Cluster Services™ 1.8.5 for NetWare clusters. The guide is organized as follows:
Chapter 1, “Overview,” on page 13
Chapter 2, “What’s New for BCC 1.1 for NetWare,” on page 27
Chapter 3, “Planning a Business Continuity Cluster,” on page 29
Chapter 4, “Installing Business Continuity Clustering,” on page 37
Chapter 5, “Upgrading Business Continuity Clustering for NetWare,” on page 55
Chapter 6, “Configuring Business Continuity Clustering Software,” on page 67
Chapter 7, “Managing a Business Continuity Cluster,” on page 83
Chapter 8, “Virtual IP Addresses,” on page 93
Chapter 9, “Troubleshooting Business Continuity Clustering 1.1,” on page 101
Chapter 10, “Security Considerations,” on page 115
Appendix A, “Console Commands for BCC,” on page 123
Appendix B, “Implementing a Multiple-Tree BCC,” on page 127
Appendix C, “Setting Up Auto-Failover,” on page 133
Appendix D, “Configuring Host-Based File System Mirroring for NSS Pools,” on page 137
Audience
This guide is intended for anyone involved in installing, configuring, and managing Novell Cluster Services™ for NetWare in combination with Novell Business Continuity Clustering for NetWare.
Feedback
We want to hear your comments and suggestions about this manual and the other documentation included with this product. Please use the User Comments feature at the bottom of each page of the online documentation, or go to www.novell.com/documentation/feedback.html (http://www.novell.com/documentation/feedback.html) and enter your comments there.
Documentation Updates
The latest version of this Novell Business Continuity Clustering 1.1 SP2 Administration Guide for NetWare 6.5 SP8 (same as OES 2 SP1 NetWare) is available on the Business Continuity Clustering Documentation Web site (http://www.novell.com/documentation/bcc/index.html) under BCC for OES 2 SP1.
Additional Documentation
For information about using Novell Business Continuity Clustering 1.2 for Linux, see the BCC 1.2: Administration Guide for Linux.
For information about Novell Cluster Services for NetWare, see the OES 2 SP2: Novell Cluster Services 1.8.5 for NetWare Administration Guide.
For the latest information about Novell Identity Manager 3.5.1, see the Identity Management Documentation Web site (http://www.novell.com/documentation/idm35/index.html).
For the latest information about OES 2 SP1 for Linux and NetWare, see the OES 2 SP1 Documentation Web site (http://www.novell.com/documentation/oes2/index.html).
Documentation Conventions
In Novell documentation, a greater-than symbol (>) is used to separate actions within a step and items in a cross-reference path.
A trademark symbol (®, ™, etc.) denotes a Novell trademark. An asterisk (*) denotes a third-party trademark.
1 Overview
As corporations become more international, fueled in part by the reach of the Internet, the requirement for service availability has increased. Novell offers corporations the ability to maintain mission-critical (24x7x365) data and application services to their users while still being able to perform maintenance and upgrades on their systems.
In the past few years, natural disasters (ice storms, earthquakes, hurricanes, tornadoes, and fires) have caused unplanned outages of entire data centers. In addition, U.S. federal agencies have realized the disastrous effects that terrorist attacks could have on the U.S. economy when corporations lose their data and the ability to perform critical business practices. This has resulted in initial recommendations for corporations to build mirrored or replicated data centers that are geographically separated by 300 kilometers (km) or more. (The minimum acceptable distance is 200 km.)
Many companies have built and deployed geographically mirrored data centers. The problem is that setting up and maintaining the multiple centers is a manual process that takes a great deal of planning and synchronizing. Even configuration changes must be carefully planned and replicated. One mistake and the redundant site is no longer able to effectively take over in the event of a disaster.
This section identifies the implications for disaster recovery, provides an overview of some of the network implementations today that attempt to address disaster recovery, and describes how Novell® Business Continuity Clustering (BCC) can improve your disaster recovery solution by providing specialized software that automates cluster configuration, maintenance, and synchronization across two to four geographically separate sites.
Section 1.1, “Disaster Recovery Implications,” on page 13
Section 1.2, “Disaster Recovery Implementations,” on page 14
Section 1.3, “Novell Business Continuity Clusters,” on page 21
Section 1.4, “BCC Deployment Scenarios,” on page 22
Section 1.5, “Key Concepts,” on page 25
1.1 Disaster Recovery Implications
The implications of disaster recovery are directly tied to your data. Is your data mission critical? In many instances, critical systems and data drive the business. If these services stop, the business stops. When calculating the cost of downtime, some things to consider are:
File transfers and file storage
E-mail, calendaring, and collaboration
Web hosting
Critical databases
Productivity
Reputation
Continuous availability of critical business systems is no longer a luxury; it is a competitive business requirement. The Gartner Group estimates that 40% of enterprises that experience a disaster will go out of business in five years, and that only 15% of enterprises have a full-fledged business continuity plan that goes beyond core technology and infrastructure.
The cost to the business for each one hour of service outage includes the following:
Income loss measured as the income-generating ability of the service, data, or impacted group
Productivity loss measured as the hourly cost of impacted employees
Recovery cost measured as the hourly cost of IT personnel to get services back online
Future lost revenue because of customer and partner perception
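As a back-of-the-envelope illustration, these components can be summed into an hourly outage cost. The following sketch is not from this guide; the formula simply adds the four components above, and every figure in it is a hypothetical example.

# Rough model of the cost of one hour of service outage, summing the four
# components listed above. All figures below are hypothetical examples.

def hourly_outage_cost(income_per_hour, impacted_employees, hourly_wage,
                       it_staff, it_hourly_rate, estimated_future_loss):
    income_loss = income_per_hour                 # income-generating ability of the service
    productivity_loss = impacted_employees * hourly_wage
    recovery_cost = it_staff * it_hourly_rate     # IT personnel restoring the service
    return income_loss + productivity_loss + recovery_cost + estimated_future_loss

# Example: a service earning $20,000/hour, idling 500 employees at $40/hour,
# occupying 10 IT staff at $75/hour, with $5,000/hour in future lost revenue.
print(hourly_outage_cost(20000, 500, 40, 10, 75, 5000))   # -> 45750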
1.2 Disaster Recovery Implementations
Stretch clusters and cluster of clusters are two approaches for making shared resources available across geographically distributed sites so that a second site can be called into action after one site fails. Before you choose an approach, you must understand how the applications you use and the storage subsystems in your network deployment determine whether a stretch cluster or a cluster-of-clusters solution is possible for your environment.
Section 1.2.1, “LAN-Based versus Internet-Based Applications,” on page 14
Section 1.2.2, “Host-Based versus Storage-Based Data Mirroring,” on page 14
Section 1.2.3, “Stretch Clusters vs. Cluster of Clusters,” on page 15
1.2.1 LAN-Based versus Internet-Based Applications
Traditional LAN applications require a LAN infrastructure that must be replicated at each site, and might require relocation of employees to allow the business to continue. Internet-based applications allow employees to work from any place that offers an Internet connection, including homes and hotels. Moving applications and services to the Internet frees corporations from the restrictions of traditional LAN-based applications.
By using Novell exteNd™ Director portal services, iChain®, and ZENworks®, all services, applications, and data can be rendered through the Internet, allowing for loss of service at one site but still providing full access to the services and data by virtue of the ubiquity of the Internet. Data and services continue to be available from the other mirrored sites.
1.2.2 Host-Based versus Storage-Based Data Mirroring
For clustering implementations that are deployed in data centers in different geographic locations, the data must be replicated between the storage subsystems at each data center. Data-block replication can be done by host-based mirroring for synchronous replication over short distances up to 10 km. Typically, replication of data blocks between storage systems in the data centers is performed by SAN hardware that allows synchronous mirrors over a greater distance.
For stretch clusters, host-based mirroring is required to provide synchronous mirroring of the SBD (split-brain detector) partition between sites. This means that stretch-cluster solutions are limited to distances of 10 km.
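The distance limit for synchronous replication follows from propagation delay: a synchronous write is not acknowledged until the remote copy commits, so every write pays a full round trip between the sites. A minimal estimate, assuming roughly 5 microseconds per kilometer of signal propagation in optical fiber and ignoring switch and array overhead:

# Estimate the minimum latency a synchronous mirrored write adds.
# Assumes ~5 microseconds/km signal propagation in optical fiber; real
# deployments add switch, HBA, and array latency on top of this.

FIBER_DELAY_US_PER_KM = 5.0

def sync_write_penalty_us(distance_km):
    # The write is acknowledged only after the remote site commits it,
    # so the data makes a full round trip between the sites.
    return 2 * distance_km * FIBER_DELAY_US_PER_KM

print(sync_write_penalty_us(10))    # 100.0 microseconds at 10 km
print(sync_write_penalty_us(300))   # 3000.0 microseconds (3 ms) at 300 km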
Table 1-1 compares the benefits and limitations of host-based and storage-based mirroring.
Table 1-1 Comparison of Host-Based and Storage-Based Data Mirroring
Capability: Geographic distance between sites
  Host-Based Mirroring: Up to 10 km.
  Storage-Based Mirroring: Can be up to and over 300 km. The actual distance is limited only by the SAN hardware and media interconnects for your deployment.

Capability: Mirroring the SBD partition
  Host-Based Mirroring: An SBD can be mirrored between two sites.
  Storage-Based Mirroring: Yes, if mirroring is supported by the SAN hardware and media interconnects for your deployment.

Capability: Synchronous data-block replication of data between sites
  Host-Based Mirroring: Yes.
  Storage-Based Mirroring: Yes, requires a Fibre Channel SAN or iSCSI SAN.

Capability: Failover support
  Host-Based Mirroring: No additional configuration of the hardware is required.
  Storage-Based Mirroring: Requires additional configuration of the SAN hardware.

Capability: Failure of the site interconnect
  Host-Based Mirroring: LUNs can become primary at both locations (split brain problem).
  Storage-Based Mirroring: Clusters continue to function independently. Minimizes the chance of LUNs at both locations becoming primary (split brain problem).

Capability: SMI-S compliance
  Host-Based Mirroring: If the storage subsystems are not SMI-S compliant, the storage subsystems must be controllable by scripts running on the nodes of the cluster.
  Storage-Based Mirroring: If the storage subsystems are not SMI-S compliant, the storage subsystems must be controllable by scripts running on the nodes of the cluster.
1.2.3 Stretch Clusters vs. Cluster of Clusters
A stretch cluster and a cluster of clusters are two clustering implementations that you can use with Novell Cluster Services™ to achieve your desired level of disaster recovery. This section describes each deployment type, then compares the capabilities of each.
Novell Business Continuity Clustering automates some of the configuration and processes used in a cluster of clusters. For information, see Section 1.3, “Novell Business Continuity Clusters,” on
page 21.
“Stretch Clusters” on page 16
“Cluster of Clusters” on page 16
“Comparison of Stretch Clusters and Cluster of Clusters” on page 18
“Evaluating Disaster Recovery Implementations for Clusters” on page 20
Stretch Clusters
A stretch cluster consists of a single cluster where the nodes are located in two geographically separate data centers. All nodes in the cluster must be in the same Novell eDirectory™ tree, which requires the eDirectory replica ring to span data centers. The IP addresses for nodes and cluster resources in the cluster must share a common IP subnet.
At least one storage system must reside in each data center. The data is replicated between locations by using host-based mirroring or storage-based mirroring. For information about using mirroring solutions for data replication, see Section 1.2.2, “Host-Based versus Storage-Based Data Mirroring,” on page 14. Link latency can occur between nodes at different sites, so the heartbeat tolerance between nodes of the cluster must be increased to allow for the delay.
The split-brain detector (SBD) is mirrored between the sites. Failure of the site interconnect can result in LUNs becoming primary at both locations (split brain problem) if host-based mirroring is used.
In the stretch-cluster architecture shown in Figure 1-1, the data is mirrored between two data centers that are geographically separated. The server nodes in both data centers are part of one cluster, so that if a disaster occurs in one data center, the nodes in the other data center automatically take over.
Figure 1-1 Stretch Cluster
[Diagram: an eight-node cluster stretched between two sites. Servers 1-4 in Building A (Site 1) and Servers 5-8 in Building B (Site 2) each connect through an Ethernet switch and a Fibre Channel switch to a local Fibre Channel disk array; a WAN link carries the cluster heartbeat and SAN disk-block replication between the sites.]
Cluster of Clusters
A cluster of clusters consists of multiple clusters in which each cluster is located in a geographically separate data center. Each cluster can be in different Organizational Unit (OU) containers in the same eDirectory tree, or in different eDirectory trees. Each cluster can be in a different IP subnet.
A cluster of clusters provides the ability to fail over selected cluster resources or all cluster resources from one cluster to another cluster. For example, the cluster resources in one cluster can fail over to separate clusters by using a multiple-site fan-out failover approach. A given service can be provided by multiple clusters. Resource configurations are replicated to each peer cluster and synchronized manually. Failover between clusters requires manual management of the storage systems and the cluster.
Nodes in each cluster access only the storage systems co-located in the same data center. Typically, data is replicated by using storage-based mirroring. Each cluster has its own SBD partition. The SBD partition is not mirrored across the sites, which minimizes the chance for a split-brain problem occurring when using host-based mirroring. For information about using mirroring solutions for data replication, see Section 1.2.2, “Host-Based versus Storage-Based Data Mirroring,” on page 14.
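Keeping resource configurations synchronized across peer clusters means rewriting site-specific values, such as secondary IP addresses, whenever a load script is copied to another cluster. The sketch below illustrates the idea only; the load script lines and the replacement pair are hypothetical examples, and BCC automates this step through resource replacement scripts (see Section 6.2.3 and Section 6.3.2).

# Illustration only: copy a cluster resource load script to a peer cluster,
# rewriting the site-specific values. The script content and the
# replacement pair below are hypothetical examples.

load_script = """\
nss /poolactivate=POOL1
mount VOL1 VOLID=254
add secondary ipaddress 10.1.1.10
"""

# Values that differ between the primary cluster and the peer cluster.
replacements = {
    "10.1.1.10": "10.2.1.10",   # resource IP address used at the peer site
}

def localize(script, pairs):
    # Apply each search-and-replace pair to the script text.
    for old, new in pairs.items():
        script = script.replace(old, new)
    return script

print(localize(load_script, replacements))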
In the cluster-of-clusters architecture shown in Figure 1-2, the data is synchronized by the SAN hardware between two data centers that are geographically separated. If a disaster occurs in one data center, the cluster in the other data center takes over.
Figure 1-2 Cluster of Clusters
[Diagram: two independent clusters at geographically separate sites. Servers 1A-4A in Building A (Cluster Site 1) and Servers 1B-4B in Building B (Cluster Site 2) each connect through Ethernet and Fibre Channel switches to local Fibre Channel disk arrays; a WAN link carries eDirectory and IDM synchronization plus SAN disk-block replication between the sites.]
Comparison of Stretch Clusters and Cluster of Clusters
Table 1-2 compares the capabilities of a stretch cluster and a cluster of clusters.
Table 1-2 Comparison of Stretch Cluster and Cluster of Clusters
Capability: Number of clusters
  Stretch Cluster: One.
  Cluster of Clusters: Two or more.

Capability: Number of geographically separated data centers
  Stretch Cluster: Two.
  Cluster of Clusters: Two or more.

Capability: eDirectory trees
  Stretch Cluster: Single tree only; requires the replica ring to span data centers.
  Cluster of Clusters: One or multiple trees.

Capability: eDirectory Organizational Units (OUs)
  Stretch Cluster: Single OU container for all nodes. As a best practice, place the cluster container in an OU separate from the rest of the tree.
  Cluster of Clusters: Each cluster can be in a different OU. Each cluster is in a single OU container. As a best practice, place each cluster container in an OU separate from the rest of the tree.

Capability: IP subnet
  Stretch Cluster: IP addresses for nodes and cluster resources must be in a single IP subnet. Because the subnet spans multiple locations, you must ensure that your switches handle gratuitous ARP (Address Resolution Protocol).
  Cluster of Clusters: IP addresses in a given cluster are in a single IP subnet. Each cluster can use the same or different IP subnet. If you use the same subnet for all clusters in the cluster of clusters, you must ensure that your switches handle gratuitous ARP.

Capability: SBD partition
  Stretch Cluster: A single SBD is mirrored between two sites by using host-based mirroring, which limits the distance between data centers to 10 km.
  Cluster of Clusters: Each cluster has its own SBD. Each cluster can have an on-site mirror of its SBD for high availability. If the cluster of clusters uses host-based mirroring, the SBD is not mirrored between sites, which minimizes the chance of LUNs at both locations becoming primary.

Capability: Failure of the site interconnect if using host-based mirroring
  Stretch Cluster: LUNs might become primary at both locations (split brain problem).
  Cluster of Clusters: Clusters continue to function independently.

Capability: Storage subsystem
  Stretch Cluster: Each cluster accesses only the storage subsystem on its own site.
  Cluster of Clusters: Each cluster accesses only the storage subsystem on its own site.

Capability: Data-block replication between sites (for information about data replication solutions, see Section 1.2.2, “Host-Based versus Storage-Based Data Mirroring,” on page 14)
  Stretch Cluster: Yes; typically uses storage-based mirroring, but host-based mirroring is possible for distances up to 10 km.
  Cluster of Clusters: Yes; typically uses storage-based mirroring, but host-based mirroring is possible for distances up to 10 km.

Capability: Clustered services
  Stretch Cluster: A single service instance runs in the cluster.
  Cluster of Clusters: Each cluster can run an instance of the service.

Capability: Cluster resource failover
  Stretch Cluster: Automatic failover to preferred nodes at the other site.
  Cluster of Clusters: Manual failover to preferred nodes on one or multiple clusters (multiple-site fan-out failover). Failover requires additional configuration.

Capability: Cluster resource configurations
  Stretch Cluster: Configured for a single cluster.
  Cluster of Clusters: Configured for the primary cluster that hosts the resource, then the configuration is manually replicated to the peer clusters.

Capability: Cluster resource configuration synchronization
  Stretch Cluster: Controlled by the master node.
  Cluster of Clusters: Manual process that can be tedious and error-prone.

Capability: Failover of cluster resources between clusters
  Stretch Cluster: Not applicable.
  Cluster of Clusters: Manual management of the storage systems and the cluster.

Capability: Link latency between sites
  Stretch Cluster: Can cause false failovers. The cluster heartbeat tolerance between master and slave must be increased to as high as 30 seconds. Monitor cluster heartbeat statistics, then tune down as needed.
  Cluster of Clusters: Each cluster functions independently in its own geographical site.
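Gratuitous ARP matters in both designs because when a resource IP address comes online on a different node, or at a different site, the switches must learn the new IP-to-MAC binding immediately. The sketch below is not part of BCC or Novell Cluster Services; it uses the third-party scapy library, and the address, MAC, and interface are hypothetical, but it shows what such an announcement looks like on the wire:

# Illustration only: send a gratuitous ARP announcing that a cluster
# resource IP address is now reachable at a new MAC address, as cluster
# software does when it binds a secondary IP address. Requires scapy
# and root privileges; the values below are hypothetical.
from scapy.all import ARP, Ether, sendp

RESOURCE_IP = "10.1.1.10"           # hypothetical secondary IP address
NODE_MAC = "00:11:22:33:44:55"      # MAC of the node now hosting it

frame = Ether(src=NODE_MAC, dst="ff:ff:ff:ff:ff:ff") / ARP(
    op=2,                # ARP reply announcing the new binding
    hwsrc=NODE_MAC,
    psrc=RESOURCE_IP,    # sender IP equals target IP: gratuitous
    pdst=RESOURCE_IP,
)
sendp(frame, iface="eth0")          # hypothetical interface name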
Evaluating Disaster Recovery Implementations for Clusters
Table 1-3 illustrates why a cluster-of-clusters solution is less problematic to deploy than a stretch cluster solution. Manual configuration is not a problem when using Novell Business Continuity Clustering for your cluster of clusters.
Table 1-3 Advantages and Disadvantages of Stretch Clusters versus Cluster of Clusters
Stretch Cluster

Advantages:
  It automatically fails over when configured with host-based mirroring.
  It is easier to manage than separate clusters.
  Cluster resources can fail over to nodes in any site.

Disadvantages:
  The eDirectory partition must span the sites.
  Failure of the site interconnect can result in LUNs becoming primary at both locations (split brain problem) if host-based mirroring is used.
  An SBD partition must be mirrored between sites.
  It accommodates only two sites.
  All IP addresses must reside in the same subnet.

Other Considerations:
  Host-based mirroring is required to mirror the SBD partition between sites.
  Link variations can cause false failovers.
  You could consider partitioning the eDirectory tree to place the cluster container in a partition separate from the rest of the tree.
  The cluster heartbeat tolerance between master and slave must be increased to accommodate link latency between sites. You can set this as high as 30 seconds, monitor cluster heartbeat statistics, and then tune down as needed.
  Because all IP addresses in the cluster must be on the same subnet, you must ensure that your switches handle gratuitous ARP. Contact your switch vendor or consult your switch documentation for more information.

Cluster of Clusters

Advantages:
  eDirectory partitions don't need to span the cluster.
  Each cluster can be in different OUs in the same eDirectory tree, or in different eDirectory trees.
  IP addresses for each cluster can be on different IP subnets.
  Cluster resources can fail over to separate clusters (multiple-site fan-out failover support).
  Each cluster has its own SBD. Each cluster can have an on-site mirror of its SBD for high availability. If the cluster of clusters uses host-based mirroring, the SBD is not mirrored between sites, which minimizes the chance of LUNs at both locations becoming primary.

Disadvantages:
  Resource configurations must be manually synchronized.
  Storage-based mirroring requires additional configuration steps.

Other Considerations:
  Depending on the platform used, storage arrays must be controllable by scripts that run on NetWare® if the SANs are not SMI-S compliant.
1.3 Novell Business Continuity Clusters
A Novell Business Continuity Clustering cluster is an automated cluster of Novell Cluster Services clusters. It is similar to what is described in “Cluster of Clusters” on page 16, except that the cluster configuration, maintenance, and synchronization have been automated by adding specialized software.
BCC supports up to four peer clusters. The sites are geographically separated mirrored data centers, with a high availability cluster located at each site. Configuration is automatically synchronized between the sites. Data is replicated between sites. All cluster nodes and their cluster resources are monitored at each site. If one site goes down, business continues through the mirrored sites.
The business continuity cluster configuration information is stored in eDirectory. eDirectory schema extensions provide the additional attributes required to maintain the configuration and status information of BCC enabled cluster resources. This includes information about the peer clusters, the cluster resources and their states, and storage control commands.
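Because this configuration lives in eDirectory, it can be inspected with any LDAP client. The sketch below is an assumption-laden illustration, not a documented interface: the host, credentials, container, and cluster name are hypothetical, and since the LDAP mapping of BCC schema attributes such as NCS:BCC Settings (see Section 10.2.2) can vary, it simply requests all attributes.

# Sketch only: inspect a BCC-enabled Cluster object over LDAP using the
# Python ldap3 library. Host, credentials, container, and cluster name
# are hypothetical; all attributes are requested because the LDAP names
# of the BCC schema extensions depend on the eDirectory mapping.
from ldap3 import Server, Connection, SUBTREE, ALL_ATTRIBUTES

server = Server("ldaps://edir.example.com")            # hypothetical host
conn = Connection(server, user="cn=admin,o=example",
                  password="secret", auto_bind=True)

conn.search(search_base="ou=clusters,o=example",       # hypothetical context
            search_filter="(cn=CLUSTER1)",             # hypothetical cluster
            search_scope=SUBTREE,
            attributes=ALL_ATTRIBUTES)
for entry in conn.entries:
    print(entry.entry_dn)
    print(entry)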
BCC is an integrated set of tools to automate the setup and maintenance of a business continuity infrastructure. Unlike competitive solutions that attempt to build stretch clusters, BCC uses a cluster of clusters. Each site has its own independent cluster, and the clusters at the geographically separate sites are treated as “nodes” in a larger cluster, allowing a whole site to do fan-out failover to other multiple sites. Although this can currently be done manually with a cluster of clusters, BCC automates the system by using eDirectory and policy-based management of the resources and storage systems.
Novell Business Continuity Clustering software provides the following advantages over typical cluster-of-clusters solutions:
Supports up to four clusters with up to 32 nodes each.
Integrates with shared storage hardware devices to automate the failover process through standards-based mechanisms such as SMI-S.
Uses Novell Identity Manager technology to automatically synchronize and transfer cluster-related eDirectory objects from one cluster to another, and between trees.
Provides the capability to fail over as few as one cluster resource, or as many as all cluster resources.
Includes intelligent failover that allows you to perform site failover testing as a standard practice.
Provides scripting capability that allows enhanced storage management control and customization of migration and failover between clusters.
Provides simplified business continuity cluster configuration and management by using the browser-based Novell iManager management tool. iManager is used for the configuration and monitoring of the overall system and for the individual resources.
1.4 BCC Deployment Scenarios
There are several Business Continuity Clustering deployment scenarios that can be used to achieve the desired level of disaster recovery. Three possible scenarios include:
Section 1.4.1, “Two-Site Business Continuity Cluster Solution,” on page 22
Section 1.4.2, “Multiple-Site Business Continuity Cluster Solution,” on page 23
Section 1.4.3, “Low-Cost Business Continuity Cluster Solution,” on page 24
1.4.1 Two-Site Business Continuity Cluster Solution
The two-site business continuity cluster deploys two independent clusters at geographically separate sites. Each cluster can support up to 32 nodes. The clusters can be designed in one of two ways:
Active Site/Active Site: Two active sites where each cluster supports different applications and services. Either site can take over for the other site at any time.
Active Site/Passive Site: A primary site in which all services are normally active, and a secondary site which is effectively idle. The data is mirrored to the secondary site, and the applications and services are ready to load if needed.
The active/active deployment option is typically used in a company that has more than one large site of operations. The active/passive deployment option is typically used when the purpose of the secondary site is primarily testing by the IT department. Replication of data blocks is typically done by SAN hardware, but it can be done by host-based mirroring for synchronous replication over short distances up to 10 km.
Figure 1-3 shows a two-site business continuity cluster that uses storage-based data replication between the sites. BCC uses eDirectory and Identity Manager to synchronize cluster information between the two clusters.
Figure 1-3 Two-Site Business Continuity Cluster
[Diagram: two independent clusters at geographically separate sites. Servers 1A-4A in Building A (Cluster Site 1) and Servers 1B-4B in Building B (Cluster Site 2) each connect through Ethernet and Fibre Channel switches to local Fibre Channel disk arrays; a WAN link carries eDirectory and IDM synchronization plus SAN disk-block replication between the sites.]
1.4.2 Multiple-Site Business Continuity Cluster Solution
The multiple-site business continuity cluster is a large solution capable of supporting up to four sites. Each cluster can support up to 32 nodes. Services and applications can do fan-out failover between sites. Replication of data blocks is typically done by SAN hardware, but it can be done by host-based mirroring for synchronous replication over short distances up to 10 km.
Figure 1-4 depicts a four-site business continuity cluster that uses storage-based data replication between the sites. BCC uses eDirectory and Identity Manager to synchronize cluster information between the clusters.
Figure 1-4 Multiple-Site Business Continuity Cluster
[Diagram: four independent clusters in geographically separate sites. The cluster in Building A (Cluster Site 1) connects over a WAN, which carries eDirectory and IDM synchronization plus SAN disk-block replication, to the clusters in Buildings B, C, and D (Cluster Sites 2, 3, and 4); each site has its own Ethernet switch, Fibre Channel switch, and Fibre Channel disk arrays.]
Using additional products, all services, applications, and data can be rendered through the Internet, allowing for loss of service at one site but still providing full access to the services and data by virtue of the ubiquity of the Internet. Data and services continue to be available from the other mirrored sites. Moving applications and services to the Internet frees corporations from the restrictions of traditional LAN-based applications. Traditional LAN applications require a LAN infrastructure that must be replicated at each site, and might require relocation of employees to allow the business to continue. Internet-based applications allow employees to work from any place that offers an Internet connection, including homes and hotels.
1.4.3 Low-Cost Business Continuity Cluster Solution
The low-cost business continuity cluster solution is similar to the previous two solutions, but replaces Fibre Channel storage arrays with iSCSI storage arrays. Data block mirroring can be accomplished either with iSCSI-based block replication or host-based mirroring. In either case, snapshot technology can allow for asynchronous replication over long distances. However, the low-cost solution does not necessarily have the performance associated with higher-end Fibre Channel storage arrays.
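A rough sketch of the snapshot-based asynchronous replication cycle the low-cost solution relies on: take a point-in-time snapshot, ship it to the remote site, and repeat. Every function here is a hypothetical stand-in for vendor-specific array operations; the point is only that the remote copy lags the primary by at most one replication interval.

# Conceptual sketch of asynchronous snapshot replication. The functions
# are hypothetical placeholders for vendor-specific array operations.
import time

def take_snapshot(lun):
    # Placeholder: a real array creates a crash-consistent point-in-time copy.
    return {"lun": lun, "taken_at": time.time()}

def ship_to_remote(snap):
    # Placeholder: a real implementation transfers only the changed blocks.
    print(f"shipping snapshot of {snap['lun']} taken at {snap['taken_at']:.0f}")

def replicate(lun, cycles, interval_sec=300):
    # The remote copy lags the primary by at most one interval; that lag is
    # the trade-off asynchronous replication makes to tolerate long
    # distances and modest links.
    for _ in range(cycles):
        ship_to_remote(take_snapshot(lun))
        time.sleep(interval_sec)

replicate("LUN1", cycles=1, interval_sec=0)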
1.5 Key Concepts
The key concepts in this section can help you understand how Business Continuity Clustering manages your business continuity cluster.
Section 1.5.1, “Business Continuity Clusters,” on page 25
Section 1.5.2, “Cluster Resources,” on page 25
Section 1.5.3, “Landing Zone,” on page 25
Section 1.5.4, “BCC Drivers for Identity Manager,” on page 25
1.5.1 Business Continuity Clusters
A cluster of two to four Novell Cluster Services clusters that are managed together by Business Continuity Clustering software. All nodes in every peer cluster are running the same operating system.
1.5.2 Cluster Resources
novdocx (en) 13 May 2009
A cluster resource is a cluster-enabled shared disk that is configured for Novell Cluster Services. It is also BCC-enabled so that it can be migrated and failed over between nodes in different peer clusters.
1.5.3 Landing Zone
The landing zone is an eDirectory context in which the objects for the Virtual Server, the Cluster Pool, and the Cluster Volume are placed when they are created for the peer clusters. You specify the landing zone context when you configure the Identity Manager drivers for the business continuity cluster.
1.5.4 BCC Drivers for Identity Manager
Business Continuity Clustering requires a special Identity Manager driver that uses an Identity Vault to synchronize the cluster resource configuration information between the peer clusters. If the peer clusters are in different eDirectory trees, an additional BCC driver helps synchronize user information between the trees. For information, see Section 6.1.1, “Configuring the Identity Manager Drivers and Templates,” on page 68.
2 What's New for BCC 1.1 for NetWare

This section describes the enhancements made to Novell® Business Continuity Clustering 1.1 for NetWare®.

Section 2.1, “What's New for BCC 1.1 SP2,” on page 27
Section 2.2, “What's New for BCC 1.1 SP1,” on page 27
Section 2.3, “What's New for BCC 1.1,” on page 27

2.1 What's New for BCC 1.1 SP2

Business Continuity Clustering 1.1 SP2 for NetWare 6.5 SP8 provides the following enhancements and changes:

Support for NetWare 6.5 SP8 (same as Novell Open Enterprise Server (OES) 2 SP1 NetWare)
Support for Novell Cluster Services™ 1.8.5 for NetWare
Support for Identity Manager 3.5.1
Support for 64-bit architectures
Support for Novell eDirectory™ 8.8
Support for Novell iManager 2.7.2
2.2 What’s New for BCC 1.1 SP1
Business Continuity Clustering 1.1 SP1 for NetWare 6.5 SP6 provides the following enhancements and changes:
Support for NetWare 6.5 SP6 (same as OES 1 SP2 NetWare update)
Support for Identity Manager 3.x
2.3 What’s New for BCC 1.1
Business Continuity Clustering 1.1 for NetWare 6.5 SP5 provides the following enhancements and changes as compared to BCC 1.0 for NetWare:
Support for NetWare 6.5 SP5 (same as OES 1 SP2 NetWare)
Support for Identity Manager 2.x
Changed inter-cluster communication from the NCP™ (NetWare Control Protocol) port 524 to the CIM ports 5988 and 5989
Storage Management Initiative (SMI-S) CIM support
Standards-based management of the SAN for automatic LUN failover
Support for most SANs (such as Xiotech*, EMC*, HP*, IBM*, and so on)
Automatic failover
  No need for administrator intervention
  Based on a configurable minimum number of nodes or a percentage of nodes
  Extensible monitoring framework
  Disabled by default
3 Planning a Business Continuity Cluster
Use the guidelines in this section to design your Novell® Business Continuity Clustering solution. The success of your business continuity cluster depends on the stability and robustness of the individual peer clusters. BCC cannot overcome weaknesses in a poorly designed cluster environment.
Section 3.1, “Determining Design Criteria,” on page 29
Section 3.2, “Best Practices,” on page 29
Section 3.3, “LAN Connectivity Guidelines,” on page 30
Section 3.4, “SAN Connectivity Guidelines,” on page 31
Section 3.5, “Storage Design Guidelines,” on page 32
Section 3.6, “eDirectory Design Guidelines,” on page 32
Section 3.7, “Cluster Design Guidelines,” on page 34
3.1 Determining Design Criteria
The design goal for your business continuity cluster is to ensure that your critical data and services can continue in the event of a disaster. Design the infrastructure based on your business needs.
Determine your design criteria by asking and answering the following questions:
What are the key services that drive your business?
Where are your major business sites, and how many are there?
What services are essential for business continuance?
What is the cost of down time for the essential services?
Based on their mission-critical nature and cost of down time, what services are the highest
priority for business continuance?
Where are the highest-priority services currently located?
Where should the highest-priority services be located for business continuance?
What data must be replicated to support the highest-priority services?
How much data is involved, and how important is it?
3.2 Best Practices
The following practices help you avoid potential problems with your BCC:
IP address changes should always be made on the Protocols page of the iManager cluster plug-in, not in load and unload scripts. This is the only way to change the IP address on the virtual NCP™ server object in eDirectory™.
Ensure that eDirectory and your clusters are stable before implementing BCC.
Engage Novell Consulting.
Engage a consulting group from your SAN vendor.
The cluster node that hosts the Identity Manager driver should have a full read/write eDirectory™ replica with the following containers in the replica:
  Driver set container
  Cluster container
  (Parent) container where the servers reside
  Landing zone container
  User object container
Ensure that you have full read/write replicas of the entire tree at each data center.
3.3 LAN Connectivity Guidelines
The primary objective of LAN connectivity in a cluster is to provide uninterrupted heartbeat communications. Use the guidelines in this section to design the LAN connectivity for each of the peer clusters in the business continuity cluster:
Section 3.3.1, “VLAN,” on page 30
Section 3.3.2, “NIC Teaming,” on page 30
Section 3.3.3, “IP Addresses,” on page 31
Section 3.3.4, “Name Resolution,” on page 31
Section 3.3.5, “IP Addresses for BCC-Enabled Cluster Resources,” on page 31
3.3.1 VLAN
Use a dedicated VLAN (virtual local area network) for each cluster.
The cluster protocol is non-routable, so you cannot direct communications to specific IP addresses. Using a VLAN for the cluster nodes provides a protected environment for the heartbeat process and ensures that heartbeat packets are exchanged only between the nodes of a given cluster.
When you use a VLAN, no foreign host can interfere with the heartbeat. For example, a VLAN helps avoid broadcast storms that slow traffic and can result in false split-brain abends.
3.3.2 NIC Teaming
Use NIC teaming for adapters for LAN fault tolerance. NIC teaming combines Ethernet interfaces on a host computer for redundancy or increased throughput. It helps increase the availability of an individual cluster node, which helps avoid or reduce the occurrences of failover caused by slow LAN traffic.
When configuring Spanning Tree Protocol (STP), ensure that Portfast is enabled, or consider Rapid Spanning Tree. The default settings for STP inhibit the heartbeat for over 30 seconds whenever there is a change in link status. Test your STP configuration with Novell Cluster Services™ running to make sure that a node is not cast out of the cluster when a broken link is restored.
Consider connecting cluster nodes to access switches for fault tolerance.
3.3.3 IP Addresses
Use a dedicated IP address range for each cluster. You need a unique static IP address for each of the following components of each peer cluster:
Cluster (master IP address)
Cluster nodes
Cluster resources that are not BCC-enabled (file system resources and service resources such as
DHCP, DNS, SLP, FTP, and so on)
Cluster resources that are BCC-enabled (file system resources and service resources such as
DHCP, DNS, SLP, FTP, and so on)
Plan your IP address assignment so that it is consistently applied across all peer clusters. Provide an IP address range with sufficient addresses for each cluster.
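For example, under a hypothetical allocation scheme (the addresses and subnets are illustrative only, not from this guide), each peer cluster could be given its own /24 range with an identical internal layout:

  Cluster A (Building A): 10.1.1.0/24
    10.1.1.10                cluster master IP address
    10.1.1.11 to 10.1.1.20   cluster nodes
    10.1.1.21 to 10.1.1.60   cluster resources (BCC-enabled and not)
  Cluster B (Building B): 10.1.2.0/24, with the same layout

Keeping the layout identical in each range makes the cross-cluster transformation rules described in Section 3.3.5 straightforward.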
3.3.4 Name Resolution
In BCC 1.1 and later, the master IP addresses are stored in the NCS:BCC Peers attribute. Ensure that SLP is properly configured for name resolution.
3.3.5 IP Addresses for BCC-Enabled Cluster Resources
Use dedicated IP address ranges for BCC-enabled cluster resources. With careful planning, the IP address and the name of the virtual server for the cluster resource never need to change.
The IP address of an inbound cluster resource is transformed to use an IP address in the same subnet of the peer cluster where it is being cluster migrated. You define the transformation rules to accomplish this by using the Identity Manager driver’s search and replace functionality. The transformation rules are easier to define and remember when you use strict IP address assignment, such as using the third octet to identify the subnet of the peer cluster.
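For example, with the hypothetical ranges sketched in Section 3.3.3, where the third octet 1 identifies cluster A’s subnet and 2 identifies cluster B’s subnet, a single search-and-replace rule covers every resource:

  Resource IP in cluster A:  10.1.1.45
  Resource IP in cluster B:  10.1.2.45
  Driver rule:               search for "10.1.1." and replace with "10.1.2."

This is a sketch of the idea only; the exact rule syntax depends on how the Identity Manager driver is configured (see Section 6.1.1).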
3.4 SAN Connectivity Guidelines
The primary objective of SAN (storage area network) connectivity in a cluster is to provide solid and stable connectivity between cluster nodes and the storage system. Before installing Novell Cluster Services and Novell Business Continuity Clustering, make sure the SAN configuration is established and verified.
Use the guidelines in this section to design the SAN connectivity for each of the peer clusters in the business continuity cluster:
Use host-based multipath I/O management.
Use redundant SAN connections to provide fault-tolerant connectivity between the cluster
nodes and the shared storage devices.
Connect each node via two fabrics to the storage environment.
Use a minimum of two mirror connections between storage environments over different fabrics
and wide area networks.
Make sure the distance between storage subsystems is within the limitations of the fabric used
given the amount of data, how the data is mirrored, and how long applications can wait for acknowledgement. Also make sure to consider support for asynchronous versus synchronous connections.
3.5 Storage Design Guidelines
Use the guidelines in this section to design the shared storage solution for each of the peer clusters in the business continuity cluster.
Use a LUN device as the failover unit for each BCC-enabled cluster resource. Multiple pools per LUN are possible, but are not recommended. A LUN cannot be concurrently accessed by servers belonging to different clusters. This means that all resources on a given LUN can be active in only one cluster at any given time. For maximum flexibility, we recommend that you create only one cluster resource per LUN.
If you use multiple LUNs for a given shared NSS pool, all LUNs must fail over together. We
recommend that you use only one LUN per pool, and only one pool per LUN.
Data must be mirrored between data centers by using host-based mirroring or storage-based
mirroring. Storage-based mirroring is recommended.
When using host-based mirroring, make sure that the mirrored partitions are accessible for the
nodes of only one of the BCC peer clusters at any given time. If you use multiple LUNs for a given pool, each segment must be mirrored individually. In large environments, it might be difficult to determine the mirror state of all mirrored partitions at one time. You must also make sure that all segments of the resource fail over together.
3.6 eDirectory Design Guidelines
Your Novell eDirectory solution for each of the peer clusters in the business continuity cluster must consider the following configuration elements. Make sure your approach is consistent across all peer clusters.
Section 3.6.1, “Object Location,” on page 32
Section 3.6.2, “Cluster Context,” on page 33
Section 3.6.3, “Partitioning and Replication,” on page 33
Section 3.6.4, “Objects Created by the BCC Drivers for Identity Manager,” on page 33
Section 3.6.5, “Landing Zone,” on page 33
Section 3.6.6, “Naming Conventions for BCC-Enabled Resources,” on page 34
3.6.1 Object Location
Cluster nodes and Cluster objects can exist anywhere in the eDirectory tree. The virtual server object, cluster pool object, and cluster volume object are automatically created in the eDirectory context of the server where the cluster resource is created and cluster-enabled. You should create cluster resources on the master node of the cluster.
3.6.2 Cluster Context
Place each cluster in a separate Organizational Unit (OU). All server objects and cluster objects for a given cluster should be in the same OU.
Figure 3-1 Cluster Resources in Separate OUs
3.6.3 Partitioning and Replication
Partition the cluster OU and replicate it to dedicated eDirectory servers holding a replica of the parent partition and to all cluster nodes. This helps prevent resources from being stuck in an NDS® Sync state when a cluster resource’s configuration is modified.
3.6.4 Objects Created by the BCC Drivers for Identity Manager
When a resource is BCC-enabled, its configuration is automatically synchronized with every peer cluster in the business continuity cluster by using customized Identity Manager drivers. The following eDirectory objects are created in each peer cluster:
Cluster Resource object
Virtual Server object
Cluster Pool object
Cluster Volume object
The Cluster Resource object is placed in the Cluster object of the peer clusters where the resource did not exist initially. The Virtual Server, Cluster Pool, and Cluster Volume objects are stored in the landing zone. Search-and-replace transform rules define cluster-specific modifications such as the IP address.
3.6.5 Landing Zone
Any OU can be defined as the BCC landing zone. Use a separate OU for the landing zone than you use for a cluster OU. The cluster OU for one peer cluster can be the landing zone OU for a different peer cluster.
3.6.6 Naming Conventions for BCC-Enabled Resources
Develop a cluster-independent naming convention for BCC-enabled cluster resources. It can become confusing if the cluster resource name refers to one cluster and is failed over to a peer cluster.
You can use a naming convention for resources in your BCC as you create those resources. Changing existing names of cluster resources is less straightforward and can be error prone.
For example, when cluster-enabling NSS pools the default naming conventions used by NSS are:
Cluster Resource: poolname_SERVER
Cluster-Enabled Pool: clustername_poolname_POOL
Cluster-Enabled Volume: clustername_volumename
Virtual Server: clustername_poolname_SERVER
Instead, use names that are independent of the clusters and that are unique across all peer clusters. For example, replace the clustername with something static such as BCC.
Cluster Resource: poolname_SERVER
Cluster-Enabled Pool: BCC_poolname_POOL
Cluster-Enabled Volume: BCC_volumename
Virtual Server: BCC_poolname_SERVER
Resources have an identity in each peer cluster, and the names are the same in each peer cluster. For example, Figure 3-2 shows the cluster resource identity in each of two peer clusters.
Figure 3-2 Cluster Resource Identity in Two Clusters
3.7 Cluster Design Guidelines
Your Novell Cluster Services solution for each of the peer clusters in the business continuity cluster must consider the following configuration guidelines. Make sure your approach is consistent across all peer clusters.
IP address assignments should be consistently applied within each peer cluster and for all
cluster resources.
Ensure that IP addresses are unique across all BCC peer clusters.
Volume IDs must be unique across all peer clusters. Each cluster node automatically assigns volume ID 0 to volume SYS and volume ID 1 to volume _ADMIN. Cluster-enabled volumes use high volume IDs, starting from 254 in descending order. The Novell Client uses the volume ID to access a volume.
When existing clusters are configured and enabled within the same business continuity cluster, the volume IDs for the existing shared volumes might also share the same volume IDs. To resolve this conflict, manually edit the load script for each volume that has been enabled for business continuity and change the volume IDs to unique values for each volume in the business continuity cluster.
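For example, a pool resource’s load script mounts the volume with an explicit volume ID (a hypothetical fragment; the pool and volume names are illustrative):

  nss /poolactivate=POOL1
  mount VOL1 VOLID=254

If the default ID (254) is already in use by a shared volume in another peer cluster, edit the mount line to use a free value, for example VOLID=250, so that the ID is unique across the business continuity cluster.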
BCC configuration should consider the configuration requirements for each of the services
supported across all peer clusters.
Create failover matrixes for each cluster resource so that you know what service is supported
and which nodes are the preferred nodes for failover within the same cluster and among the peer clusters.
4 Installing Business Continuity Clustering

This section describes how to install, set up, and configure Novell® Business Continuity Clustering 1.1 SP2 for NetWare® 6.5 SP8 (same as Novell Open Enterprise Server (OES) 2 SP1 NetWare) for your specific needs:
Section 4.1, “Requirements for BCC 1.1 SP2 for NetWare,” on page 37
Section 4.2, “Downloading the Business Continuity Clustering Software,” on page 47
Section 4.3, “Configuring a BCC Administrator User,” on page 47
Section 4.4, “Installing and Configuring the Novell Business Continuity Clustering Software,”
on page 50
Section 4.5, “What’s Next,” on page 53
4.1 Requirements for BCC 1.1 SP2 for NetWare
The requirements in this section must be met prior to installing Novell Business Continuity Clustering software.
Section 4.1.1, “Business Continuity Clustering Licensing,” on page 38
Section 4.1.2, “Business Continuity Cluster Component Locations,” on page 38
Section 4.1.3, “NetWare 6.5 SP8 (OES 2 SP1 NetWare),” on page 39
Section 4.1.4, “Novell Cluster Services 1.8.5 for NetWare,” on page 39
Section 4.1.5, “Novell eDirectory 8.8,” on page 40
Section 4.1.6, “SLP,” on page 41
Section 4.1.7, “Identity Manager 3.5.1 Bundle Edition,” on page 41
Section 4.1.8, “Novell iManager 2.7.2,” on page 43
Section 4.1.9, “Storage-Related Plug-Ins for iManager 2.7.2,” on page 43
Section 4.1.10, “Windows Workstation,” on page 44
Section 4.1.11, “OpenWBEM,” on page 44
Section 4.1.12, “Shared Disk Systems,” on page 45
Section 4.1.13, “Mirroring Shared Disk Systems Between Peer Clusters,” on page 45
Section 4.1.14, “LUN Masking for Shared Devices,” on page 45
Section 4.1.15, “Link Speeds,” on page 45
Section 4.1.16, “Ports,” on page 46
Section 4.1.17, “Web Browser,” on page 46
Section 4.1.18, “BASH,” on page 47
Section 4.1.19, “LIBC,” on page 47
Section 4.1.20, “autoexec.ncf File,” on page 47
4.1.1 Business Continuity Clustering Licensing
Novell Business Continuity Clustering software requires a license agreement for each business continuity cluster. For purchasing information, see Novell Business Continuity Clustering (http://www.novell.com/products/businesscontinuity/howtobuy.html).
4.1.2 Business Continuity Cluster Component Locations
Figure 4-1 illustrates where the various components needed for a business continuity cluster are
installed.
Figure 4-1 Business Continuity Cluster Component Locations
novdocx (en) 13 May 2009
Server
A
IDM Management Utilities
BCC iManager plug-ins
Fibre Channel
Building A
Ethernet Switch
NW 6.5
iManager
Server
1A
Switch
Fibre Channel
Disk Arrays
Server
NW 6.5
NCS
BCC eng.
2A
NW 6.5
NCS
BCC eng.
Server
3A
eDirectory
NW 6.5
NCS
BCC eng.
IDM eng.
IDM eDir Driver
Disk blocks
WAN
IDM
NW 6.5
NCS
BCC eng.
IDM eng.
IDM eDir Driver
SAN
NW 6.5
NCS
BCC eng.
Server
1B
Building B
Ethernet Switch
Server
2B
NW 6.5
iManager
NW 6.5
NCS
BCC eng.
Fibre Channel
Disk Arrays
Server
B
IDM
Server
Management
3B
Utilities
BCC iManager plug-ins
Fibre Channel Switch
Cluster Site 1
Figure 4-1 uses the following abbreviations:
BCC: Novell Business Continuity Clustering 1.1 SP2 for NetWare
NCS: Novell Cluster Services 1.8.5 for NetWare
IDM: Identity Manager 3.5.1 Bundle Edition
eDir: Novell eDirectory 8.8
NW 6.5: NetWare 6.5 SP8 (OES 2 SP1 NetWare)
4.1.3 NetWare 6.5 SP8 (OES 2 SP1 NetWare)
NetWare® 6.5 Support Pack 8 (the same as Novell Open Enterprise Server (OES) 2 SP1 NetWare) must be installed and running on every node in each peer cluster that will be part of the business continuity cluster.
See the OES 2 SP1: NetWare Installation Guide for information on installing and configuring NetWare 6.5 SP8 (same as OES 2 SP1 NetWare).
4.1.4 Novell Cluster Services 1.8.5 for NetWare
You need two to four clusters with Novell Cluster Services™ 1.8.5 (the version that ships with NetWare 6.5 SP8, same as OES 2 SP1 NetWare) installed and running on each node in the cluster.
See the OES 2 SP2: Novell Cluster Services 1.8.5 for NetWare Administration Guide for information on installing, configuring, and managing Novell Cluster Services.
Consider the following when preparing your clusters for the business continuity cluster:
“Cluster Names” on page 39
“Storage” on page 39
“eDirectory” on page 39
“Peer Cluster Credentials” on page 40
Cluster Names
Each cluster must have a unique name, even if the clusters reside in different Novell eDirectory™ trees. Clusters must not have the same name as any of the eDirectory trees in the business continuity cluster.
Storage
The storage requirements for Novell Business Continuity Clustering software are the same as for Novell Cluster Services. For more information, see the following in the OES 2 SP2: Novell Cluster
Services 1.8.5 for NetWare Administration Guide:
Hardware Requirements
Shared Disk System Requirements
Rules for Using Disks in a Shared Storage Space
Some storage vendors require you to purchase or license their CLI (Command Line Interface) separately. The CLI for the storage system might not initially be included with your hardware.
Also, some storage hardware may not be SMI-S compliant and cannot be managed by using SMI-S commands.
eDirectory
The recommended configuration is to have each cluster in the same eDirectory tree but in different OUs (Organizational Units). This guide focuses on the single-tree setup.
BCC 1.1 SP2 for NetWare also supports a business continuity cluster with clusters in two eDirectory trees. See Appendix B, “Implementing a Multiple-Tree BCC,” on page 127 for more information.
Peer Cluster Credentials
To add or change peer cluster credentials, you must access iManager on a server that is in the same eDirectory tree as the cluster where you are adding or changing peer credentials.
4.1.5 Novell eDirectory 8.8
Novell eDirectory 8.8 is supported with Business Continuity Clustering 1.1 SP2. See the eDirectory 8.8 documentation (http://www.novell.com/documentation/edir88/index.html) for more information.
Rights Needed for Installing BCC
The first time that you install the Business Continuity Clustering engine software in an eDirectory tree, the eDirectory schema is automatically extended with BCC objects.
IMPORTANT: The user who installs BCC must have the eDirectory credentials necessary to extend the schema.
If the eDirectory administrator username or password contains special characters (such as $, #, and so on), make sure to escape each special character by preceding it with a backslash (\) when you enter credentials.
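For example, if the administrator password were nov$ll (a hypothetical value), you would enter nov\$ll.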
Rights Needed for Individual Cluster Management
The BCC Administrator user is not automatically assigned the rights necessary to manage all aspects of each peer cluster. When managing individual clusters, you must log in as the Cluster Administrator user. You can manually assign the Cluster Administrator rights to the BCC Administrator user for each of the peer clusters if you want the BCC Administrator user to have all rights.
Rights Needed for BCC Management
Before you install BCC, create the BCC Administrator user and group identities in eDirectory to use when you manage the BCC. For information, see Section 4.3, “Configuring a BCC Administrator
User,” on page 47.
Rights Needed for Identity Manager
The node where Identity Manager is installed must have an eDirectory full replica with at least read/ write access to all eDirectory objects that will be synchronized between clusters. You can also have the eDirectory master running on the node instead of the replica.
The replica does not need to contain all eDirectory objects in the tree. The eDirectory full replica must have at least read/write access to the following containers in order for the cluster resource synchronization and user object synchronization to work properly:
The Identity Manager driver set container.
The container where the Cluster object resides.
The container where the Server objects reside.
If Server objects reside in multiple containers, this must be a container high enough in the tree to be above all containers that contain Server objects.
The best practice is to have all Server objects in one container.
The container where the cluster Pool objects and Volume objects are placed when they are synchronized to this cluster. This container is referred to as the landing zone. The NCP™ Server objects for the virtual server of a BCC-enabled resource are also placed in the landing zone.
The container where the User objects reside that need to be synchronized. Typically, the User
objects container is in the same partition as the cluster objects.
IMPORTANT: Full eDirectory replicas are required. Filtered eDirectory replicas are not supported with this version of Business Continuity Clustering software.
4.1.6 SLP
You must have SLP (Server Location Protocol) set up and configured properly on each server node in every cluster. Typically, SLP is installed as part of the eDirectory installation and setup when you install the server operating system for the server. For information, see “Implementing the Service
Location Protocol” (http://www.novell.com/documentation/edir88/edir88/data/ba5lb4b.html) in the
Novell eDirectory 8.8 Administration Guide.
4.1.7 Identity Manager 3.5.1 Bundle Edition
The Identity Manager 3.5.1 Bundle Edition is required for synchronizing the configuration of the peer clusters in your business continuity cluster. It is not involved in other BCC management operations such as migrating cluster resources within or across peer clusters.
Before you install Business Continuity Clustering on the cluster nodes, make sure that Identity Manager and the Identity Manager driver for eDirectory are installed on one node in each peer cluster that you want to be part of the business continuity cluster.
The same Identity Manager installation program that is used to install the Identity Manager engine is also used to install the Identity Manager eDirectory driver and management utilities. See “Business
Continuity Cluster Component Locations” on page 38 for information on where to install Identity
Manager components.
“Downloading the Bundle Edition” on page 41
“Credential for Drivers” on page 42
“Identity Manager Engine and eDirectory Driver” on page 42
“Identity Manager Driver for eDirectory” on page 42
“Identity Manager Management Utilities” on page 42
Downloading the Bundle Edition
The bundle edition is a limited release of Novell Identity Manager 3.5.1 for NetWare 6.5 SP8 (same as OES 2 SP1 NetWare) that allows you to use the Identity Manager software, the eDirectory driver, and the Identity Manager management tools for Novell iManager 2.7.2. BCC driver templates are applied to the eDirectory driver to create BCC-specific drivers that automatically synchronize BCC
configuration information between the Identity Manager nodes in peer clusters. To download the Bundle Edition, go to the Identity Manager 3.5.1 Bundle Edition download site (http://download.novell.com/Download?buildid=hEOxV3rys2M~).
Credential for Drivers
The Bundle Edition requires a credential that allows you to use drivers beyond an evaluation period. The credential can be found in the BCC license. In the Identity Manager interface in iManager, enter the credential for each driver that you create for BCC. You must also enter the credential for the matching driver that is installed in a peer cluster. You can enter the credential, or put the credential in a file that you point to.
Identity Manager Engine and eDirectory Driver
BCC requires Identity Manager 3.5.1 or later to run on one node in each of the clusters that belong to the business continuity cluster. (Identity Manager was formerly called DirXML®.) Identity Manager should not be set up as a clustered resource.
For information about installing and configuring Identity Manager 3.5.1, see the Identity Manager
3.5.1 documentation Web site (http://www.novell.com/documentation/idm35/).
The node where the Identity Manager engine and the eDirectory driver are installed must have an eDirectory full replica with at least read/write access to all eDirectory objects that will be synchronized between clusters. This does not apply to all eDirectory objects in the tree. For information about the eDirectory full replica requirements, see Section 4.1.5, “Novell eDirectory
8.8,” on page 40.
Identity Manager Driver for eDirectory
On the same node where you install the Identity Manager engine, install the following:
Single Tree: One instance of the Identity Manager driver for eDirectory.
Two Trees: Two instances of the Identity Manager driver for eDirectory. The eDirectory driver must be installed twice on each node: once for Tree A and once for Tree B.
For information about installing the Identity Manager driver for eDirectory, see Identity Manager
3.5.1 Driver for eDirectory: Implementation Guide (http://www.novell.com/documentation/ idm35drivers/edirectory/data/bktitle.html).
Identity Manager Management Utilities
The Identity Manager management utilities must be installed on the same server as Novell iManager. The Identity Manager utilities and iManager can be installed on a cluster node, but installing them on a non-cluster node is the recommended configuration. For information about iManager requirements for BCC, see Section 4.1.8, “Novell iManager 2.7.2,” on page 43.
IMPORTANT: Identity Manager plug-ins for iManager require that eDirectory is running and working properly in the tree. If the plug-in does not appear in iManager, make sure that the eDirectory daemon (ndsd) is running on the server that contains the eDirectory master replica.

To restart ndsd on the master replica server, enter the following command at its terminal console prompt as the root user:

rcndsd restart
4.1.8 Novell iManager 2.7.2
Novell iManager 2.7.2 (the version released with NetWare 6.5 SP8 (OES 2 SP1 NetWare)) must be installed and running on a server in the eDirectory tree where you are installing Business Continuity Clustering software. You need to install the BCC plug-in, the Clusters plug-in, and the Storage Management plug-in in order to manage the BCC in iManager. As part of the install process, you must also install plug-ins for the Identity Manager role that are management templates for configuring a business continuity cluster.
For information about installing and using iManager, see the Novell iManager 2.7 documentation
Web site (http://www.novell.com/documentation/imanager27/index.html).
The Identity Manager management utilities must be installed on the same server as iManager. You can install iManager and the Identity Manager utilities on a cluster node, but installing them on a non-cluster node is the recommended configuration. For information about Identity Manager requirements for BCC, see Section 4.1.7, “Identity Manager 3.5.1 Bundle Edition,” on page 41.
See “Business Continuity Cluster Component Locations” on page 38 for specific information on where to install Identity Manager components.
4.1.9 Storage-Related Plug-Ins for iManager 2.7.2
The Clusters plug-in (ncsmgmt.npm) has been updated from the release in NetWare 6.5 SP8 (OES 2 SP1 NetWare) to provide support for this release of Business Continuity Clustering. You must install the Clusters plug-in and the Storage Management plug-in (storagemgmt.npm). Other storage-related plug-ins are Novell Storage Services™ (NSS) (nssmgmt.npm), Novell CIFS (cifsmgmt.npm), Novell AFP (afpmgmt.npm), Novell Distributed File Services (dfsmgmt.npm), and Novell Archive and Version Services (avmgmt.npm). NSS is required in order to use shared NSS pools as cluster resources. The other services are optional.

IMPORTANT: The Storage Management plug-in module (storagemgmt.npm) contains common code required by all of the other storage-related plug-ins. Make sure that you include storagemgmt.npm when installing any of the others. If you use more than one of these plug-ins, you should install, update, or remove them all at the same time to make sure the common code works for all plug-ins.
The storage-related plug-ins are available as a zipped download on the Novell Downloads Web site (http://www.novell.com/downloads).
1 On the iManager server, if the OES 2 version of the storage-related plug-ins is installed, or if you upgraded this server from OES 2 Linux or NetWare 6.5 SP7, log in to iManager, then uninstall all of the storage-related plug-ins that are currently installed, including storagemgmt.npm.
This step is necessary for upgrades only if you did not uninstall and reinstall the storage-related plug-ins as part of the upgrade process.
2 Copy the new .npm files into the iManager plug-ins location, manually overwriting the older
version of the plug-in in the packages folder with the newer version of the plug-in.
3 In iManager, install all of the storage-related plug-ins, or install the plug-ins you need, plus the
common code.
4 Restart Tomcat by entering the following commands at a system console prompt:
tc4stop
tomcat4
5 Restart Apache by entering the following command at a system console prompt:
ap2webrs
4.1.10 Windows Workstation
The Business Continuity Clustering installation program is run from a Windows* workstation. Prior to running the installation program:
The Windows workstation must have the latest Novell Client™ software installed.
You must be authenticated to the eDirectory tree where the cluster resides.
4.1.11 OpenWBEM
OpenWBEM must be configured to start in autoexec.ncf, and it must be running when you manage the cluster with Novell iManager. For information on setup and configuration, see the OES 2: OpenWBEM Services Administration Guide (http://www.novell.com/documentation/oes2/mgmt_openwbem_lx_nw/data/front.html).
Port 5989 is the default setting for secure HTTP (HTTPS) communications. If you are using a firewall, the port must be opened for CIMOM communications.
Beginning in OES 2 (NetWare 6.5 SP7), the Clusters plug-in (and all other storage-related plug-ins) for iManager requires CIMOM connections for tasks that transmit sensitive information (such as a username and password) between iManager and the _admin volume on the OES 2 server that you are managing. Typically, CIMOM is running, so this should be the normal condition when using the server. CIMOM connections use Secure HTTP (HTTPS) for transferring data, and this ensures that sensitive data is not exposed.
If CIMOM is not currently running when you click OK or Finish for the task that sends the sensitive information, you get an error message explaining that the connection is not secure and that CIMOM must be running before you can perform the task.
IMPORTANT: If you receive file protocol errors, it might be because WBEM is not running.
To check the status of WBEM:
1 As the Admin user or equivalent, enter the following at the server console:
modules owcimomd
To start WBEM:
1 As the Admin user or equivalent, enter the following at the server console:
openwbem
4.1.12 Shared Disk Systems
For Business Continuity Clustering, a shared disk system is required for each peer cluster in the business continuity cluster. See “Shared Disk System Requirements” in the OES 2 SP2: Novell
Cluster Services 1.8.5 for NetWare Administration Guide.
In addition to the shared disks in an original cluster, you need additional shared disk storage in the other peer clusters to mirror the data between sites as described in Section 4.1.13, “Mirroring Shared
Disk Systems Between Peer Clusters,” on page 45.
4.1.13 Mirroring Shared Disk Systems Between Peer Clusters
The Business Continuity Clustering software does not perform data mirroring. You must separately configure either storage-based mirroring or host-based file system mirroring for the shared disks that you want to fail over between peer clusters. Storage-based synchronized mirroring is the preferred solution.
IMPORTANT: Use whatever method is available to implement storage-based mirroring or host-based file system mirroring between the peer clusters for each of the shared disks that you plan to fail over between peer clusters.
For information about how to configure host-based file system mirroring for Novell Storage Services pool resources, see Appendix D, “Configuring Host-Based File System Mirroring for NSS
Pools,” on page 137.
For information about storage-based mirroring, consult the vendor for your storage system or see the vendor documentation.
4.1.14 LUN Masking for Shared Devices
LUN masking is the ability to exclusively assign each LUN to one or more host connections. With it, you can assign appropriately sized pieces of storage from a common storage pool to various servers. See your storage system vendor documentation for more information on configuring LUN masking.
When you create a Novell Cluster Services system that uses a shared storage system, it is important to remember that all of the servers that you grant access to the shared device, whether in the cluster or not, have access to all of the volumes on the shared storage space unless you specifically prevent such access. Novell Cluster Services arbitrates access to shared volumes for all cluster nodes, but cannot protect shared volumes from being corrupted by non-cluster servers.
Software included with your storage system can be used to mask LUNs or to provide zoning configuration of the SAN fabric to prevent shared volumes from being corrupted by non-cluster servers.
IMPORTANT: We recommend that you implement LUN masking in your business continuity cluster for data protection. LUN masking is provided by your storage system vendor.
4.1.15 Link Speeds
For real-time mirroring, link speeds should be 1 Gbps or better, the Fibre Channel cable length between sites should be less than 200 kilometers, and the links should be dedicated.
Many factors should be considered for distances greater than 200 kilometers, some of which include:
The amount of data being transferred
The bandwidth of the link
Whether or not snapshot technology is being used
4.1.16 Ports
If you are using a firewall, the ports must be opened for OpenWBEM and the Identity Manager drivers.
Table 4-1 Default Ports for the BCC Setup
Product                                     Default Port
OpenWBEM                                    5989
eDirectory driver                           8196
Cluster Resources Synchronization driver    2002 (plus the ports for additional instances)
User Object Synchronization driver          2001 (plus the ports for additional instances)
4.1.17 Web Browser
When using iManager, make sure your Web browser settings meet the requirements in this section.
“Web Browser Language Setting” on page 46
“Web Browser Character Encoding Setting” on page 46
Web Browser Language Setting
The iManager plug-in might not operate properly if the highest priority Language setting for your Web browser is set to a language other than one of iManager's supported languages. To avoid problems, in your Web browser, click Tools > Options > Languages, then set the first language preference in the list to a supported language.
Refer to the Novell iManager documentation (http://www.novell.com/documentation/imanager27/) for information about supported languages.
Web Browser Character Encoding Setting
Supported language codes are Unicode (UTF-8) compliant. To avoid display problems, make sure the Character Encoding setting for the browser is set to Unicode (UTF-8) or ISO 8859-1 (Western, Western European, West European).
In a Mozilla browser, click View > Character Encoding, then select the supported character encoding setting.
In an Internet Explorer browser, click View > Encoding, then select the supported character encoding setting.
4.1.18 BASH
BASH must be installed on all nodes that participate in the business continuity cluster. The BASH shell does not need to be running, only installed.
4.1.19 LIBC
You must have the latest LIBC patch installed. This is currently libcsp6X. See LIBC Update
NetWare 6.5 SP6 9.00.05 (Technical Information Document # 5003460) (http://www.novell.com/ support/search.do?cmd=displayKC&docType=kc&externalId=InfoDocument-patchbuilder­readme5003460&sliceId=&dialogID=36100275&stateId=0%200%2036102542).
4.1.20 autoexec.ncf File
The sys:\system\autoexec.ncf file must be modified so that the call to sys:/bin/unixenv.ncf is before the calls to openwbem.ncf and ldbcc.ncf.
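For example, the relevant calls in autoexec.ncf might be ordered as follows (a minimal sketch; the other commands in the file are omitted and vary by server):

  # Set up the UNIX environment before OpenWBEM and BCC load
  sys:/bin/unixenv.ncf
  # ...other startup commands...
  openwbem.ncf
  ldbcc.ncf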
4.2 Downloading the Business Continuity Clustering Software
Before you install Novell Business Continuity Clustering, download and copy the software to a directory on your Windows workstation. To download Novell Business Continuity Clustering 1.1 SP2 for NetWare 6.5 SP8, go to the Novell Business Continuity Clustering download site (http://download.novell.com/Download?buildid=bdkmSxRgKVk~).
4.3 Configuring a BCC Administrator User
The BCC Administrator user is a trustee of each of the peer Cluster objects in the business continuity cluster. During the install, you specify an existing user to be the BCC Administrator user. This user should have at least Read and Write rights to the All Attribute Rights property on the Cluster object of the remote cluster. The user should also have rights to the sys:\tmp directory.
Section 4.3.1, “Creating the BCC Administrator User,” on page 47
Section 4.3.2, “Assigning Trustee Rights for the BCC Administrator User to the Cluster
Objects,” on page 48
Section 4.3.3, “Assigning Trustee Rights for the BCC Administrator User to the _ADMIN
Volume,” on page 48
Section 4.3.4, “Assigning Trustee Rights for the BCC Administrator User to the sys:\tmp
Directory,” on page 49
4.3.1 Creating the BCC Administrator User
The BCC Administrator user will be a trustee of each of the peer cluster objects in the business continuity cluster. Identify an existing user, or create a new user, who you want to use as the BCC Administrator user.
4.3.2 Assigning Trustee Rights for the BCC Administrator User to the Cluster Objects
Assign trustee rights to the BCC Administrator user for each cluster that you plan to add to the business continuity cluster.
1 Start your Internet browser and enter the URL for iManager.
The URL is http://server_ip_address/nps/iManager.html. Replace server_ip_address with the IP address or DNS name of the NetWare server or Windows server where you have installed iManager and the Identity Manager preconfigured templates for iManager.
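For example, if the iManager server's address were 10.1.1.5 (a hypothetical address), the URL would be http://10.1.1.5/nps/iManager.html.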
2 Specify your username and password, specify the tree where you want to log in, then click
Login.
3 In the Roles and Tasks column, click Rights, then click the Modify Trustees link.
4 Specify the Cluster object name, or browse and select it, then click OK.
5 If the BCC Administrator user is not listed as a trustee, click the Add Trustee button, browse
and select the User object, then click OK.
6 Click Assigned Rights for the BCC Administrator user, and then ensure the Read and Write
check boxes are selected for the All Attributes Rights property.
7 Click Done to save your changes.
8 Repeat Step 3 through Step 7 for the other clusters in your business continuity cluster.
4.3.3 Assigning Trustee Rights for the BCC Administrator User to the _ADMIN Volume
You must also ensure that the BCC Administrator user has file system rights to the _ADMIN:\Novell\Cluster directory of each of the nodes in your BCC. This is necessary because the _ADMIN volume is virtual, and is created each time the server starts. For this reason, you cannot assign eDirectory trustee rights to the _ADMIN volume.

To assign BCC Administrator user file system rights to the _ADMIN:\Novell\Cluster directory:

1 Open the sys:\etc\trustees.xml file.
2 Add a trustee entry for the BCC Administrator user that assigns Read, Write, Modify, and File Scan (RWMF) rights to the _ADMIN:\Novell\Cluster directory.
3 Repeat this process on all NetWare nodes that are part of your BCC.

The trustee entry could be similar to the following entry:

<addTrustee>
  <name>BCCAdmin.users.lab.acme_tree</name>
  <fileName>_ADMIN:\Novell\Cluster</fileName>
  <rights>
    <read/>
    <write/>
    <fileScan/>
    <modify/>
  </rights>
</addTrustee>

Note the following items with this example:

The <name> element is the BCC Administrator user. The tree name is required.
The <fileName> element must be _ADMIN:\Novell\Cluster.
The rights must be RWMF.
You must add the trustee entry to all the NetWare nodes in your BCC.

The following is an example of a complete trustees.xml file. Note the multiple trustee entries. For this reason, you should edit this file and add the BCC entry rather than copy the file from server to server.

<specialTrustees>
  <addTrustee>
    <name>BCCAdmin.users.lab.acme_tree</name>
    <fileName>_ADMIN:\Novell\Cluster</fileName>
    <rights>
      <read/>
      <write/>
      <fileScan/>
      <modify/>
    </rights>
  </addTrustee>
  <addTrustee>
    <context/>
    <name>[public]</name>
    <fileName>_admin:manage_nss\files.cmd</fileName>
    <rights>
      <read/>
      <write/>
      <fileScan/>
    </rights>
    <background/>
  </addTrustee>
</specialTrustees>
After the trustees.xml file has been modified on all NetWare nodes, the NetWare nodes must be rebooted. This can be done in a rolling fashion. You should start with the node that has the highest IP address first and work down in IP address order. This speeds the rate at which the Novell Cluster Services master node acquires the change.
4.3.4 Assigning Trustee Rights for the BCC Administrator User to the sys:\tmp Directory
You must also ensure that the BCC Administrator user is a trustee with Read, Write, Create, Erase, Modify, and File Scan access rights to the sys:\tmp directory on every node in your NetWare clusters.

IMPORTANT: If you are concerned about denial of service attacks with the BCC Administrator user, you can set a quota of 5 MB for that user. This can prevent the BCC Administrator user from filling the sys: volume by copying an excessive number of files to the sys:\tmp directory.

To assign BCC Administrator user file system rights to the sys:\tmp directory:

1 Open the sys:\etc\trustees.xml file.
2 Add a trustee entry for the BCC Administrator user that assigns Read, Write, Create, Erase, Modify, and File Scan (RWCEMF) rights to the sys:\tmp directory.
3 Repeat this process on all NetWare nodes that are part of your BCC.
The trustee entry could be similar to the following entry:
<addTrustee>
  <name>BCCAdmin.users.lab.acme_tree</name>
  <fileName>sys:\tmp</fileName>
  <rights>
    <read/>
    <write/>
    <create/>
    <erase/>
    <fileScan/>
    <modify/>
  </rights>
</addTrustee>
Note the following items with this example:
The <name> element is the BCC Administrator user. The tree name is required.
The <fileName> element must be sys:\tmp.
The rights must be RWCEMF.
You must add the trustee entry to all the NetWare nodes in your BCC.

IMPORTANT: Make sure that you edit each trustees.xml file on each cluster node to add the BCC entry rather than copy the file from server to server.

After the trustees.xml file has been modified on all NetWare nodes, the NetWare nodes must be rebooted. This can be done in a rolling fashion. You should start with the node that has the highest IP address first and work down in IP address order. This speeds the rate at which the Novell Cluster Services master node acquires the change.
4.4 Installing and Configuring the Novell Business Continuity Clustering Software
It is necessary to run the Novell Business Continuity Clustering installation program when you want to:
Install the Business Continuity Clustering engine software on cluster nodes for the clusters that
will be part of a business continuity cluster.
The Business Continuity Clustering installation installs to only one cluster at a time. You must run the installation program again for each NetWare cluster that you want to be part of a business continuity cluster.
Install the BCC-specific Identity Manager templates for iManager snap-ins on either a NetWare 6.5 SP8 server (same as OES 2 SP1 NetWare) or a Windows server.
The templates add functionality to iManager so you can manage your business continuity cluster. You must have previously installed iManager on the server where you plan to install the templates.
IMPORTANT: Before you begin, make sure your setup meets the requirements specified in
Section 4.1, “Requirements for BCC 1.1 SP2 for NetWare,” on page 37. The BCC Administrator
user and group must already be configured as specified in Section 4.3, “Configuring a BCC
Administrator User,” on page 47.
Section 4.4.1, “Installing the BCC Engine,” on page 51
Section 4.4.2, “Installing the Identity Manager Templates,” on page 52
4.4.1 Installing the BCC Engine
You must install the Business Continuity Clustering engine software on each cluster node for the clusters that will be part of a business continuity cluster. You install the software on the nodes of one cluster at a time.
To install and configure Business Continuity Clustering, complete the following steps:
1 From the directory on your Windows workstation where you copied the Business Continuity Clustering software, run install.exe.
For download information, see Section 4.2, “Downloading the Business Continuity Clustering
Software,” on page 47.
2 Continue through the installation wizard until you get to the page that prompts you to select the
components to install.
3 Select one of the Identity Manager Templates for iManager installation options, select the
Novell Business Continuity Clustering component, then click Next.
The templates add functionality to iManager so you can manage your business continuity cluster. You must have previously installed iManager on the server where you plan to install the templates.
Identity Manager Templates for NetWare iManager Servers: Installs the templates on a NetWare iManager server. You will be asked to specify the NetWare server where the templates will be installed later in the installation.
Identity Manager Templates for Windows iManager Servers: Installs the templates on the local Windows iManager server. You will be asked to specify the path to Tomcat (a default path is provided) on the Windows server later in the installation.
Novell Business Continuity Clustering: Installs the core Business Continuity Clustering engine files. This core software must be installed on all nodes in each Novell Cluster Services cluster that will be part of a business continuity cluster.
4 Do one of the following:
NetWare iManager Server: If you chose to install the Identity Manager iManager
templates on a NetWare server, specify the name of the eDirectory tree and the fully distinguished name for the server where you want to install the templates. Then click Next.
If you don’t know the fully distinguished name for the server, you can browse and select it.
Windows iManager Server: If you chose to install the Identity Manager iManager
templates on a Windows server, specify the path to Tomcat (a default path is provided) on the server. Then click Next.
5 Continue through the Upgrade Reminder page, then specify the name of the eDirectory tree and
the fully distinguished name for the cluster where you want to install the core software files.
If you don’t know the fully distinguished name for the cluster, you can browse and select it.
6 Select the servers in the cluster where you want to install the core software files for the
Business Continuity Clustering product.
All servers currently in the cluster you specified are listed and are selected by default.
You can choose to automatically start Business Continuity Clustering software on each selected node after the installation is complete. If Business Continuity Clustering software is not started automatically after the installation, you can start it manually later by rebooting the cluster server or by entering LDBCC at the server console.
7 Enter the name and password of an eDirectory user (or browse and select one) with sufficient rights to manage your BCC. This name should be entered in eDirectory dot format. For example, admin.servers.novell.
This user should have at least Read and Write rights to the All Attribute Rights property on the Cluster object of the remote cluster. For information, see Section 4.3, “Configuring a BCC
Administrator User,” on page 47.
8 Continue through the final installation page, then restart the cluster nodes where Identity Manager is running and where you have upgraded libc.nlm.
Restarting the cluster nodes can be performed in a rolling fashion in which one server is restarted while the other servers in the cluster continue running. Then another server is restarted, and then another, until all servers in the cluster have been restarted.
This lets you keep your cluster up and running and lets your users continue to access the network while cluster nodes are being restarted.
9 Repeat the above steps for each Novell Cluster Services cluster that will be part of the business
continuity cluster.
4.4.2 Installing the Identity Manager Templates
After the install, you can use the Business Continuity Clustering install program to install the Identity Manager templates on additional iManager servers in the same tree as the business continuity cluster.
1 From the directory on your Windows workstation where you copied the Business Continuity Clustering software, run install.exe.
For download information, see Section 4.2, “Downloading the Business Continuity Clustering
Software,” on page 47.
2 Continue through the installation wizard until you get to the page that prompts you to select the
components to install.
3 Select one of the Identity Manager Templates for iManager installation options, deselect the
Novell Business Continuity Clustering component, then click Next.
The templates add functionality to iManager so you can manage your business continuity cluster. You must have previously installed iManager on the server where you plan to install the templates.
Identity Manager Templates for NetWare iManager Servers: Installs the templates on a NetWare iManager server. You will be asked to specify the NetWare server where the templates will be installed later in the installation.
Identity Manager Templates for Windows iManager Servers: Installs the templates on the local Windows iManager server. You will be asked to specify the path to Tomcat (a default path is provided) on the Windows server later in the installation.
4 Do one of the following:
NetWare iManager Server: If you chose to install the Identity Manager iManager
templates on a NetWare server, specify the name of the eDirectory tree and the fully distinguished name for the server where you want to install the templates. Then click Next.
If you don’t know the fully distinguished name for the server, you can browse and select it.
Windows iManager Server: If you chose to install the Identity Manager iManager
templates on a Windows server, specify the path to Tomcat (a default path is provided) on the server. Then click Next.
5 Continue through the final installation page.
4.5 What’s Next
After you have installed BCC on every node in each cluster that you want to be in the business continuity cluster, continue with configuring the BCC. For information, see Chapter 6, “Configuring
Business Continuity Clustering Software,” on page 67.
5 Upgrading Business Continuity Clustering for NetWare
Novell® Business Continuity Clustering (BCC) 1.1 SP2 for NetWare® 6.5 SP8 (same as Novell Open Enterprise Server (OES) 2 SP1 for NetWare) supports upgrades from Novell Cluster Services™ clusters that are running BCC 1.1 SP1 for NetWare 6.5 SP6 (same as OES 1 SP2 NetWare) or from clusters running BCC 1.0 (which is available on NetWare only).
BCC 1.2 for OES 2 SP1 Linux supports conversion from BCC 1.1 SP2 on NetWare. In order to convert BCC clusters from NetWare to Linux clusters, you must first upgrade existing BCC 1.0 or BCC 1.1 SP1 for NetWare clusters to BCC 1.1 SP2 for NetWare. For information about converting to BCC 1.2 for Linux, see “Converting BCC Clusters from NetWare to Linux” in the BCC 1.2:
Administration Guide for Linux.
This section covers two upgrade scenarios:
Section 5.1, “Guidelines for Upgrading,” on page 55
Section 5.2, “Disabling BCC 1.0, Upgrading Servers to NetWare 6.5 SP8, then Enabling BCC
1.1 SP2,” on page 56
Section 5.3, “Upgrading Clusters from BCC 1.0 to BCC 1.1 SP2 for NetWare,” on page 58
Section 5.4, “Upgrading Clusters from BCC 1.1 SP1 to SP2 for NetWare,” on page 63
5.1 Guidelines for Upgrading
Use the guidelines in this section to upgrade clusters one peer cluster at a time.
Section 5.1.1, “Requirements,” on page 55
Section 5.1.2, “Performing a Rolling Cluster Upgrade,” on page 56
5.1.1 Requirements
BCC 1.1 SP2 for NetWare requires that every node in each peer cluster be upgraded to NetWare 6.5 SP8 (same as OES 2 SP1 NetWare) with the latest patches for NetWare and Novell Cluster Services. For information, see the following resources:
NetWare: “Upgrading to OES 2 SP1 NetWare” in the OES 2 SP1: NetWare Installation Guide
for information about how to upgrade the NetWare 6.5 SP6 servers to NetWare 6.5 SP8
Novell Cluster Services: “Installation and Setup” in the OES 2 SP2: Novell Cluster Services 1.8.5 for
NetWare Administration Guide for information about what is required for Novell Cluster
Services for NetWare
Business Continuity Clustering: Section 4.1, “Requirements for BCC 1.1 SP2 for NetWare,”
on page 37 for information about what is required for Business Continuity Clustering 1.1 SP2
for NetWare
Identity Manager: Identity Manager 3.5.1 Installation Guide (http://www.novell.com/
documentation/idm35/install/data/front.html). For Identity Manager configuration
requirements, see Section 4.1.7, “Identity Manager 3.5.1 Bundle Edition,” on page 41.
5.1.2 Performing a Rolling Cluster Upgrade
Performing a rolling upgrade of NetWare and applying the latest patches lets you keep your cluster up and running and lets your users continue to access the network while the upgrade is being performed.
For example, during a rolling cluster upgrade, one server is upgraded from NetWare 6.5 SP6 to SP8 and the latest patches are applied while the other servers in the cluster continue running a previous support pack of NetWare 6.5. Then another server is upgraded and patched, and then another, until all servers in the cluster have been upgraded to NetWare 6.5 SP8 with the latest patches.
After upgrading NetWare and applying the latest patches, reboot the server to automatically load Cluster Services software.
During the upgrade process, cluster pools, volumes, and resources fail over from the server being upgraded to other servers in the cluster. After a cluster server is upgraded and brought back online, the pools, volumes, and resources that failed over to other servers in the cluster during the upgrade process fail back to the upgraded server.
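At the server console, the per-node portion of a rolling cluster upgrade follows this general pattern (a sketch only; the upgrade and patch steps themselves are performed with the installation media or NWCONFIG, as described in the scenarios that follow):

cluster leave
(upgrade NetWare 6.5 to SP8, apply the latest patches, then reboot)
cluster join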
5.2 Disabling BCC 1.0, Upgrading Servers to NetWare 6.5 SP8, then Enabling BCC 1.1 SP2
The safest and most straightforward way for you to upgrade from BCC 1.0 to BCC 1.1 SP2 is to disable BCC on the peer clusters, upgrade to NetWare 6.5 SP8, re-install BCC 1.1 SP2, and reconfigure BCC for the clusters.
This approach leaves you without the BCC capability during the upgrade; however, the clustering high availability is still active.
1 Create a worksheet for the Identity Manager driver configuration for each peer cluster where
you note the ports, landing zones, BCC drivers and driver sets, and certificates that are currently in use.
You can use this information later when you re-create the drivers.
2 Stop the BCC drivers from running on the Identity Manager nodes of every peer cluster.
3 On the Identity Manager node in each peer cluster, delete the Identity Manager drivers and
driver sets for the cluster.
4 Disable BCC for each of the cluster resources on all nodes in every peer cluster.
5 Disable BCC for each peer cluster.
6 Clean up the landing zones in each peer cluster.
7 Uninstall BCC 1.0 from each node in every peer cluster.
8 Perform a rolling cluster upgrade for all nodes in a peer cluster:
8a Issue the cluster leave command on one node in the cluster.
8b Update NetWare 6.5 from version SP5 or SP6 to version SP8, and apply the latest patches for NetWare and Novell Cluster Services.
For information, see the NetWare Installation Guide (http://www.novell.com/documentation/oes2/inst_oes_nw/data/b7211uh.html).
8c Issue the cluster join command on the node.
8d Install BCC 1.1 SP2 on each node in the cluster. Do not start BCC at this time.
8e If the node is running iManager, update the following iManager plug-ins by uninstalling
the existing NPM files, and then re-installing the correct iManager NPMs, as described in
TID 3009687: You must specify an IP address; iManager plug-ins not working after update to SP6 (http://www.novell.com/support/php/
search.do?cmd=displayKC&docType=kc&externalId=3009687).
Novell Archive and Version Services (arkmgmt.npm)
Novell Cluster Services (ncsmgmt.npm)
Novell Storage Services™ (nssmgmt.npm)
Storage Management (storagemgmt.npm)
8f Repeat Step 8a through Step 8e for each node in the cluster.
9 On the Identity Manager node in the cluster, upgrade Identity Manager from version 2.x to
3.5.1.
For information, see “Upgrading” (http://www.novell.com/documentation/idm35/install/data/
ampxjxi.html) in the Identity Manager 3.5.1 Installation Guide.
9a Before you begin, make sure that the Identity Manager node meets the 3.5.1 upgrade
requirements.
9b Upgrade Identity Manager from version 2.x to 3.5.1.
IMPORTANT: Do not start the drivers yet.
The upgrade updates the Identity Manager software and its plug-ins for iManager 2.7.2 on the same node.
9c Stop Tomcat 5 by issuing the tc5stop command.
9d Reset Apache 2 by using the ap2webrs command.
9e Start Tomcat 5 by using the tomcat command.
9f Wait at least 5 minutes for the iManager changes to take effect, then restart iManager.
9g In iManager, verify that Identity Manager is available.
IMPORTANT: In iManager, the Identity Manager plug-ins are displayed as version 3.6. The 3.6 version functions properly with Identity Manager 3.5.1.
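For reference, the restart sequence in Step 9c through Step 9e is entered at the NetWare server console as follows:

tc5stop
ap2webrs
tomcat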
10 On the Identity Manager node in each peer cluster, create new Identity Manager driver sets and
the BCC drivers with the new BCC 1.1 SP2 templates.
IMPORTANT: Do not create a new partition when creating the driver set.
You can use the information you documented in Step 1 as a guide for creating the drivers and driver sets. If desired, you can use the same names and port numbers that you used for BCC
1.0.
11 Start the new Identity Manager drivers for each peer cluster.
12 Re-enable BCC for the peer clusters.
13 Re-enable BCC and configure BCC for the cluster resources.
5.3 Upgrading Clusters from BCC 1.0 to BCC 1.1 SP2 for NetWare
Upgrading from BCC 1.0 to BCC 1.1 SP2 while leaving the BCC configuration in place is a two-phase process where you must perform an intermediate upgrade to BCC 1.1 SP1 on NetWare 6.5 SP6, as follows:
Initial Configuration: NetWare 6.5 SP5, BCC 1.0, Identity Manager 2.x
Intermediate Configuration: NetWare 6.5 SP6, BCC 1.1 SP1 for NetWare, Identity Manager 2.x
Final Configuration: NetWare 6.5 SP8, BCC 1.1 SP2 for NetWare, Identity Manager 3.5.1
You must perform the upgrade sequence on every node in each peer cluster that is part of your existing business continuity cluster.
IMPORTANT: Before you begin, review Section 4.1, “Requirements for BCC 1.1 SP2 for
NetWare,” on page 37.
During a BCC upgrade, Business Continuity Clustering 1.0 clusters are unable to communicate with Business Continuity Clustering 1.1 clusters. This condition exists temporarily until the upgrade has been completed. If an actual disaster were to occur during the upgrade, a Business Continuity Clustering 1.0 cluster can be failed over to a Business Continuity Clustering 1.1 cluster.
Section 5.3.1, “Upgrading the BCC Cluster from 1.0 to 1.1 SP1,” on page 58
Section 5.3.2, “Upgrading the BCC Cluster from 1.1 SP1 to 1.1 SP2,” on page 60
5.3.1 Upgrading the BCC Cluster from 1.0 to 1.1 SP1
Perform a rolling cluster upgrade for each peer cluster from BCC 1.0 to BCC 1.1 SP1 to achieve the intermediate configuration as follows:
Initial Configuration: NetWare 6.5 SP5, BCC 1.0, Identity Manager 2.x, iManager 2.5 or 2.6
Intermediate Configuration: NetWare 6.5 SP6, BCC 1.1 SP1 for NetWare, Identity Manager 2.x, iManager 2.6
1 Create a worksheet for the Identity Manager driver configuration for each peer cluster where
you note the ports, landing zones, BCC drivers and driver sets, and certificates that are currently in use.
You can use this information later in Step 5b when you re-create the drivers.
2 Stop the Identity Manager (formerly DirXML) drivers from running on the Identity Manager
node of every peer cluster in the business continuity cluster.
3 Perform a rolling cluster upgrade for all nodes in a peer cluster:
3a Issue the cluster leave command on one node in the cluster.
3b Update NetWare 6.5 from version SP5 to SP6, and apply the latest patches for NetWare and Novell Cluster Services.
You can perform a script upgrade by using NWCONFIG.
For information, see “Updating an OES NetWare Server” (http://www.novell.com/documentation/oes/install-nw/data/b7211uh.html) in the OES NetWare Installation Guide.
See the Business Continuity Clustering 1.1 Readme file (http://www.novell.com/documentation/bcc/pdfdoc/readme/readme111.pdf) for instructions on mandatory patches.
3c Issue the cluster join command on the node.
3d Upgrade Business Continuity Clustering from version 1.0 to 1.1 SP1 by installing BCC
1.1 SP1.
IMPORTANT: Do not start BCC at this time.
The Business Continuity Clustering 1.1 installation program automatically detects if Business Continuity Clustering 1.0 is installed and performs the necessary updates to convert 1.0 to 1.1. This includes searching eDirectory for SAN scripts and updating those scripts to be SMI-S compliant.
For installation instructions for BCC 1.1 SP1, see “Running the Business Continuity
Cluster Installation Program” (http://www.novell.com/documentation/bcc/ bcc_administration_nw/data/ht05s5vv.html#h7qplroj) in the Novell Business Continuity Clustering 1.1 for NetWare Administration Guide (http://www.novell.com/documentation/ bcc/bcc_administration_nw/data/bktitle.html).
3e If the node is running iManager, update the following iManager plug-ins by uninstalling
the existing NPM files, and then re-installing the correct iManager NPMs, as described in
TID 3009687: You must specify an IP address; iManager plug-ins not working after update to SP6 (http://www.novell.com/support/php/
search.do?cmd=displayKC&docType=kc&externalId=3009687).
Novell Archive and Version Services (arkmgmt.npm)
Novell Cluster Services (ncsmgmt.npm)
Novell Storage Services™ (nssmgmt.npm)
Storage Management (storagemgmt.npm)
3f Repeat Step 3a through Step 3e for each node in the cluster.
4 Repeat the rolling cluster upgrade in Step 3 for each peer cluster.
5 On the Identity Manager node in each peer cluster, delete and re-create the Identity Manager
driver sets for the cluster:
5a Delete the Identity Manager 2.x driver sets.
5b Create new Identity Manager driver sets and the BCC drivers with the new BCC 1.1 SP1
templates.
IMPORTANT: Do not create a new partition when creating the driver set.
You can use the information you documented in Step 1 as a guide for creating the drivers and driver sets. If desired, you can use the same names and port numbers that you used for BCC 1.0.
5c Repeat Step 5a through Step 5b for each peer cluster.
6 Issue the ldbcc.ncf command on each server node to have them join the BCC.
7 Start the Identity Manager drivers.
8 BCC disable and enable each BCC resource to make sure the BCC attributes are updated.
If a BCC enabled resource is missing BCC attributes, try deleting the eDirectory™ Cluster Resource objects for the pool resource, then re-create the cluster resource to get it back to a usable state in the BCC.
9 Reset the cluster peer credentials between clusters.
The BCC Administrator user credentials that were set for BCC 1.0 do not work with BCC 1.1. A fully distinguished eDirectory name (FDN) was required for BCC 1.0, but BCC 1.1 requires only the BCC administrator name.
For instructions on resetting BCC Administrator user credentials, see Section 7.2, “Changing
Cluster Peer Credentials,” on page 84.
IMPORTANT: Make sure the administrator username meets the requirements specified in
Section 4.3, “Configuring a BCC Administrator User,” on page 47.
10 Verify that Novell Cluster Services and BCC 1.1 SP1 appear to be functioning correctly by
performing the following tests:
10a Create a new BCC enabled pool resource, BCC migrate it between peer clusters, and
verify that it migrates correctly.
10b Make a load script change (for example, add a space character and save the change) to an
existing BCC resource that was created with 1.0, allow the revised load script to synchronize, then verify that the load script was updated at all eDirectory locations.
10c BCC migrate the pool resource that you modified in Step 10b between peer clusters, and
verify that it migrates correctly.
10d Check all SAN scripts to ensure that they will perform the desired functions.
11 Continue with Section 5.3.2, “Upgrading the BCC Cluster from 1.1 SP1 to 1.1 SP2,” on
page 60.
5.3.2 Upgrading the BCC Cluster from 1.1 SP1 to 1.1 SP2
In the second phase of the upgrade from BCC 1.0, you perform a rolling cluster upgrade for each peer cluster from BCC 1.1 SP1 to BCC 1.1 SP2 to achieve the final configuration as follows:
Intermediate Configuration: NetWare 6.5 SP6, BCC 1.1 SP1 for NetWare, Identity Manager 2.x
Final Configuration: NetWare 6.5 SP8, BCC 1.1 SP2 for NetWare, Identity Manager 3.5.1
1 Stop the Identity Manager (formerly DirXML) drivers from running in the Identity Manager
node of every peer cluster in the business continuity cluster.
2 Delete the Identity Manager 2.x drivers and driver sets on the Identity Manager node of every
peer cluster.
3 Perform a rolling cluster upgrade from NetWare 6.5 SP6 to SP8 for all nodes in a peer cluster:
3a Issue the cluster leave command on one node in the cluster.
3b Upgrade NetWare 6.5 from version SP6 to SP8, and apply the latest patches for NetWare
and Novell Cluster Services.
You can perform a script upgrade by using NWCONFIG.
For information, see “3.0 Upgrading to OES 2 SP1 NetWare” (http://www.novell.com/
documentation/oes2/inst_oes_nw/data/b7211uh.html) in the OES 2 SP1: NetWare
Installation Guide.
3c If the node is running iManager, update the storage related plug-ins for iManager.
After upgrading to NetWare 6.5 SP8, iManager no longer displays the roles for Clusters, Storage, and DirXML. The storage related plug-ins require special handling because some storage features were reorganized in the NetWare 6.5 SP8 release. For information, see
“Storage Related Plug-Ins Must Be Uninstalled” (http://www.novell.com/documentation/ oes2/oes_readme/data/bi59826.html#biv1r9v) in the OES 2 SP1 Readme (http:// www.novell.com/documentation/oes2/oes_readme/data/readme.html).
NOTE: The DirXML plug-ins will be replaced by Identity Manager 3.5.1 plug-ins in
Step 5 on page 62.
3c1 In iManager, uninstall the old iManager plug-ins for Novell Cluster Services (ncsmgmt.npm), NSS (nssmgmt.npm), Archive and Version Services (arkmgmt.npm), and Storage Management (storagemgmt.npm).
3c2 In iManager, install the new set of storage-related plug-ins for iManager 2.7.2 from
the NetWare 6.5 SP8 installation media:
Novell AFP (afpmgmt.npm)
Novell Archive and Version Services (arkmgmt.npm)
Novell CIFS (cifsmgmt.npm)
Novell Cluster Services (ncsmgmt.npm)
Novell Distributed File Services (dfsmgmt.npm)
Novell Storage Services (nssmgmt.npm)
Storage Management (storagemgmt.npm)
3c3 Stop Tomcat 5 by issuing the tc5stop command.
3c4 Reset Apache 2 by using the ap2webrs command.
3c5 Start Tomcat 5 by using the tomcat command.
3c6 Wait at least 5 minutes for the iManager changes to take effect, then restart iManager.
3c7 In iManager, verify that BCC tasks are available in the Clusters role.
3d Repeat Step 3a through Step 3c7 for each node in the cluster.
4 On each node in the cluster, upgrade Business Continuity Clustering from version 1.1 SP1 to
1.1 SP2 by installing BCC 1.1 SP2.
IMPORTANT: Do not start BCC at this time.
For instructions, see Chapter 4, “Installing Business Continuity Clustering,” on page 37.
5 On the Identity Manager node in the cluster, upgrade Identity Manager from version 2.x to
3.5.1.
For information, see “Upgrading” (http://www.novell.com/documentation/idm35/install/data/
ampxjxi.html) in the Identity Manager 3.5.1 Installation Guide.
5a Before you begin, make sure that the Identity Manager node meets the 3.5.1 upgrade
requirements.
5b Upgrade Identity Manager from version 2.x to 3.5.1.
IMPORTANT: Do not start the drivers yet.
The upgrade updates the Identity Manager software and its plug-ins for iManager 2.7.2 on the same node.
5c Stop Tomcat 5 by issuing the tc5stop command.
5d Reset Apache 2 by using the ap2webrs command.
5e Start Tomcat 5 by using the tomcat command.
5f Wait at least 5 minutes for the iManager changes to take effect, then restart iManager.
5g In iManager, verify that Identity Manager is available.
IMPORTANT: In iManager, the Identity Manager plug-ins are displayed as version 3.6. The 3.6 version functions properly with Identity Manager 3.5.1.
6 Issue the cluster join command on each node in the cluster.
7 Issue the ldbcc.ncf command on each node to have them join the BCC.
8 Repeat Step 3 through Step 7 for each peer cluster.
9 Verify that Novell Cluster Services and Business Continuity Clustering appear to be
functioning correctly by migrating a BCC enabled resource between peer clusters.
10 Create the new BCC driver sets for Identity Manager.
11 On the Identity Manager node in every peer cluster, create the new Identity Manager drivers
with the new BCC 1.1 SP2 templates.
IMPORTANT: Do not create a new partition when creating the driver set.
As a guide, you can use the information from Step 1 and Step 5b in Section 5.3.1, “Upgrading
the BCC Cluster from 1.0 to 1.1 SP1,” on page 58.
12 Start the new Identity Manager drivers for each peer cluster.
13 Verify that Novell Cluster Services and BCC 1.1 SP2 appear to be functioning correctly by
performing the following tests:
13a Create a new BCC enabled pool resource, BCC migrate it between peer clusters, and
verify that it migrates correctly.
13b Make a load script change (for example, add a space character and save the change) to an
existing BCC resource that was created with 1.0, allow the revised load script to synchronize, then verify that the load script was updated at all eDirectory locations.
13c BCC migrate the pool resource that you modified in Step 13b between peer clusters, and
verify that it migrates correctly and that the Identity Manager drivers are synchronizing.
13d Check all SAN scripts to ensure that they will perform the desired functions.
5.4 Upgrading Clusters from BCC 1.1 SP1 to SP2 for NetWare
Use the procedure in this section to upgrade from BCC 1.1 SP1 to BCC 1.1 SP2:
Initial Configuration: NetWare 6.5 SP6, BCC 1.1 SP1 for NetWare, Identity Manager 2.x or 3.0.x
Final Configuration: NetWare 6.5 SP8, BCC 1.1 SP2 for NetWare, Identity Manager 3.5.1
You must perform the upgrade sequence on every node in each peer cluster that is part of your existing business continuity cluster.
IMPORTANT: Before you begin, review Section 4.1, “Requirements for BCC 1.1 SP2 for
NetWare,” on page 37.
To upgrade BCC 1.1 from SP1 to SP2, perform the following tasks:
Section 5.4.1, “Upgrading NetWare and BCC on the Clusters,” on page 63
Section 5.4.2, “Authorizing the BCC Administrator User,” on page 64
Section 5.4.3, “Upgrading Identity Manager,” on page 65
Section 5.4.4, “Deleting and Re-Creating the BCC Driver Sets and Drivers,” on page 65
Section 5.4.5, “Verifying the BCC Upgrade,” on page 66
5.4.1 Upgrading NetWare and BCC on the Clusters
Perform a rolling cluster upgrade from NetWare 6.5 SP6 to SP8 for all nodes in a peer cluster:
1 Stop the Identity Manager drivers from running in the Identity Manager node of every peer
cluster in the business continuity cluster.
2 Issue the cluster leave command on one node in the cluster.
3 Upgrade NetWare 6.5 from version SP6 to SP8, and apply the latest patches for NetWare and
Novell Cluster Services.
You can perform a script upgrade by using NWCONFIG.
For information, see “3.0 Upgrading to OES 2 SP1 NetWare” (http://www.novell.com/
documentation/oes2/inst_oes_nw/data/b7211uh.html) in the OES 2 SP1: NetWare Installation
Guide.
4 If the node is running iManager, update the storage related plug-ins for iManager.
After upgrading to NetWare 6.5 SP8, iManager no longer displays the roles for Clusters, Storage, and DirXML. The storage related plug-ins require special handling because some storage features were reorganized in the NetWare 6.5 SP8 release. For information, see
“Storage Related Plug-Ins Must Be Uninstalled” (http://www.novell.com/documentation/oes2/ oes_readme/data/bi59826.html#biv1r9v) in the OES 2 SP1 Readme (http://www.novell.com/ documentation/oes2/oes_readme/data/readme.html).
NOTE: The DirXML plug-ins will be replaced by Identity Manager 3.5.1 plug-ins in
Section 5.4.3, “Upgrading Identity Manager,” on page 65.
4a In iManager, uninstall the old iManager plug-ins for Novell Cluster Services (ncsmgmt.npm), NSS (nssmgmt.npm), Archive and Version Services (arkmgmt.npm), and Storage Management (storagemgmt.npm).
4b In iManager, install the new set of storage-related plug-ins for iManager 2.7.2 from the NetWare 6.5 SP8 installation media:
Novell AFP (afpmgmt.npm)
Novell Archive and Version Services (arkmgmt.npm)
Novell CIFS (cifsmgmt.npm)
Novell Cluster Services (ncsmgmt.npm)
Novell Distributed File Services (dfsmgmt.npm)
Novell Storage Services (nssmgmt.npm)
Storage Management (storagemgmt.npm)
4c Stop Tomcat 5 by issuing the tc5stop command.
4d Reset Apache 2 by using the ap2webrs command.
4e Start Tomcat 5 by using the tomcat command.
4f Wait at least 5 minutes for the iManager changes to take effect, then restart iManager.
4g In iManager, verify that BCC tasks are available in the Clusters role.
5 Repeat Step 2 through Step 4g for each node in the cluster.
6 Issue the cluster join command on each node in the cluster.
7 On each node in the cluster, upgrade Business Continuity Clustering from version 1.1 SP1 to
1.1 SP2 by installing BCC 1.1 SP2.
IMPORTANT: Do not start BCC at this time.
For instructions, see Chapter 4, “Installing Business Continuity Clustering,” on page 37.
8 Issue the ldbcc.ncf command on each node to have them join the BCC.
9 Repeat Step 2 through Step 8 for each peer cluster in your business continuity cluster.
5.4.2 Authorizing the BCC Administrator User
The BCC Administrator user must be a trustee of the Cluster objects in your BCC, and have at least Read and Write rights to the All Attributes Rights property.
1 Start your Internet browser and enter the URL for iManager.
The URL is http://server_ip_address/nps/iManager.html. Replace server_ip_address with the IP address or DNS name of the server that has iManager and the Identity Manager preconfigured templates for iManager installed.
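For example, if iManager is running on a server whose IP address is 192.168.1.10 (a placeholder address), the URL is:

http://192.168.1.10/nps/iManager.html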
2 Specify your username and password, specify the tree where you want to log in, then click
Login.
3 In the left column, click Rights, then click the Modify Trustees link.
4 Specify the Cluster object name, or browse and select it, then click OK.
5 If the BCC Administrator user is not listed as a trustee, click the Add Trustee button, browse
and select the User object, then click OK.
6 Click Assigned Rights for the BCC Administrator user, then ensure that the Read and Write
check boxes are selected for the All Attributes Rights property.
7 Click Done to save your changes.
8 Repeat Step 3 through Step 7 for the Cluster object of every other cluster in the business continuity cluster.
5.4.3 Upgrading Identity Manager
On the Identity Manager node in the cluster, upgrade to Identity Manager 3.5.1.
Before you begin:
Review the BCC 1.1 SP2 configuration requirements in Section 4.1.7, “Identity Manager 3.5.1
Bundle Edition,” on page 41.
Make sure that the Identity Manager node meets the 3.5.1 upgrade requirements. For
information, see “Upgrading” (http://www.novell.com/documentation/idm35/install/data/
ampxjxi.html) in the Identity Manager 3.5.1 Installation Guide.
For information about installing or upgrading Identity Manager, see the Identity Manager 3.5.1
Installation Guide (http://www.novell.com/documentation/idm35/install/data/front.html).
1 Upgrade Identity Manager to version 3.5.1.
IMPORTANT: Do not start the drivers yet.
The upgrade updates the Identity Manager software and its plug-ins for iManager 2.7.2 on the same node.
2 Stop Tomcat 5 by issuing the tc5stop command.
3 Reset Apache 2 by using the ap2webrs command.
4 Start Tomcat 5 by using the tomcat command.
5 Wait at least 5 minutes for the iManager changes to take effect, then restart iManager.
6 In iManager, verify that Identity Manager is available.
IMPORTANT: In iManager, the Identity Manager plug-ins are displayed as version 3.6. The
3.6 version functions properly with Identity Manager 3.5.1.
5.4.4 Deleting and Re-Creating the BCC Driver Sets and Drivers
After completing the upgrade procedures for every node in all clusters of the business continuity cluster, you must delete and re-create the Identity Manager driver sets and drivers with the BCC 1.1 SP2 templates.
On the Identity Manager node in every peer cluster, do the following:
1 Delete the Identity Manager 2.x or 3.0.x drivers and driver sets on the Identity Manager node of
every peer cluster.
2 Create the new BCC driver sets for Identity Manager.
3 Create the new Identity Manager drivers with the new BCC 1.1 SP2 templates.
4 Start the new Identity Manager drivers for each peer cluster.
5.4.5 Verifying the BCC Upgrade
Verify that Novell Cluster Services and BCC 1.1 SP2 appear to be functioning correctly by performing the following tests:
1 Create a new BCC enabled pool resource, BCC migrate it between peer clusters, and verify that
it migrates correctly.
2 Make a load script change (for example, add a space character and save the change) to an existing BCC enabled resource, allow the revised load script to synchronize, then verify that the load script was updated at all eDirectory locations.
3 BCC migrate the pool resource that you modified in Step 2 between peer clusters, and verify that it migrates correctly and that the Identity Manager drivers are synchronizing.
4 Check all SAN scripts to ensure that they will perform the desired functions.
6 Configuring Business Continuity Clustering Software
After you have installed and configured Identity Manager and the Novell® Business Continuity Clustering software, and you have configured file system mirroring, you need to set up the BCC software.
Section 6.1, “Configuring Identity Manager Drivers for the Business Continuity Cluster,” on
page 67
Section 6.2, “Configuring Clusters for Business Continuity,” on page 73
Section 6.3, “BCC-Enabling Cluster Resources,” on page 78
6.1 Configuring Identity Manager Drivers for the Business Continuity Cluster
The Identity Manager preconfigured templates for iManager that were installed when you ran the Novell Business Continuity Clustering installation must be configured so you can properly manage your business continuity cluster. The preconfigured templates include the following:
Cluster Resource Synchronization: A set of policies, filters, and objects that synchronize
cluster resource information between any two of the peer clusters. This template must always be configured, even in a single-tree business continuity cluster.
User Object Synchronization: A set of policies, filters, and objects that synchronize User objects between any two trees (or partitions) that contain the clusters in the business continuity cluster. Typically, this template is used to configure drivers when the clusters in your business continuity cluster are in different eDirectory™ trees. You might also need to set up User Object Synchronization drivers between clusters if you put User objects in a different eDirectory partition than is used for the Cluster objects; however, this is not a recommended configuration. See Appendix B, “Implementing a Multiple-Tree BCC,” on page 127 for more information about implementing BCC between two trees.
The Identity Manager engine and eDirectory driver must be installed on one node in each cluster. The node where Identity Manager is installed must have an eDirectory full replica with at least read/ write access to all eDirectory objects that will be synchronized between clusters. For information about the full replica requirements, see Section 4.1.5, “Novell eDirectory 8.8,” on page 40.
Identity Manager requires a credential that allows you to use drivers beyond an evaluation period. The credential can be found in the BCC license. In the Identity Manager interface in iManager, enter the credential for each driver that you create for BCC. You must also enter the credential for the matching driver that is installed in a peer cluster. You can enter the credential, or put the credential in a file that you point to.
Section 6.1.1, “Configuring the Identity Manager Drivers and Templates,” on page 68
Section 6.1.2, “Creating SSL Certificates,” on page 70
Section 6.1.3, “Synchronizing Identity Manager Drivers,” on page 70
Section 6.1.4, “Preventing Identity Manager Synchronization Loops,” on page 71
6.1.1 Configuring the Identity Manager Drivers and Templates
1 Start your Internet browser and enter the URL for iManager.
The URL is http://server_ip_address/nps/iManager.html. Replace server_ip_address with the IP address or DNS name of the server that has iManager and the Identity Manager preconfigured templates for iManager installed.
2 Specify your username and password, specify the tree where you want to log in, then click
Login.
3 In the left column, click Identity Manager Utilities, then click the New Driver link.
4 Choose to place the new driver in a new driver set, then click Next.
Both the User Object Synchronization driver and the Cluster Resource Synchronization driver can be added to the same driver set.
5 Specify the driver set name, context, and the server that the driver set will be associated with.
The server is the same server where you installed the Identity Manager engine and eDirectory driver.
6 Choose to not create a new partition for the driver set, then click Next.
7 Choose to import a preconfigured driver from the server, select the Identity Manager
preconfigured template for cluster resource synchronization, then click Next.
The template name is BCCClusterResourceSynchronization.XML.
8 Fill in the values on the wizard page as prompted, then click Next.
Each field contains an example of the type of information that should go into the field. Descriptions of the information required are also included with each field.
Driver name: Specify a unique name for this driver to identify its function. For example,
Cluster1SyncCluster2. If you use both preconfigured templates, you must specify different driver names for each driver template.
Name of SSL Certificate: If you do not have an SSL certificate, leave this value set to the
default. The certificate is created later in the configuration process. See “Creating SSL
Certificates” on page 70 for instructions on creating SSL certificates.
In a single tree configuration, if you specify the SSL CertificateDNS certificate that was created when you installed OES 2 on the Identity Manager node, you do not need to create an additional SSL certificate later.
DNS name of other IDM node: Specify the DNS name or IP address of the Identity
Manager server in the other cluster.
Port number for this driver: If you have a business continuity cluster that consists of
three or four clusters, you must specify unique port numbers for each driver template set. The default port number is 2002.
You must specify the same port number for the same template in the other cluster. For example, if you specify 2003 as the port number for the resource synchronization template, you must specify 2003 as the port number for the resource synchronization template in the peer driver for the other cluster.
Full Distinguished Name (DN) of the cluster this driver services: For example,
Cluster1.siteA.Novell.
Fully Distinguished Name (DN) of the landing zone container: Specify the context of
the container where the cluster pool and volume objects in the other cluster are placed when they are synchronized to this cluster.
This container is referred to as the landing zone. The NCP™ server objects for the virtual server of a BCC enabled resource are also placed in the landing zone.
IMPORTANT: The context must already exist and must be specified using dot format without the tree name. For example, siteA.Novell.
Prior to performing this step, you could create a separate container in eDirectory specifically for these cluster pool and volume objects. You would then specify the context of the new container in this step.
The IDM Driver object must have sufficient rights to any object it reads or writes in the following containers:
The Identity Manager driver set container.
The container where the Cluster object resides.
The container where the Server objects reside.
If server objects reside in multiple containers, this must be a container high enough in the tree to be above all containers that contain server objects. The best practice is to have all server objects in one container.
The container where the cluster pool and volume objects are placed when they are
synchronized to this cluster.
This container is referred to as the landing zone. The NCP server objects for the virtual server of a BCC enabled resource are also placed in the landing zone.
You can do this by making the IDM Driver object security equivalent to another User object with those rights. See Step 9.
IMPORTANT: If you choose to include User object synchronization, exclude the Admin User object from being synchronized. See Step 7 in Section B.5, “Synchronizing the BCC-Specific
Identity Manager Drivers,” on page 130 for information about synchronizing User objects
when adding new clusters to the business continuity cluster.
9 Make the IDM Driver object security equivalent to an existing User object:
9a Click Define Security Equivalences, then click Add.
9b Browse to and select the desired User object, then click OK.
9c Click Next, then click Finish.
10 Repeat Step 1 through Step 9 above on the other clusters in your business continuity cluster.
This includes creating a new driver and driver set for each cluster.
IMPORTANT: If you have upgraded to Identity Manager 3 and click either the cluster resource synchronization driver or the user object synchronization driver, a message is displayed prompting you to convert the driver to a new architecture. Click OK to convert the driver.
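For example, a completed Cluster Resource Synchronization driver for a two-cluster BCC might use values like the following (the names and address shown are hypothetical):

Driver name: Cluster1SyncCluster2
Name of SSL Certificate: SSL CertificateDNS
DNS name of other IDM node: 10.1.2.10
Port number for this driver: 2002
Full Distinguished Name (DN) of the cluster this driver services: Cluster1.siteA.Novell
Fully Distinguished Name (DN) of the landing zone container: siteA.Novell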
6.1.2 Creating SSL Certificates
It is recommended that you create an SSL certificate for the Cluster Resource Synchronization driver. Creating one certificate creates the certificate for a driver pair. For example, creating an SSL certificate for the Cluster Resource Synchronization driver also creates the certificate for the Cluster Resource Synchronization drivers on the other clusters.
To create an SSL certificate:
1 Start your Internet browser and enter the URL for iManager.
The URL is http://server_ip_address/nps/iManager.html. Replace server_ip_address with the IP address or DNS name of the server that has iManager and the Identity Manager preconfigured templates for iManager installed.
2 Specify your username and password, specify the tree where you want to log in, then click
Login.
3 In the left column, click Identity Manager Utilities, then click NDS-to-NDS Driver Certificates.
4 Specify the requested driver information for this cluster, then click Next.
You must specify the driver name (including the context) you supplied in Step 8 on page 68 for this cluster. Use the following format when specifying the driver name:
DriverName.DriverSet.OrganizationalUnit.OrganizationName
Ensure that there are no spaces (beginning or end) in the specified context, and do not use the cn=DriverName.ou=OrganizationalUnitName.o=OrganizationName format.
5 Specify the requested driver information for the driver in the other cluster.
Use the same format specified in Step 4.
6 Click Next, then click Finish.
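For example, using the hypothetical driver and container names from Section 6.1.1 (where BCCDriverSet is a placeholder driver set name), the driver name is entered as:

Cluster1SyncCluster2.BCCDriverSet.siteA.Novell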
6.1.3 Synchronizing Identity Manager Drivers
If you are adding a new cluster to an existing business continuity cluster, you must synchronize the BCC-specific Identity Manager drivers after you have created the BCC-specific Identity Manager drivers and SSL certificates. If the BCC-specific Identity Manager drivers are not synchronized, clusters cannot be enabled for business continuity. Synchronizing the Identity Manager drivers is only necessary when you are adding a new cluster to an existing business continuity cluster.
To synchronize the BCC-specific Identity Manager drivers:
1 Start your Internet browser and enter the URL for iManager.
The URL is http://server_ip_address/nps/iManager.html. Replace server_ip_address with the IP address or DNS name of the server that has iManager and the Identity Manager preconfigured templates for iManager installed.
2 Specify your username and password, specify the tree where you want to log in, then click
Login.
3 In the left column, click Identity Manager, then click the Identity Manager Overview link.
4 Search for and find the BCC driver set.
5 Click the red Cluster Sync icon for the driver you want to synchronize, then click the Migrate
from eDirectory button.
6 Click Add, browse to and select the Cluster object for the new cluster you are adding to the
business continuity cluster, then click OK.
Selecting the Cluster object causes the BCC-specific Identity Manager drivers to synchronize.
If you have multiple eDirectory trees in your BCC, see Section B.5, “Synchronizing the BCC-
Specific Identity Manager Drivers,” on page 130.
6.1.4 Preventing Identity Manager Synchronization Loops
If you have three or more clusters in your business continuity cluster, you should set up synchronization for the User objects and Cluster Resource objects in a manner that prevents Identity Manager synchronization loops. Identity Manager synchronization loops can cause excessive network traffic and slow server communication and performance.
For example, in a three-cluster business continuity cluster, an Identity Manager synchronization loop occurs when Cluster One is configured to synchronize with Cluster Two, Cluster Two is configured to synchronize with Cluster Three, and Cluster Three is configured to synchronize back to Cluster One. This is illustrated in Figure 6-1 below.
Figure 6-1 Three-Cluster Identity Manager Synchronization Loop
(Diagram: Cluster One, Cluster Two, and Cluster Three are connected by IDM Sync links that form a loop: One to Two, Two to Three, and Three back to One.)
A preferred method is to make Cluster One an Identity Manager synchronization master in which Cluster One synchronizes with Cluster Two, and Cluster Two and Cluster Three both synchronize with Cluster One. This is illustrated in Figure 6-2 below.
Figure 6-2 Three-Cluster Identity Manager Synchronization Master
(Diagram: Cluster Two and Cluster Three each have an IDM Sync link with Cluster One, which acts as the synchronization master.)
You could also have Cluster One synchronize with Cluster Two, Cluster Two synchronize with Cluster Three, and Cluster Three synchronize back to Cluster Two as illustrated in Figure 6-3.
Figure 6-3 Alternate Three-Cluster Identity Manager Synchronization Scenario
(Diagram: Cluster One has an IDM Sync link with Cluster Two, and Cluster Two has an IDM Sync link with Cluster Three.)
To change your BCC synchronization scenario:
1 In the Connections section of the Business Continuity Cluster Properties page, select one or
more peer clusters that you want a cluster to synchronize to, then click Edit.
In order for a cluster to appear in the list of possible peer clusters, that cluster must have the following:
Business Continuity Clustering software installed.
Identity Manager installed.
The BCC-specific Identity Manager drivers configured and running.
Be enabled for business continuity.
6.2 Configuring Clusters for Business Continuity
The following tasks must be performed on each separate Novell Cluster Services cluster that you want to be part of the business continuity cluster:
Section 6.2.1, “Enabling Clusters for Business Continuity,” on page 73
Section 6.2.2, “Adding Cluster Peer Credentials,” on page 74
Section 6.2.3, “Adding Search-and-Replace Values to the Resource Replacement Script,” on
page 74
Section 6.2.4, “Adding SAN Management Configuration Information,” on page 75
Section 6.2.5, “Verifying BCC Administrator User Trustee Rights and Credentials,” on page 78
NOTE: Identity Manager must be configured and running before configuring clusters for business continuity.
6.2.1 Enabling Clusters for Business Continuity
If you want to enable a cluster to fail over selected resources or all cluster resources to another cluster, you must enable business continuity on that cluster.
1 Start your Internet browser and enter the URL for iManager.
The URL is http://server_ip_address/nps/iManager.html. Replace server_ip_address with the IP address or DNS name of the server that has iManager and the Identity Manager preconfigured templates for iManager installed. This server should be in the same eDirectory tree as the cluster you are enabling for business continuity.
2 Specify your username and password, specify the tree where you want to log in, then click
Login.
3 Ensure that the BCC-specific Identity Manager drivers are running:
3a In the left column, click Identity Manager, and then click the Identity Manager Overview
link.
3b Search the eDirectory Container or tree for the BCC-specific Identity Manager drivers.
3c For each driver, click the upper right corner of the driver icon to see if a driver is started or
stopped.
3d If the driver is stopped, start it by selecting Start.
4 In the left column, click Clusters, then click the Cluster Options link.
5 Specify a cluster name, or browse and select one.
6 Click the Properties button, then click the Business Continuity tab.
7 Ensure that the Enable Business Continuity Features check box is selected.
8 Repeat Step 1 through Step 7 for the other cluster that this cluster will migrate resources to.
9 Continue with Adding Cluster Peer Credentials.
6.2.2 Adding Cluster Peer Credentials
In order for one cluster to connect to a second cluster, the first cluster must be able to authenticate to the second cluster. To make this possible, you must add the username and password of the user that the selected cluster will use to connect to the selected peer cluster.
IMPORTANT: In order to add or change cluster peer credentials, you must access iManager on a server that is in the same eDirectory tree as the cluster you are adding or changing peer credentials for.
1 In the Connections section of the Business Continuity Cluster Properties page, select the peer
cluster, then click Edit.
In order for a cluster to appear in the list of possible peer clusters, the cluster must have the following:
Business Continuity Clustering software installed.
Identity Manager installed and running.
The BCC-specific Identity Manager drivers configured and running.
Be enabled for business continuity.
2 Add the administrator username and password that the selected cluster will use to connect to
the selected peer cluster.
When adding the administrator username, do not include the context for the user. For example, use bccadmin instead of bccadmin.prv.novell.
Rather than using the Admin user to administer your BCC, you should consider creating another user with sufficient rights to the appropriate contexts in your eDirectory tree to manage your BCC. For information, see Section 4.3, “Configuring a BCC Administrator User,” on page 47.
3 Repeat Step 1 and Step 2 for the other cluster that this cluster will migrate resources to.
4 Continue with Adding Search-and-Replace Values to the Resource Replacement Script.
6.2.3 Adding Search-and-Replace Values to the Resource Replacement Script
To enable a resource for business continuity, certain values (such as IP addresses) specified in resource load and unload scripts need to be changed in corresponding resources in the other clusters. You need to add the search-and-replace strings that are used to transform cluster resource load and unload scripts from this cluster to another cluster. Replacement scripts are for inbound changes to scripts for objects being synchronized from other clusters, not outbound.
TIP: You can see the IP addresses that are currently assigned to resources by entering the display secondary ipaddress command at the NetWare server console of cluster servers.
The search-and-replace data is cluster-specific, and it is not synchronized via Identity Manager between the clusters in the business continuity cluster.
To add resource script search-and-replace values:
1 In iManager, click Clusters > Cluster Options, select the Cluster object, click Properties, then select the Business Continuity tab.
2 In the Resource Script Replacements section of the Business Continuity Cluster Properties
page, click New.
3 Add the desired search-and-replace values.
The search-and-replace values you specify here apply to all resources in the cluster that have been enabled for business continuity.
For example, if you specify 10.1.1.1 as the search value and 192.168.1.1 as the replace value, the resource with the 10.1.1.1 IP address in its scripts is searched for in the primary cluster and, if found, the 192.168.1.1 IP address is assigned to the corresponding resource in the secondary cluster.
You can also specify global search-and-replace addresses for multiple resources in one line. This can be used only if the last digits in the IP addresses are the same in both clusters. For example, if you specify 10.1.1. as the search value and 192.168.1. as the replace value, the software finds the 10.1.1.1, 10.1.1.2, 10.1.1.3 and 10.1.1.4 addresses, and replaces them with the 192.168.1.1, 192.168.1.2, 192.168.1.3, and 192.168.1.4 addresses, respectively.
IMPORTANT: Make sure to use a trailing dot in the search-and-replace value. If a trailing dot is not used, 10.1.1 could be replaced with an IP value such as 192.168.100 instead of
192.168.1.
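For example, the following value pairs show the difference (the addresses are placeholders):

Search value: 10.1.1.   Replace value: 192.168.1.   (trailing dot; matches only addresses in the 10.1.1.x range)
Search value: 10.1.1    Replace value: 192.168.1    (no trailing dot; also matches 10.1.10.x, 10.1.11.x, and similar ranges)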
4 (Optional) Select the Use Regular Expressions check box to use wildcard characters in your
search-and-replace values. The following links provide information on regular expressions and wildcard characters:
Regular Expressions (http://www.opengroup.org/onlinepubs/007908799/xbd/re.html)
Regular-Expressions.info (http://www.regular-expressions.info/)
Wikipedia (http://en.wikipedia.org/wiki/Regular_expression)
oreilly.com (http://www.oreilly.com/catalog/regex/)
You can find additional information on regular expressions and wildcard characters by searching the Web.
5 Click Apply to save your changes.
Clicking OK does not apply the changes to the directory.
6 Verify that the change has been synchronized with the peer clusters by the Identity Vault.
7 Continue with Section 6.2.4, “Adding SAN Management Configuration Information,” on
page 75.
6.2.4 Adding SAN Management Configuration Information
You can create scripts and add commands that are specific to your SAN hardware. These scripts and commands might be needed to promote mirrored LUNs to primary on the cluster where the pool resource is being migrated to, or demote mirrored LUNs to secondary on the cluster where the pool resource is being migrated from.
You can also add commands and Perl scripts to the resource scripts to call other scripts. Any command that can be run at the NetWare server console can be used. The scripts or commands you add are stored in eDirectory. If you add commands to call outside scripts, those scripts must exist on every server in the cluster.
IMPORTANT: Scripts are not synchronized by Identity Manager.
To add SAN management configuration information:
1 Start your Internet browser and enter the URL for iManager.
The URL is http://server_ip_address/nps/iManager.html. Replace server_ip_address with the IP address or DNS name of the server that has iManager and the Identity Manager preconfigured templates for iManager installed.
2 Specify your username and password, specify the tree where you want to log in, then click
Login.
3 In the left column, click Clusters, then click the Cluster Options link.
4 Specify a cluster name, or browse and select one.
5 Under Cluster Objects, select a cluster resource that is enabled for business continuity, then
click Details.
Cluster resources that are enabled for business continuity have the BCC label on the resource type icon.
6 Click the Business Continuity tab, then click SAN Management.
7 Create BCC SAN management load and unload scripts:
7a Under BCC Load Scripts, click New to bring up a page that lets you create a script to
promote mirrored LUNs on a cluster.
You can also delete a script, edit a script by clicking Details, or change the order that load scripts execute by clicking the Move Up and Move Down links.
7b Specify the values on the SAN Management Script Details page.
Descriptions of the information required for the page fields and options include:
Name and Description: Specify a name, and if desired, a description of the script
you are creating.
CIMOM IP/DNS: If you are not using a template and if you selected the CIM Client
check box on the previous page, specify the IP address or DNS name for your SAN. This is the IP address or DNS name that is used for SAN management.
Namespace: If you selected the CIM Client check box on the previous page, accept
the default namespace, or specify a different namespace for your SAN.
Namespace determines which models and classes are used with your SAN. Consult your SAN documentation to determine which namespace is required for your SAN.
Username and Password: If you selected the CIM Client check box on the previous
page, specify the username and password that is used to connect to and manage your SAN.
Port: If you selected the CIM Client check box on the previous page, accept the
default port number or specify a different port number. This is the port number that CIMOM (your SAN manager) uses. Consult your SAN documentation to determine which port number you should use.
Secure: If you selected the CIM Client check box on the previous page, select or deselect the Secure check box depending on whether you want SAN management communication to be secure (HTTPS) or non-secure (HTTP).
Script Parameters: If desired, specify variables and values for the variables that are
used in the SAN management script.
To specify a variable, click New, then provide the variable name and value in the fields provided. Click OK to save your entries. You can specify additional variables by clicking New again and providing variable names and values. You can also edit and delete existing script parameters by clicking the applicable link.
Script Parameters Text Box: Use this text box to add script commands to the script
you are creating.
These script commands are specific to your SAN hardware. You can add a Perl script, or any commands that can be run on Linux or NetWare (depending on your platform). If you add commands to call outside scripts, those scripts must exist on every server in the cluster. A sample script is shown after this procedure.
CIM Enabled: Select this box if your SAN supports SMI-S and you did not select
the CIM Client check box on the previous page. This causes the CIM-specific fields to become active on this page.
Synchronous: If this check box is not selected, multiple scripts can be run
concurrently. Selecting the box causes scripts to run individually, one after another. Most SAN vendors do not support running multiple scripts concurrently.
Edit Flags: This is an advanced feature, and should not be used except under the
direction of Novell Support.
7c Click Apply and OK on the Script Details page, then click OK on the Resource Properties
page to save your script changes.
IMPORTANT: After clicking Apply and OK on the Script Details page, you are returned to the Resource Properties page (with the Business Continuity tab selected). If you do not click OK on the Resource Properties page, your script changes are not saved.
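As an illustration only, a minimal SAN management load script might look like the following Perl sketch. The sancli command and the LUN number are hypothetical placeholders; substitute the management commands that your SAN vendor actually provides.

# Promote the mirrored LUN to primary before the BCC-enabled pool
# resource is brought online on this cluster.
my $lun = 42;                                      # placeholder LUN number
my $status = system("sancli promote --lun $lun");  # hypothetical vendor CLI
if ($status != 0) {
    die "Failed to promote LUN $lun to primary\n";
}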
IMPORTANT: The CIMOM daemons on all nodes in the business continuity cluster should be configured to bind to all IP addresses on the server.
Business Continuity Clustering connects to the CIMOM by using the master IP address for the cluster. Because the master IP address moves to other nodes during a failover or migration, the CIMOM must be configured to bind to all IP addresses (secondary and primary), rather than just the primary IP address of the host.
You can do this by editing the openwbem.conf file. See “Changing the OpenWBEM CIMOM Configuration” (http://www.novell.com/documentation/oes/cimom/data/bv3wn7m.html#bv3wn7m) in the OpenWBEM Services Administration Guide for OES for more information.
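For example, a setting similar to the following in openwbem.conf binds the CIMOM to all IP addresses; verify the exact option name against the OpenWBEM guide cited above:

http_server.listen_addresses = 0.0.0.0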
6.2.5 Verifying BCC Administrator User Trustee Rights and Credentials
You must ensure that the user who manages your BCC (BCC Administrator user) is a trustee of the Cluster objects and has at least Read and Write eDirectory rights to the All Attributes Rights property. For instructions, see “Assigning Trustee Rights for the BCC Administrator User to the
Cluster Objects” on page 48.
You must also ensure that the BCC Administrator user has file system rights to the _ADMIN:\Novell\Cluster directory of all nodes in your BCC. This is necessary because the _ADMIN volume is virtual, and is created each time the server starts. For this reason, you cannot assign eDirectory trustee rights to the _ADMIN volume. For instructions, see “Assigning Trustee Rights for the BCC Administrator User to the _ADMIN Volume” on page 48.
You must also ensure that the BCC Administrator user has Read, Write, Create, Erase, Modify, and File Scan access rights to the sys:/tmp directory on every node in your clusters. For instructions, see “Assigning Trustee Rights for the BCC Administrator User to the sys:\tmp Directory” on page 49.
6.3 BCC-Enabling Cluster Resources
Cluster resources can be configured for business continuity after they are created. Configuring a resource for business continuity consists of enabling that resource for business continuity, adding load and unload script search-and-replace data specific to the resource, and selecting peer clusters for the resource.
IMPORTANT: In a business continuity cluster, you should have only one NSS pool for each LUN that could be failed over to another cluster. This is necessary because in a business continuity cluster, entire LUNs fail over to other clusters, rather than individual pools, which fail over to other nodes within a cluster.
A cluster-enabled NSS pool must contain at least one volume before its cluster resource can be enabled for business continuity. You get an error message if you attempt to enable the resource for business continuity if its NSS pool does not contain a volume.
Also, if you have encrypted NSS volumes in your BCC, then all clusters in that BCC must be in the same eDirectory tree. If not, then the clusters in the other eDirectory tree cannot decrypt the NSS volumes. This rule applies to both NetWare and Linux BCCs.
Section 6.3.1, “Enabling a Cluster Resource for Business Continuity,” on page 78
Section 6.3.2, “Adding Resource Script Search-and-Replace Values,” on page 79
Section 6.3.3, “Selecting Peer Clusters for the Resource,” on page 80
Section 6.3.4, “Adding SAN Array Mapping Information,” on page 81
6.3.1 Enabling a Cluster Resource for Business Continuity
Cluster resources must be enabled for business continuity on the primary cluster before they can be synchronized and appear as resources in the other clusters in the business continuity cluster. Enabling a cluster resource makes it possible for that cluster resource or cluster pool resource to be migrated to another cluster.
IMPORTANT: Although you can add search-and-replace data that is resource-specific after you enable a resource for business continuity, we recommend adding the search-and-replace data for the entire cluster before you enable resources for business continuity. See “Adding Search-and-Replace Values to the Resource Replacement Script” on page 74 for instructions on adding search-and-replace data for the entire cluster.
When you enable a resource for business continuity and that resource has been synchronized and appears in the other clusters, the preferred nodes for the other clusters are by default set to all nodes in the cluster. If you want to change the resource’s preferred nodes for other clusters in your BCC, you must do so manually. Changes to the preferred nodes list in the primary cluster do not automatically replicate to the preferred nodes lists for other clusters in your BCC.
1 (Conditional) If you are creating a new cluster resource or cluster pool resource, follow the
instructions for creating a cluster resource or cluster pool resource using iManager in the OES 2
SP2: Novell Cluster Services 1.8.5 for NetWare Administration Guide, then continue with
Step 2.
2 Enable a cluster resource or cluster pool resource for business continuity:
2a Start your Internet browser and enter the URL for iManager.
The URL is http://server_ip_address/nps/iManager.html. Replace server_ip_address with the IP address or DNS name of the server that has iManager and the Identity Manager preconfigured templates for iManager installed.
2b Specify your username and password, specify the tree where you want to log in, then click
Login.
2c In the left column, click Clusters, then click the Cluster Options link.
2d Specify a cluster name, or browse and select one.
2e Select the desired cluster resource from the list of Cluster objects.
2f Click the Details link, then click the Business Continuity tab.
3 Ensure that the Enable Business Continuity Features check box is selected.
4 Continue with Step 1 in the Adding Resource Script Search-and-Replace Values section.
6.3.2 Adding Resource Script Search-and-Replace Values
If you did not previously add search-and-replace data specific to the entire cluster, you must now add it for this resource.
IMPORTANT: Adding resource script search-and-replace values for the entire cluster is recommended rather than adding those values for individual cluster resources. You should contact Novell Support prior to adding search-and-replace values for individual cluster resources.
To enable a resource for business continuity, certain values (such as IP addresses, DNS names, and tree names) specified in resource load and unload scripts need to be changed in corresponding resources in the other clusters. You need to add the search-and-replace strings that are used to transform cluster resource load and unload scripts from this cluster to another cluster.
The search-and-replace data you add is resource-specific, and it is not synchronized via Identity Manager between the clusters in the business continuity cluster.
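For example, here is a hypothetical replacement pair (the addresses are illustrative, not values from your configuration). If the load script in this cluster assigns a secondary IP address on the 10.1.1.0 network and the peer cluster uses the 10.2.1.0 network, you could specify:

Search: 10.1.1
Replace: 10.2.1

When the script is transformed for the peer cluster, a line such as add secondary ipaddress 10.1.1.10 becomes add secondary ipaddress 10.2.1.10.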
To add resource script search-and-replace values specific to this resource:
1 In the Resource Replacement Script section of the page, click New.
If a resource has already been configured for business continuity, you can click Edit to change existing search-and-replace values or click Delete to delete them.
2 Add the desired search-and-replace values, then click OK.
The search-and-replace values you specify here apply only to the resource you are enabling for business continuity. If you want the search-and-replace values to apply to any or all cluster resources, add them to the entire cluster instead of just to a specific resource.
See “Adding Search-and-Replace Values to the Resource Replacement Script” on page 74 for more information on resource script search-and-replace values and adding those values to the entire cluster.
3 Do one of the following:
If this is an existing cluster resource, continue with Step 1 in the Selecting Peer Clusters
for the Resource section below.
If you are creating a new cluster resource, click Next, then continue with Step 1 in the
Selecting Peer Clusters for the Resource section.
You can select the Use Regular Expressions check box to use wildcard characters in your search-and-replace values. The following links provide information on regular expressions and wildcard characters:
Regular Expressions (http://www.opengroup.org/onlinepubs/007908799/xbd/re.html)
Regular-Expressions.info (http://www.regular-expressions.info/)
Wikipedia (http://en.wikipedia.org/wiki/Regular_expression)
oreilly.com (http://www.oreilly.com/catalog/regex/)
You can find additional information on regular expressions and wildcard characters by searching the Web.
IMPORTANT: If you change the resource-specific search-and-replace data after initially adding it, you must update the resource load or unload scripts in one of the other clusters by editing it and adding a space or a comment. This causes the script to be updated with the new search-and-replace data.
You could also update the IP address on the cluster protocols page in iManager to cause IP address search-and-replace values to be updated for both load and unload scripts. This might require you to go back and change the IP addresses specified in the resource load and unload scripts in the source cluster to their original values.
6.3.3 Selecting Peer Clusters for the Resource
Peer clusters are the other clusters that this cluster resource can be migrated to. The cluster or clusters that you select determine where the resource can be manually migrated. If you decide to migrate this resource to another cluster, you must migrate it to one of the clusters that has been selected.
1 Select the other clusters that this resource can be migrated to.
2 Do one of the following:
If you are creating a new non-pool cluster resource that contains a Reiser or Ext3 file
system, click Finish.
If this is an existing non-pool cluster resource that contains a Reiser or Ext3 file system,
click Apply.
If you are creating a new cluster pool resource, click Next, then add the SAN management
configuration information. For information, see “Adding SAN Management
Configuration Information” on page 75.
If this is an existing cluster pool resource, add the SAN management configuration
information. For information, see “Adding SAN Management Configuration Information”
on page 75.
6.3.4 Adding SAN Array Mapping Information
For information on adding SAN array mapping information, see “Adding SAN Management
Configuration Information” on page 75.
7 Managing a Business Continuity Cluster
This section can help you effectively manage a business continuity cluster with the Novell® Business Continuity Clustering software. It describes how to migrate cluster resources from one Novell Cluster Services™ cluster to another, to modify peer credentials for existing clusters, and to generate reports of the cluster configuration and status.
For information about using console commands to manage your business continuity cluster, see Appendix A, “Console Commands for BCC,” on page 123.
Section 7.1, “Migrating a Cluster Resource to a Peer Cluster,” on page 83
Section 7.2, “Changing Cluster Peer Credentials,” on page 84
Section 7.3, “Viewing the Current Status of a Business Continuity Cluster,” on page 85
Section 7.4, “Generating a Cluster Report,” on page 86
Section 7.5, “Disabling Business Continuity Cluster Resources,” on page 86
Section 7.6, “Resolving Business Continuity Cluster Failures,” on page 87
7.1 Migrating a Cluster Resource to a Peer Cluster
Although Novell Business Continuity Clustering provides an automatic failover feature that fails over resources between peer clusters, we recommend that you manually migrate cluster resources between the peer clusters instead. For information about configuring and using automatic failover for a business continuity cluster, see Appendix C, “Setting Up Auto-Failover,” on page 133.
Section 7.1.1, “Understanding BCC Resource Migration,” on page 83
Section 7.1.2, “Migrating Cluster Resources between Clusters,” on page 84
7.1.1 Understanding BCC Resource Migration
If the node where a resource is running fails, if the entire cluster fails, or if you just want to migrate the resource to another cluster, you can manually start the cluster resource on another cluster in the business continuity cluster. If the source cluster site fails, you must go to the destination cluster site to manually migrate or bring up resources at that site. Each resource starts on its preferred node on the destination cluster.
Migrating a pool resource to another cluster causes the following to happen:
1. If the source cluster can be contacted, the state of the resource is changed to offline.
2. The resource changes from primary to secondary on the source cluster.
3. Any SAN script that is associated with the pool resource is run.
4. On the destination cluster, the resource changes from secondary to primary so that it can be brought online.
A custom Perl script can be created for disk mapping on Fibre Channel SANs. The purpose of this script is to make the LUNs in the SAN available to the destination cluster. A reverse script is also created for testing purposes so pool resources can be migrated back to the source cluster.
5. The cluster scan for new devices command is executed on the destination cluster so that the cluster is aware of LUNs that are now available.
6. Resources are brought online and load on the most preferred node in the cluster.
TIP: You can use the cluster migrate command to start resources on nodes other than the preferred node on the destination cluster.
7. Resources appear as running and primary on the cluster where you have migrated them.
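For example, a sketch with hypothetical names: at the server console of a node in the destination cluster, you could enter

cluster migrate HOMES_SERVER NODE2

to start the HOMES_SERVER resource on the node named NODE2 instead of on the resource’s most preferred node.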
7.1.2 Migrating Cluster Resources between Clusters
WARNING: Do not migrate resources for a test failover if the peer (LAN) connection between the source and destination cluster is down. Possible disk problems and data corruption could occur. This warning does not apply if resources are migrated during an actual cluster site failure.
To manually migrate cluster resources from one cluster to another:
1 Start your Internet browser and enter the URL for iManager.
The URL is http://server_ip_address/nps/iManager.html. Replace server_ip_address with the IP address or DNS name of the server that has iManager and the Identity Manager preconfigured templates for iManager installed.
2 Specify your username and password, specify the tree where you want to log in, then click
Login.
3 In the left column, click Clusters, then click the BCC Manager link.
4 Specify a cluster name, or browse and select one.
5 Select one or more cluster resources, then click BCC Migrate.
6 Select the cluster where you want to migrate the selected resources, then click OK.
The resources migrate to their preferred node on the destination cluster. If you select Any Configured Peer as the destination cluster, the Business Continuity Clustering software
chooses a destination cluster for you. The destination cluster that is chosen is the first cluster that is up in the peer clusters list for this resource.
7.2 Changing Cluster Peer Credentials
You can change the credentials that are used by one peer cluster to connect to another peer cluster. You might need to do this if the administrator username or password changes for any clusters in the business continuity cluster. To do this, you change the username and password for the administrative user that the selected cluster uses to connect to another selected peer cluster.
IMPORTANT: Make sure the new administrator username meets the requirements specified in
Section 4.3, “Configuring a BCC Administrator User,” on page 47.
1 Start your Internet browser and enter the URL for iManager.
The URL is http://server_ip_address/nps/iManager.html. Replace server_ip_address with the IP address or DNS name of the server that has iManager and the Identity Manager preconfigured templates for iManager installed.
IMPORTANT: In order to add or change cluster peer credentials, you must access iManager on a server that is in the same eDirectory™ tree as the cluster you are adding or changing peer credentials for.
2 Specify your username and password, specify the tree where you want to log in, then click Login.
3 In the left column, click Cluster Administration, then click the Management link.
4 Specify a cluster name, or browse and select one.
5 Click Connections and select a peer cluster.
6 Edit the administrator username and password that the selected cluster will use to connect to the selected peer cluster, then click OK.
When specifying a username, you do not include the Novell eDirectory context for the user name.
NOTE: If the business continuity cluster has clusters in multiple eDirectory trees, and you specify a common username and password, each eDirectory tree in the business continuity cluster must have the same username and password.
7.3 Viewing the Current Status of a Business Continuity Cluster
You can view the current status of your business continuity cluster by using either iManager or the server console of a cluster in the business continuity cluster.
Section 7.3.1, “Using iManager to View the Cluster Status,” on page 85
Section 7.3.2, “Using Console Commands to View the Cluster Status,” on page 86
7.3.1 Using iManager to View the Cluster Status
1 Start your Internet browser and enter the URL for iManager.
The URL is http://server_ip_address/nps/iManager.html. Replace server_ip_address with the IP address or DNS name of the server that has iManager and the Identity Manager preconfigured templates for iManager installed.
2 Specify your username and password, specify the tree where you want to log in, then click
Login.
3 In the left column, click Clusters, then click the BCC Manager link.
4 Specify a cluster name, or browse and select one.
5 Use this page to see if all cluster peer connections are up or if one or more peer connections are
down. You can also see the status of the BCC resources in the business continuity cluster.
7.3.2 Using Console Commands to View the Cluster Status
At the server console of a server in the business continuity cluster, enter the following commands to get different kinds of status information:
cluster view
cluster status
cluster connections
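As a rough guide to what each command reports (see Appendix A, “Console Commands for BCC,” on page 123 for the authoritative descriptions): cluster view shows this node’s view of cluster membership, cluster status shows the state and location of each cluster resource, and cluster connections shows the status of this cluster’s connections to its BCC peer clusters.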
7.4 Generating a Cluster Report
You can generate a report for each cluster in the business continuity cluster to list information on a specific cluster, such as current cluster configuration, cluster nodes, and cluster resources. You can print or save the report by using your browser.
1 Start your Internet browser and enter the URL for iManager.
The URL is http://server_ip_address/nps/iManager.html. Replace server_ip_address with the IP address or DNS name of the server that has iManager and the Identity Manager preconfigured templates for iManager installed.
2 Specify your username and password, specify the tree where you want to log in, then click
Login.
3 In the left column, click Clusters, then click the Cluster Manager link.
4 Specify a cluster name, or browse and select one.
5 Click the Run Report button.
7.5 Disabling Business Continuity Cluster Resources
After enabling a resource for business continuity, it is possible to disable it. You might want to disable BCC for a cluster resource in any of the following cases:
You accidentally enabled the resource for business continuity.
You no longer want the cluster resource to be able to fail over between peer clusters.
You plan to delete the cluster resource.
You plan to remove the peer cluster from the business continuity cluster. In this case, you must
disable BCC for each cluster resource before you disable BCC for the cluster.
IMPORTANT: If you disable Business Continuity Clustering for a cluster by using either iManager or the cluster disable console command, the cluster resources in that cluster that have been enabled for business continuity are automatically disabled for business continuity. If you re-enable Business Continuity Clustering for the cluster, you must again re-enable each of its cluster resources that you want to be enabled for business continuity.
This can be a time-consuming process if you have many cluster resources that are enabled for business continuity. For this reason, you should use caution when disabling Business Continuity Clustering for an entire cluster.
If BCC-enabled resources need to be BCC-disabled, remove the secondary peer clusters from the resource’s assigned list, then disable BCC only from the primary cluster, either by using iManager or the command line. Do not BCC-disable the same resource from multiple peer clusters.
To disable BCC for a cluster resource:
1 Start your Internet browser and enter the URL for iManager.
The URL is http://server_ip_address/nps/iManager.html. Replace server_ip_address with the IP address or DNS name of the server that has iManager and the Identity Manager preconfigured templates for iManager installed.
2 Specify your username and password, specify the tree where you want to log in, then click
Login.
3 In Roles and Tasks, click Clusters, then click the Cluster Options link.
4 Specify the cluster name, or browse and select it.
5 Select the desired cluster resource from the list of Cluster objects.
6 Click the Details link.
7 On the Preferred Nodes tab, remove the secondary peer clusters from the Assigned list, then
disable BCC for the resource on the primary peer cluster.
7a Click the Preferred Nodes tab.
7b From the Assigned Nodes list, select the servers in the peer clusters you want to unassign
from the resource, then click the left-arrow button to move the selected servers to the Unassigned Nodes list.
The primary peer cluster and the node where the resource is running cannot be moved from the Assigned list to the Unassigned list.
7c Click Apply to save node assignment changes.
8 On the Details page, click the Business Continuity tab, deselect the Enable Business Continuity
Features check box, then click Apply.
9 Wait for Identity Manager to synchronize the changes.
This could take from 30 seconds to one minute, depending on your configuration.
10 Delete the Cluster Resource object on the clusters where you no longer want the resource to
run.
7.6 Resolving Business Continuity Cluster Failures
There are several failure types associated with a business continuity cluster that you should be aware of. Understanding the failure types and knowing how to respond to each can help you more quickly recover a cluster. Some of the failure types and responses differ, depending on whether you have implemented SAN-based mirroring or host-based mirroring. Promoting or demoting LUNs is sometimes necessary when responding to certain types of failures.
NOTE: The terms promote and demote are used here in describing the process of changing LUNs to a state of primary or secondary, but your SAN vendor documentation might use different terms such as mask and unmask.
Section 7.6.1, “SAN-Based Mirroring Failure Types and Responses,” on page 88
Section 7.6.2, “Host-Based Mirroring Failure Types and Responses,” on page 89
7.6.1 SAN-Based Mirroring Failure Types and Responses
SAN-based mirroring failure types and responses are described in the following sections:
“Primary Cluster Fails but Primary SAN Does Not” on page 88
“Primary Cluster and Primary SAN Both Fail” on page 88
“Secondary Cluster Fails but Secondary SAN Does Not” on page 89
“Secondary Cluster and Secondary SAN Both Fail” on page 89
“Primary SAN Fails but Primary Cluster Does Not” on page 89
“Secondary SAN Fails but Secondary Cluster Does Not” on page 89
“Intersite SAN Connectivity Is Lost” on page 89
“Intersite LAN Connectivity Is Lost” on page 89
Primary Cluster Fails but Primary SAN Does Not
This type of failure can be temporary (transient) or long-term. There should be an initial response and then a long-term response based on whether the failure is transient or long-term. The initial response is to restore the cluster to normal operations. The long-term response is total recovery from the failure.
Promote the secondary LUN to primary. Cluster resources load (and become primary on the second cluster). If the former primary SAN has not been demoted to secondary, you might need to demote it manually. The former primary SAN must be demoted to secondary before bringing cluster servers back up. Consult your SAN hardware documentation for instructions on demoting and promoting SANs. You can use the cluster resetresources console command to change resource states to offline and secondary.
Prior to bringing up the cluster servers, you must ensure that the SAN is in a state in which the cluster resources cannot come online and cause a divergence in data. Divergence in data occurs when connectivity between SANs has been lost and both clusters assert that they have ownership of their respective disks.
Primary Cluster and Primary SAN Both Fail
Bring the primary SAN back up and follow your SAN vendor’s instructions to remirror and, if necessary, promote the former primary SAN back to primary. Then bring up the former primary cluster servers and fail back the cluster resources.
Secondary Cluster Fails but Secondary SAN Does Not
No additional response is necessary for this failure other than recovering the secondary cluster. When you bring the secondary cluster back up, the LUNs are still in a secondary state to the primary SAN.
Secondary Cluster and Secondary SAN Both Fail
Bring the secondary SAN back up and follow your SAN vendor's instructions to remirror. When you bring the secondary cluster back up, the LUNs are still in a secondary state to the primary SAN.
Primary SAN Fails but Primary Cluster Does Not
When the primary SAN fails, the primary cluster also fails. Bring the primary SAN back up and follow your SAN vendor’s instructions to remirror and, if necessary, promote the former primary SAN back to primary. You might need to demote the LUNs and resources to secondary on the primary SAN before bringing them back up. You can use the cluster resetresources console command to change resource states to offline and secondary. Bring up the former primary cluster servers and fail back resources.
Secondary SAN Fails but Secondary Cluster Does Not
When the secondary SAN fails, the secondary cluster also fails. Bring the secondary SAN back up and follow your SAN vendor’s instructions to remirror. Then bring the secondary cluster back up. When you bring the secondary SAN and cluster back up, resources are still in a secondary state.
Intersite SAN Connectivity Is Lost
Recover your SANs first, then remirror from the good side to the bad side.
Intersite LAN Connectivity Is Lost
Users might not be able to access servers in the primary cluster but can possibly access servers in the secondary cluster. If both clusters are up, nothing additional is required. An error is displayed. Wait for connectivity to resume.
If you have configured the automatic failover feature, see Appendix C, “Setting Up Auto-Failover,”
on page 133.
7.6.2 Host-Based Mirroring Failure Types and Responses
“Primary Cluster Fails but Primary SAN Does Not” on page 90
“Primary Cluster and Primary SAN Both Fail” on page 90
“Secondary Cluster Fails but Secondary SAN Does Not” on page 90
“Secondary Cluster and Secondary SAN Both Fail” on page 90
“Primary SAN Fails but Primary Cluster Does Not” on page 90
“Secondary SAN Fails but Secondary Cluster Does Not” on page 90
“Intersite SAN Connectivity Is Lost” on page 90
“Intersite LAN Connectivity Is Lost” on page 91
Primary Cluster Fails but Primary SAN Does Not
Response for this failure is the same as for SAN-based mirroring described in Primary Cluster Fails
but Primary SAN Does Not in Section 7.6.1, “SAN-Based Mirroring Failure Types and Responses,” on page 88. Do not disable MSAP (Multiple Server Activation Prevention), which is enabled by
default.
Primary Cluster and Primary SAN Both Fail
Bring up your primary SAN or iSCSI target before bringing up your cluster servers. Then run the Cluster Scan For New Devices command from a secondary cluster server. Ensure that remirroring completes before bringing downed cluster servers back up.
If necessary, promote the former primary SAN back to primary. Then bring up the former primary cluster servers and fail back the cluster resources.
Secondary Cluster Fails but Secondary SAN Does Not
No additional response is necessary for this failure other than recovering the secondary cluster. When you bring the secondary cluster back up, the LUNs are still in a secondary state to the primary SAN.
Secondary Cluster and Secondary SAN Both Fail
Bring up your secondary SAN or iSCSI target before bringing up your cluster servers. Then run the Cluster Scan For New Devices command on a primary cluster server to ensure that remirroring takes place. When you bring the secondary cluster back up, the LUNs are still in a secondary state to the primary SAN.
Primary SAN Fails but Primary Cluster Does Not
If your primary SAN fails, all nodes in your primary cluster also fail. Bring up your primary SAN or iSCSI target and then bring up your cluster servers. Then run the Cluster Scan For New Devices command from a secondary cluster server. Ensure that remirroring completes before bringing downed cluster servers back up.
If necessary, promote the former primary SAN back to primary. You might need to demote the LUNs and resources to secondary on the primary SAN before bringing them back up. You can use the cluster resetresources console command to change resource states to offline and secondary. Bring up the former primary cluster servers and fail back resources.
Secondary SAN Fails but Secondary Cluster Does Not
Bring up your secondary SAN or iSCSI target before bringing up your cluster servers. Then run the Cluster Scan For New Devices command on a primary cluster server to ensure that remirroring takes place. Then bring the secondary cluster back up. When you bring the secondary SAN and cluster back up, resources are still in a secondary state.
Intersite SAN Connectivity Is Lost
You must run the Cluster Scan For New Devices command on both clusters to ensure that remirroring takes place. Recover your SANs first, then remirror from the good side to the bad side.
Intersite LAN Connectivity Is Lost
Users might not be able to access servers in the primary cluster but can possibly access servers in the secondary cluster. If both clusters are up, nothing additional is required. An error is displayed. Wait for connectivity to resume.
If you have configured the automatic failover feature, see Appendix C, “Setting Up Auto-Failover,”
on page 133.
8 Virtual IP Addresses
With the release of NetWare® 6.5, Novell® enhanced the TCP/IP stack to support virtual IP addresses. This feature is another high-availability offering that enables administrators to easily manage the name-to-IP address associations of business services. It complements the existing load balancing and fault tolerance features of the TCP/IP stack and enhances the availability of servers that reside on multiple subnets.
A virtual IP address is an IP address that is bound to a virtual Network Interface Card (NIC) and is driven by a new virtual driver named vnic.lan. As the name suggests, this virtual NIC is a purely virtual entity that has no physical hardware counterpart. A virtual NIC can be thought of as a conventional TCP/IP loopback interface with added external visibility. Virtual IP addresses can also be thought of as conventional loopback addresses with the 127.0.0.0 IP network constraint relaxed. A server with a virtual NIC and a virtual IP address acts as an interface to a virtual internal IP network that contains the server as the one and only host.
Regardless of their virtual nature, virtual IP addresses and virtual NICs behave like physical IP addresses and physical NICs, and they are similarly configured by using either the INETCFG server-based utility or the Novell Remote Manager Web-based utility.
Section 8.1, “Virtual IP Address Definitions and Characteristics,” on page 93
Section 8.2, “Virtual IP Address Benefits,” on page 94
Section 8.3, “Reducing the Consumption of Additional IP Addresses,” on page 98
Section 8.4, “Configuring Virtual IP Addresses,” on page 99
8.1 Virtual IP Address Definitions and Characteristics
Section 8.1.1, “Definitions,” on page 93
Section 8.1.2, “Characteristics,” on page 94
8.1.1 Definitions
Virtual driver: The vnic.lan driver provided by Novell.
Virtual board (NIC): Any board configured to use the virtual driver.
Virtual IP address: Any IP address that is bound to a virtual board.
Virtual IP network: The IP network that the virtual IP address is a part of. This is defined by the virtual IP address together with the IP network mask that it is configured with.
Host mask: The IP network mask consisting of all 1s - FF.FF.FF.FF (255.255.255.255).
Physical IP address: Any IP address that is not a virtual IP address. It is an IP address that is
configured over a physical hardware NIC.
Physical IP network: An IP network that a physical IP address is a part of. A physical IP network identifies a logical IP network that is configured over a physical hardware wire.
8.1.2 Characteristics
Virtual IP addresses are unique in that they are bound to a virtual “ether” medium instead of to a “physical” network medium such as Ethernet. In other words, the virtual IP address space is different than the physical IP address space. As a result, virtual IP network numbers need to be different from physical IP network numbers. However, this mutual exclusivity of the IP address space for the physical and virtual networks doesn’t preclude the possibility of configuring multiple virtual IP networks in a single network domain.
8.2 Virtual IP Address Benefits
In spite of their simplicity, virtual IP addresses offer the following advantages over their physical counterparts:
Section 8.2.1, “High Availability,” on page 94
Section 8.2.2, “Unlimited Mobility,” on page 97
Section 8.2.3, “Support for Host Mask,” on page 97
Section 8.2.4, “Source Address Selection for Outbound Connections,” on page 97
These advantages exist because virtual IP addresses are purely virtual and are not bound to a physical network wire.
8.2.1 High Availability
If a virtual IP address is defined on a multihomed server with more than one physical NIC, a virtual IP address is a highly reachable IP address on the server when compared to any of the physical IP addresses. This is especially true in the event of server NIC failures. This assumes that the server is running a routing protocol and is advertising its “internal” virtual IP network—which only it knows about and can reach—to other network nodes.
Physical IP addresses might not be reachable because:
TCP/IP protocols use link-based (network-based) addressing to identify network nodes. As a
result, the routing protocols preferentially deliver a packet to the server through the network that the target IP address is part of.
Dynamic routing protocols are extremely resilient to intermediate link and router failures, but
they do not adapt well to failures of links at the last hop that ultimately delivers a packet to its destination.
This is because the last hop link is typically a stub link that does not carry any routing heartbeats. Therefore, if one of the physical cards in a server fails, the server can become inaccessible, along with any service that it hosts on the corresponding physical IP address. This can occur in spite of the fact that the server is still up and running and can be reached through the other network card.
The virtual IP address feature circumvents this problem by creating a virtual IP network different from any of the existing physical IP networks. As a result, any packet that is destined for the virtual IP address is forced to use a virtual link as its last hop link. Because it is purely virtual, this last hop link can be expected to always be up. Also, because all other real links are forcibly made to act as intermediate links, their failures are easily handled by the dynamic routing protocols.
The following figure illustrates a multihomed server with all nodes running a dynamic routing protocol.
Figure 8-1 Multihomed Server Running a Dynamic Routing Protocol
[Figure: a multihomed server with interfaces 1.1.1.1 and 2.2.2.1 connects through Router 1 and Router 2 to a client at 3.3.3.3; an X marks a failed server interface.]
In this network, the server is a multihomed server hosting a critical network service. For simplicity, assume that all nodes are running some dynamic routing protocol.
If the client attempts to communicate with the server with the 1.1.1.1 IP address, it tries to reach the server through the nearest router, which is Router 1. If the 1.1.1.1 interface were to fail, Router 1 would continue to advertise reachability to the 1.0.0.0/FF.0.0.0 network and the client would continue to forward packets to Router 1. These undeliverable packets would ultimately be dropped by Router 1. Therefore, in spite of the fact that the service is still up and running and can be reached through the other active interface, it is rendered unreachable. In this scenario, a recovery would involve the ability of the client application to retry the alternate IP address 2.2.2.1 returned by the name server.
Consider the same scenario with the server configured with a virtual IP address and the client communicating with the virtual IP address instead of one of the server’s real IP addresses, as shown in the following figure.
Figure 8-2 Multihomed Server Using Virtual IP Addresses
[Figure: the same network as Figure 8-1, with the virtual IP address 4.4.4.1 bound on the server in addition to its physical addresses 1.1.1.1 and 2.2.2.1.]
In this configuration, if the 1.1.1.1 interface were to fail, the client would ultimately learn the new route through Router 2 and would correctly forward packets to Router 2 instead of Router 1. Thus, despite physical interface failures, a virtual IP address on a multihomed server acts as an always-reachable IP address for the server.
Generally speaking, if a connection between two machines is established by using a virtual IP address as the end-point address at either end, the connection is resilient to interface failures at either end.
There are two important side effects that directly follow from the highly reachable nature of virtual IP addresses:
They completely and uniquely identify a multihomed server
A multihomed server with a virtual IP address no longer needs to carry multiple DNS entries for its name in the naming system.
They significantly enhance the LAN redundancy inherent in a multihomed server
If one of the subnets that a server interfaces to fails completely or is taken out of service for maintenance, the routing protocols reroute the packets addressed to the virtual IP address through one of the other active subnets.
The resilience against interface failures provided by virtual IP addresses depends on the fault resilience provided by the dynamic routing protocols, as well as on fault recovery features such as retransmissions built into the application logic.
8.2.2 Unlimited Mobility
Unlike physical IP addresses, which are limited in their mobility, virtual IP addresses are highly mobile. The degree of mobility is determined by the number of servers that an IP address on a specific server could be moved to. In other words, if you choose a physical IP address as an IP address of a network resource, you are limiting the set of potential servers to which this resource could transparently fail over to.
If you choose a virtual IP address, the set of servers that the resource could be transparently moved to is potentially unlimited. This is because of the nature of virtual IP addresses; they are not bound to a physical wire and, as a result, they carry their virtual network to wherever they are moved. Again, there is an implicit assumption that the location of a virtual IP address is advertised to the owning server through some routing protocol. The ability to move an IP address across different machines becomes particularly important when you need to transparently move or fail over a network resource that is identified by an IP address (which could be a shared volume or a mission-critical service) to another server on another network.
This unlimited mobility of virtual IP addresses is an advantage to network administrators, offering more ease of manageability and greatly minimizing network reorganization overhead. For network administrators, shuffling services between different IP networks is the rule rather than the exception. The need often arises to move a machine hosting a particular service to some other IP network, or to move a service hosted on a particular machine to be rehosted on some other machine connected to a different IP network. If the service is hosted on a physical IP address, accommodating these changes involves rehosting the service on a different IP address pulled out from the new network, and appropriately changing the DNS entry for the service to point to the new IP address. However, if the service is hosted on a virtual IP address, the necessity of changing the DNS entries for the service is eliminated.
8.2.3 Support for Host Mask
Virtual boards support configuring virtual IP addresses with a host mask. This results in a single address being used rather than an entire subnet. See Section 8.3, “Reducing the Consumption of
Additional IP Addresses,” on page 98.
8.2.4 Source Address Selection for Outbound Connections
Full resilience of connections to interface failures can be ensured only when the connections are established between machines through using virtual IP addresses as end point addresses. This means an application that initiates outbound connections to a virtual IP address should also use a virtual IP address as its local end point address.
This isn’t difficult if the application binds its local socket end point address with a virtual IP address. However, there are some legacy applications that bind their sockets to a wildcard address (such as
0.0.0.0). When these applications initiate an outbound connection to other machines, TCP/IP chooses the outbound interface’s IP address as the local socket end point address. In order for these legacy applications to take advantage of the fault resilience provided by the virtual IP address feature, the default source address selection behavior of TCP/IP has been enhanced to accommodate the use of a virtual IP address as the source IP address. As a result, whenever a TCP or UDP application initiates an outbound connection with a wildcard source IP address, TCP/IP chooses the first bound virtual IP address as the source IP address for the connection.
This enhanced source address selection feature can be enabled or disabled globally as well as on a per-interface basis. This feature is enabled by default on all interfaces.
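To illustrate this behavior with the addresses from Figure 8-2: if a legacy application on the server binds its socket to 0.0.0.0 and then initiates an outbound connection, TCP/IP selects the first bound virtual IP address, 4.4.4.1, as the source address instead of a physical address such as 1.1.1.1. Because the peer replies to 4.4.4.1, the connection can survive a failure of the 1.1.1.1 interface, with replies rerouted through the remaining interface.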
8.3 Reducing the Consumption of Additional IP Addresses
In any network environment, one of the first obstacles is how clients locate and connect to the services. A business continuity cluster can exacerbate this problem because services can migrate to nodes on a completely different network segment. Although there are many potential solutions to this problem, such as DNS and SLP, none of them offers the simplicity and elegance of virtual IP addresses. With virtual IP addresses, the IP address of the service can follow the service from node to node in a single cluster, as well as from node to node in separate, distinct clusters. This makes the client reconnection problem trivial; the client just waits for the new route information to be propagated to the routers on the network. No manual steps are required, such as modifying a DNS server.
The only drawback in using virtual IP addresses is the consumption of additional IP addresses. This constraint stems from the requirement that virtual IP network addresses must be different from all other real IP network addresses. Although this constraint is not particularly severe in enterprises that use private addressing (where the IP address space is potentially large), it could become limiting in organizations that do not use private addresses.
To use a virtual IP address in a business continuity cluster, we recommend using a host mask. To understand why, consider the fact that each service in a clustered environment must have its own unique IP address or, in this case, a unique virtual IP address. Furthermore, consider that each virtual IP address belongs to a virtual IP network whose route is being advertised by a single node within a cluster. Because Novell Cluster Services™ can migrate a service and its virtual IP address from one node to another, the virtual IP network must migrate to the same node as the service. If multiple virtual IP addresses belong to a given virtual IP network, one of two events must occur:
All services associated with the virtual IP addresses on a given virtual IP network must fail
over together.
The virtual IP addresses on a given virtual IP network must go unused, thereby wasting a
portion of the available address space.
Neither of these situations is desirable. Fortunately, the use of host masks remedies both.
In enterprises that use fixed-length subnetting together with a dynamic routing protocol like RIP-I, each virtual IP address could consume a large number of host IP addresses. One way to circumvent this problem is to configure a virtual IP address with a host mask of all 1s (that is, FF.FF.FF.FF), thereby consuming only one host IP address. Of course, the viability of this option depends on the ability of the RIP-I routers on the network to recognize and honor the advertised host routes.
In autonomous systems that use variable-length subnet masking (VLSM) together with routing protocols like RIP-II or OSPF, the consumption of additional IP addresses is not a major problem. You could simply configure a virtual IP address with an IP network mask as large as possible (including a host mask of all 1s), thereby limiting the number of addresses consumed by the virtual IP address space.
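As a quick worked example (the prefix is hypothetical): with fixed-length subnetting and a 255.255.255.0 mask, dedicating a virtual IP network to the single address 10.50.1.1 reserves all 254 host addresses of the 10.50.1.0 network for one service. Configuring the same address with the host mask FF.FF.FF.FF (255.255.255.255) defines a virtual network containing only 10.50.1.1 itself, so each BCC-enabled service consumes exactly one IP address.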
8.4 Configuring Virtual IP Addresses
The routers in a virtual IP address configuration must be running the RIP I or RIP II protocols. For a business continuity cluster, RIP II is the preferred protocol and should be used whenever possible. In NetWare, this can be accomplished by configuring the NetWare RIP Bind Options to use RIP I and RIP II, or RIP II only. Also, the command SET RIP2 AGGREGATION OVERRIDE=ON must be added to the autoexec.ncf file of any NetWare routers in this configuration.
After the appropriate virtual IP addresses and host masks have been determined, you can enable virtual IP addresses in a business continuity cluster by using the following process:
1. The autoexec.ncf file on each node in both clusters must be modified to add the following two lines. The first line loads the virtual driver and creates a virtual board named VNIC. The second line disables RIP 2 route aggregation on the cluster nodes.
LOAD VNIC NAME=VNIC
SET RIP2 AGGREGATION OVERRIDE=ON
2. The command to bind a virtual IP address for the service must be added to the cluster resource load script.
The following is an example of a cluster resource load script for a standard NetWare volume called HOMES. This example uses host masks and assumes the virtual board has been named VNIC. Notice that the command to add a secondary IP address has been replaced with the BIND IP VNIC Mask=255.255.255.255 Address=10.1.1.1 command, which binds the virtual IP address 10.1.1.1 to the virtual board.
nss /poolactivate=HOMES
mount HOMES VOLID=254
CLUSTER CVSBIND ADD BCC_HOMES_SERVER 10.1.1.1
NUDP ADD BCC_HOMES_SERVER 10.1.1.1
BIND IP VNIC Mask=255.255.255.255 Address=10.1.1.1
3. The command to unbind the virtual IP address must be added to the cluster resource unload script.
The following is the matching cluster resource unload script for the same NetWare volume discussed above. Notice that the command to delete the secondary IP address has been replaced with the UNBIND IP VNIC Address=10.1.1.1 command, which unbinds the virtual IP address 10.1.1.1 from the virtual board.
UNBIND IP VNIC Address=10.1.1.1
CLUSTER CVSBIND DEL BCC_HOMES_SERVER 10.1.1.1
NUDP DEL BCC_HOMES_SERVER 10.1.1.1
nss /pooldeactivate=HOMES /overridetype=question
4. If the cluster resource is a cluster-enabled pool or volume, the IP address of that resource needs to be changed to the virtual IP address. You can do this by using ConsoleOne®, Novell Remote Manager, or iManager. This change is not needed for any non-volume cluster resources like DHCP.
8.4.1 Displaying Bound Virtual IP Addresses
To verify that a virtual IP address is bound, enter the display secondary ipaddress command at the server console of the cluster server where the virtual IP address is assigned. This displays all bound virtual IP addresses. A maximum of 256 virtual IP addresses can be bound.