HP Ignite-UX White Paper

Configuring an Ignite-UX server under HP Serviceguard
Table of Contents
About this document
Intended audience
Advantage of running Ignite-UX under Serviceguard
Setting up Ignite-UX to run under Serviceguard
   Serviceguard packages and scripts
   Procedure to set up Ignite under Serviceguard
      1. Create an LVM volume group to manage the shared file systems
      2. Create logical volumes for the shared file systems
      3. Create mount points for the shared file systems
      4. Add the -s option to /usr/lbin/tftpd
      5. Mount /var/opt/ignite and create required directories on the first cluster node
      6. Unmount /var/opt/ignite on the first cluster node and deactivate the volume group
      7. Configure group for exclusive access and export
      8. Import volume group data on all cluster nodes
      9. Copy NFS toolkit cluster scripts in place
      10. Create package config and scripts
      11. Edit parameters in Ignite scripts and config files on the first node
      12. Copy completed package directory to all nodes
      13. Add the new package to the cluster on the first node
      14. Bring up the package on each node and install Ignite-UX
      15. Update the Ignite server IP address
      16. Create archive directories for cluster nodes
      17. Add clients and create recovery archives
Best practices
   Upgrading Ignite-UX
   Inactive cluster nodes and swverify
   Other depots under shared file systems
   Booting from the cluster using /etc/bootptab
   Edit scripts on all cluster nodes for NFS client mounts
   When using the Ignite UI or commands, always use the virtual server hostname and IP for the package
   Managing recovery images for cluster nodes
For more information
About this document
This white paper describes the procedure for installing Ignite-UX on an HP Serviceguard cluster running HP-UX 11i v3. It assumes that you have a cluster up and running, that shared storage is available for recovery archives, that Serviceguard NFS Toolkit is installed, and that Ignite-UX is not yet installed. This paper covers configuring Ignite-UX to use an LVM shared volume and was written using the September 2008 DC-OE, which contains Serviceguard A.11.18.00 (T1905CA) and Serviceguard NFS Toolkit A.11.31.03 (B5140BA).
Intended audience
This document is intended for system and network administrators responsible for installing, configuring, and managing HP-UX systems. Familiarity with Ignite-UX and HP Serviceguard is assumed. Administrators are expected to have knowledge of operating system concepts, commands, and configuration.
Advantage of running Ignite-UX under Serviceguard
Serviceguard clusters are made up of HP Integrity or HP 9000 servers configured with software and hardware redundancies so your environment continues to run even when there is a failure. Each server in a cluster is called a node.
Serviceguard allows Ignite-UX to run in a clustered environment. One advantage of this is the creation of a highly available recovery server: Serviceguard monitors the cluster nodes, networks, and processes, and moves the Ignite-UX recovery service to another node in the event of a failure.
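For example, once the Ignite package described in this paper is in place, standard Serviceguard commands show where the service is running and let you move it manually for maintenance. A minimal sketch, assuming a package named ignite and a second node named node2 (both names are illustrative, not values from this paper):

# cmviewcl -v                 # show cluster, node, and package status
# cmhaltpkg ignite            # halt the package on its current node
# cmrunpkg -n node2 ignite    # start it on the other node
# cmmodpkg -e ignite          # re-enable automatic failover for the package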
Setting up Ignite-UX to run under Serviceguard
HP Serviceguard is available as a recommended product in the HP-UX 11i v3 High Availability OE (HA-OE) and the Data Center OE (DC-OE). The Ignite-UX product is included as an optional product on all the HP-UX 11i v3 OEs or can be downloaded via the Ignite-UX product website at:
http://h71028.www7.hp.com/enterprise/w1/en/os/hpux11i-system-management-ignite-ux.html?jumpid=ex_r1533_us/en/large/tsg/go_ignite-ux.
For information on how to configure a Serviceguard cluster, see the Serviceguard documentation available at: http://www.hp.com/go/hpux-serviceguard-docs
Serviceguard packages and scripts
Serviceguard packages group together applications and the services they depend on. A package and a set of scripts must be created for Ignite. The scripts configure Serviceguard for:
• Processes to monitor - for example, NFS is used by Ignite
• Networks to monitor - this provides failover if a network interface goes down
• Storage to manage - for example, Ignite recovery archives on a RAID array
• IP addresses to manage - the Ignite "service" has a static IP regardless of the node it is running on
• File systems to export - only the active cluster node has exported file systems
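As an illustration of how these responsibilities map onto a legacy package control script, the fragment below sketches the relevant script arrays. All values are examples only; the actual package files are created later in this procedure (steps 10 and 11):

VG[0]="vg01"                          # shared storage the package activates
LV[0]="/dev/vg01/lvol2"               # logical volume to mount...
FS[0]="/var/opt/ignite"               # ...at this mount point
FS_MOUNT_OPT[0]="-o rw,largefiles"    # mount options
IP[0]="192.10.25.12"                  # relocatable "service" IP (example address)
SUBNET[0]="192.10.25.0"               # monitored subnet for network failover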
Procedure to set up Ignite under Serviceguard
1. Create an LVM volume group to manage the shared file systems
This volume group will contain /etc/opt/ignite, /opt/ignite/boot, and /var/opt/ignite. This must be on a RAID array and must be accessible by each of the other cluster nodes. You must have approximately 1 GB of free disk space available in /opt/ignite/boot. For this example, we have a 104 GB disk.
If you use a location other than /var/opt/ignite to store archives on the Ignite server, you will need to change these instructions so your custom location is available via NFS as part of this package. If your archives are stored outside the cluster, using a highly available Ignite package in a Serviceguard cluster does not improve the availability of your archives - if the archive server is down, you will not be able to perform any recoveries.
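If you are not sure which device special file corresponds to the shared LUN, one way to list the disks visible to the node, along with their legacy device files, is:

# ioscan -fnC disk

The diskinfo command in the following example confirms the size and type of the chosen device.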
Execute the following on the first cluster node.
# diskinfo /dev/rdsk/c13t0d2
SCSI describe of /dev/rdsk/c13t0d2:
             vendor: COMPAQ
         product id: HSV111 (C)COMPAQ
               type: direct access
               size: 104857600 Kbytes
   bytes per sector: 512
#
# mkdir /dev/vg01
# chmod 755 /dev/vg01
# mknod /dev/vg01/group c 64 0x010000
For the mknod command, the major number is always 64, and the hexadecimal minor number has the form 0xhh0000, where hh must be unique to the volume group you are creating. You must use the same minor number for the volume group on all nodes, so determine in advance which minor number is free on every cluster member by reviewing those already in use, as shown below. Once you have found an available minor number, use it in place of the 0x010000 value shown in the example.
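One way to review the minor numbers already in use is to list every volume group's group file on each cluster node; the minor number appears after the major number 64 (sample output shown, yours will differ):

# ll /dev/*/group
crw-r-----   1 root   sys     64 0x000000 Oct  1 09:00 /dev/vg00/group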
# chmod 640 /dev/vg01/group
# pvcreate -f /dev/rdsk/c13t0d2
Physical volume "/dev/rdsk/c13t0d2" has been successfully created.
# vgcreate -s 16 /dev/vg01 /dev/dsk/c13t0d2
Increased the number of physical extents per physical volume to 6399.
Volume group "/dev/vg01" has been successfully created.
Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf
The name of the volume group is not important; you may use any name supported by Serviceguard instead of vg01. If you do use an alternative name, substitute any reference to vg01 in this documentation with the name you have chosen.
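Optionally, you can confirm the extent size and physical volume of the new group before creating logical volumes:

# vgdisplay -v /dev/vg01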
2. Create logical volumes for the shared file systems
You will need logical volumes for /etc/opt/ignite, /var/opt/ignite, and /opt/ignite/boot on the first cluster node.
# lvcreate -L 1000 -n lvol1 /dev/vg01
Warning: rounding up logical volume size to extent boundary at size "1008" MB.
Logical volume "/dev/vg01/lvol1" has been successfully created with
character device "/dev/vg01/rlvol1".
Logical volume "/dev/vg01/lvol1" has been successfully extended.
Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf
# lvcreate -L 100000 -n lvol2 /dev/vg01
Logical volume "/dev/vg01/lvol2" has been successfully created with
character device "/dev/vg01/rlvol2".
Logical volume "/dev/vg01/lvol2" has been successfully extended.
Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf
# lvcreate -L 1000 -n lvol3 /dev/vg01
Warning: rounding up logical volume size to extent boundary at size "1008" MB.
Logical volume "/dev/vg01/lvol3" has been successfully created with
character device "/dev/vg01/rlvol3".
Logical volume "/dev/vg01/lvol3" has been successfully extended.
Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf
# newfs -F vxfs /dev/vg01/rlvol1
version 6 layout
1032192 sectors, 1032192 blocks of size 1024, log size 16384 blocks
largefiles supported
# newfs -F vxfs /dev/vg01/rlvol2
version 6 layout
102400000 sectors, 102400000 blocks of size 1024, log size 16384 blocks
largefiles supported
# newfs -F vxfs /dev/vg01/rlvol3
version 6 layout
1032192 sectors, 1032192 blocks of size 1024, log size 16384 blocks
largefiles supported
# lvdisplay /dev/vg01/lvol1
--- Logical volumes ---
LV Name                     /dev/vg01/lvol1
VG Name                     /dev/vg01
LV Permission               read/write
LV Status                   available/syncd
Mirror copies               0
Consistency Recovery        MWC
Schedule                    parallel
LV Size (Mbytes)            1008
Current LE                  63
Allocated PE                63
Stripes                     0
Stripe Size (Kbytes)        0
Bad block                   on
Allocation                  strict
IO Timeout (Seconds)        default
# lvdisplay /dev/vg01/lvol2
--- Logical volumes ---
LV Name                     /dev/vg01/lvol2
VG Name                     /dev/vg01
LV Permission               read/write
LV Status                   available/syncd
Mirror copies               0
Consistency Recovery        MWC
Schedule                    parallel
LV Size (Mbytes)            100000
Current LE                  6250
Allocated PE                6250
Stripes                     0
Stripe Size (Kbytes)        0
Bad block                   on
Allocation                  strict
IO Timeout (Seconds)        default
# lvdisplay /dev/vg01/lvol3
--- Logical volumes ---
LV Name                     /dev/vg01/lvol3
VG Name                     /dev/vg01
LV Permission               read/write
LV Status                   available/syncd
Mirror copies               0
Consistency Recovery        MWC
Schedule                    parallel
LV Size (Mbytes)            1008
Current LE                  63
Allocated PE                63
Stripes                     0
Stripe Size (Kbytes)        0
Bad block                   on
Allocation                  strict
IO Timeout (Seconds)        default
The file systems above were created with default options; if you require custom options, you may specify them. If your Ignite server will store archives in the default location, you must ensure that the file system mounted at /var/opt/ignite supports largefiles.
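As the newfs output above indicates, these VxFS file systems were created with largefiles support. If in doubt, you can query the flag on the unmounted file system with fsadm, and enable it if necessary:

# fsadm -F vxfs /dev/vg01/rlvol2
largefiles
# fsadm -F vxfs -o largefiles /dev/vg01/rlvol2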
3. Create mount points for the shared file systems
You will need /var/opt/ignite, /opt/ignite/boot, and /etc/opt/ignite on all cluster nodes with the correct permissions and ownership.
# mkdir -p /etc/opt/ignite
# mkdir -p /var/opt/ignite
# mkdir -p /opt/ignite/boot
# chown bin:bin /var/opt/ignite
# chmod 555 /var/opt/ignite
# chown bin:bin /etc/opt/ignite
# chmod 755 /etc/opt/ignite
# chown bin:bin /opt/ignite/boot
# chmod 555 /opt/ignite/boot
4. Add the -s option to /usr/lbin/tftpd
Add the -s option to the /usr/lbin/tftpd entry in /etc/inetd.conf on all cluster nodes that can run the Serviceguard package containing Ignite-UX. Each such node must have an entry in /etc/inetd.conf similar to the following:
tftp dgram udp wait root /usr/lbin/tftpd tftpd -s\
     /opt/ignite\
     /var/opt/ignite
For more information about the tftpd -s option, see tftpd(1M).
The directories /opt/ignite and /var/opt/ignite are also required for Ignite-UX to work correctly. If Ignite-UX is installed on only one cluster node, these directory entries will not be present on the other nodes and must be added manually.
The changes do not take effect until inetd rereads /etc/inetd.conf; run the "inetd -c" command to force a reconfiguration.
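A quick check on each node, assuming the entry was edited as shown above:

# grep '^tftp' /etc/inetd.conf    # confirm the -s entry is present
# inetd -c                        # signal inetd to reread its configuration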
If the -s option is not present in the tftpd entry, the kernel routing table determines the IP address used to reply to the client. The booting client might not accept the reply, because it might not come from the expected IP address, which can result in intermittent network boot errors. On HP Integrity systems, errors such as PXE-E18 might be displayed; on HP 9000 systems, the errors can be difficult to diagnose. Without the -s option, whether the error occurs depends on package movements and on the order in which packages are started on the cluster node where the Ignite-UX package is running.
5. Mount /var/opt/ignite and create required directories on the first cluster node
Create /var/opt/ignite/data, /var/opt/ignite/clients, and /var/opt/ignite/recovery/archives on the first cluster node, then set permissions and ownership. These directories are exported when the service runs, so they must be there prior to installing Ignite.
# mount /dev/vg01/lvol2 /var/opt/ignite
# mkdir /var/opt/ignite/data
# mkdir /var/opt/ignite/clients
# mkdir -p /var/opt/ignite/recovery/archives
# chown bin:bin /var/opt/ignite/clients