Product specifications are subject to change without notice and do not represent a commitment on the part of Avid Technology,
Inc.
The software described in this document is furnished under a license agreement. You can obtain a copy of that license by
visiting Avid's Web site at www.avid.com. The terms of that license are also available in the product in the same directory as
the software. The software may not be reverse assembled and may be used or copied only in accordance with the terms of the
license agreement. It is against the law to copy the software on any medium except as specifically allowed in the license
agreement.
Avid products or portions thereof are protected by one or more of the following United States Patents: 4,970,663; 5,267,351;
5,309,528; 5,355,450; 5,396,594; 5,440,348; 5,467,288; 5,513,375; 5,528,310; 5,557,423; 5,577,190; 5,584,006; 5,627,765;
5,640,601; 5,644,364; 5,654,737; 5,715,018; 5,719,570; 5,724,605; 5,726,717; 5,729,673; 5,745,637; 5,752,029; 5,754,851;
5,799,150; 5,812,216; 5,828,678; 5,842,014; 5,852,435; 5,999,406; 6,038,573; 6,061,758; 6,141,007; 6,211,869; 6,532,043;
6,546,190; 6,596,031; 6,636,869; 6,747,705; 6,763,523; 6,766,357; 6,813,622; 6,847,373; 7,081,900; RE40,107; D352,278;
D372,478; D373,778; D392,267; D392,268; D392,269; D395,291; D396,853; D398,912. Other patents are pending.
This document is protected under copyright law. An authorized licensee of Avid Interplay may reproduce this publication for the
licensee’s own use in learning how to use the software. This document may not be reproduced or distributed, in whole or in
part, for commercial purposes, such as selling copies of this document or providing support or educational services to others.
This document is supplied as a guide for Interplay Framework. Reasonable care has been taken in preparing the information it
contains. However, this document may contain omissions, technical inaccuracies, or typographical errors. Avid Technology,
Inc. does not accept responsibility of any kind for customers’ losses due to the use of this document. Product specifications
are subject to change without notice.
Permission to use, copy, modify, distribute, and sell this software [i.e., the TIFF library] and its documentation for any purpose
is hereby granted without fee, provided that (i) the above copyright notices and this permission notice appear in all copies of
the software and related documentation, and (ii) the names of Sam Leffler and Silicon Graphics may not be used in any
advertising or publicity relating to the software without the specific, prior written permission of Sam Leffler and Silicon
Graphics.
THE SOFTWARE IS PROVIDED “AS-IS” AND WITHOUT WARRANTY OF ANY KIND, EXPRESS, IMPLIED OR
OTHERWISE, INCLUDING WITHOUT LIMITATION, ANY WARRANTY OF MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE.
IN NO EVENT SHALL SAM LEFFLER OR SILICON GRAPHICS BE LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT
OR CONSEQUENTIAL DAMAGES OF ANY KIND, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE,
DATA OR PROFITS, WHETHER OR NOT ADVISED OF THE POSSIBILITY OF DAMAGE, AND ON ANY THEORY OF
LIABILITY, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
The following disclaimer is required by the Independent JPEG Group:
This software is based in part on the work of the Independent JPEG Group.
This Software may contain components licensed under the following conditions:
Copyright (c) 1989 The Regents of the University of California. All rights reserved.
Redistribution and use in source and binary forms are permitted provided that the above copyright notice and this paragraph
are duplicated in all such forms and that any documentation, advertising materials, and other materials related to such
distribution and use acknowledge that the software was developed by the University of California, Berkeley. The name of the
University may not be used to endorse or promote products derived from this software without specific prior written
permission. THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES,
INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE.
Copyright (C) 1989, 1991 by Jef Poskanzer.
Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby
granted, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission
notice appear in supporting documentation. This software is provided "as is" without express or implied warranty.
Copyright 1995, Trinity College Computing Center. Written by David Chappell.
Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby
granted, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission
notice appear in supporting documentation. This software is provided "as is" without express or implied warranty.
Copyright 1996 Daniel Dardailler.
Permission to use, copy, modify, distribute, and sell this software for any purpose is hereby granted without fee, provided that
the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in
supporting documentation, and that the name of Daniel Dardailler not be used in advertising or publicity pertaining to
distribution of the software without specific, written prior permission. Daniel Dardailler makes no representations about the
suitability of this software for any purpose. It is provided "as is" without express or implied warranty.
Modifications Copyright 1999 Matt Koss, under the same license as above.
Copyright (c) 1991 by AT&T.
Permission to use, copy, modify, and distribute this software for any purpose without fee is hereby granted, provided that this
entire notice is included in all copies of any software which is or includes a copy or modification of this software and in all
copies of the supporting documentation for such software.
THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR IMPLIED WARRANTY. IN PARTICULAR,
NEITHER THE AUTHOR NOR AT&T MAKES ANY REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE
MERCHANTABILITY OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE.
This product includes software developed by the University of California, Berkeley and its contributors.
The following disclaimer is required by Nexidia Inc.:
Manufactured under license from the Georgia Tech Research Corporation, U.S.A. Patent Pending.
The following disclaimer is required by Paradigm Matrix:
Portions of this software licensed from Paradigm Matrix.
The following disclaimer is required by Ray Sauers Associates, Inc.:
“Install-It” is licensed from Ray Sauers Associates, Inc. End-User is prohibited from taking any action to derive a source code
equivalent of “Install-It,” including by reverse assembly or reverse compilation, Ray Sauers Associates, Inc. shall in no event be
liable for any damages resulting from reseller’s failure to perform reseller’s obligation; or any damages arising from use or
operation of reseller’s products or the software; or any other damages, including but not limited to, incidental, direct, indirect,
special or consequential Damages including lost profits, or damages resulting from loss of use or inability to use reseller’s
products or the software for any reason including copyright or patent infringement, or lost data, even if Ray Sauers Associates
has been advised, knew or should have known of the possibility of such damages.
The following disclaimer is required by Videomedia, Inc.:
“Videomedia, Inc. makes no warranties whatsoever, either express or implied, regarding this product, including warranties with
respect to its merchantability or its fitness for any particular purpose.”
“This software contains V-LAN ver. 3.0 Command Protocols which communicate with V-LAN ver. 3.0 products developed by
Videomedia, Inc. and V-LAN ver. 3.0 compatible products developed by third parties under license from Videomedia, Inc. Use
of this software will allow “frame accurate” editing control of applicable videotape recorder decks, videodisc recorders/players
and the like.”
The following disclaimer is required by Altura Software, Inc. for the use of its Mac2Win software and Sample
Source Code:
Avid Interplay contains components licensed from LavanTech. These components may only be used as part of and in
connection with Avid Interplay.
Attn. Government User(s). Restricted Rights Legend
U.S. GOVERNMENT RESTRICTED RIGHTS. This Software and its documentation are “commercial computer software” or
“commercial computer software documentation.” In the event that such Software or documentation is acquired by or on behalf
of a unit or agency of the U.S. Government, all rights with respect to this Software and documentation are subject to the terms
of the License Agreement, pursuant to FAR §12.212(a) and/or DFARS §227.7202-1(a), as applicable.
Trademarks
888 I/O, Adrenaline, AirPlay, AirSPACE, AirSPACE HD, AirSpeed, ALEX, Alexis, AniMatte, AudioSuite, AudioVision, AutoSync,
Avid, Avid Advanced Response, Avid DNA, Avid DNxcel, Avid DNxHD, AVIDdrive, AVIDdrive Towers, Avid DS Assist Station,
Avid ISIS, Avid Learning Excellerator, Avid Liquid, Avid Mojo, AvidNet, AvidNetwork, Avid Remote Response, AVIDstripe,
Avid Unity, Avid Unity ISIS, Avid Xpress, AVoption, AVX, CamCutter, CaptureManager, ChromaCurve, ChromaWheel,
Conectiv, CountDown, DAE, Dazzle, Deko, DekoCast, D-Fi, D-fx, DigiDelivery, Digidesign, Digidesign Audio Engine,
Digidesign Intelligent Noise Reduction, DigiDrive, Digital Nonlinear Accelerator, DigiTranslator, DINR, DNxchange, do more,
D-Verb, Equinox, ExpertRender, Face Robot, FACE ROBOT, FastBreak, Fast Track, FieldPak, Film Composer, FilmScribe,
FluidMotion, FXDeko, HIIP, HyperSPACE, HyperSPACE HDCAM, IllusionFX, Image Independence, iNEWS,
iNEWS ControlAir, Instinct, Interplay, Intraframe, iS9, iS18, iS23, iS36, ISIS, IsoSync, LaunchPad, Lightning, Lo-Fi,
Magic Mask, make manage move | media, Marquee, Matador, Maxim, MCXpress, Media Browse, Media Composer,
MediaDock, MediaDock Shuttle, Media Fusion, Media Illusion, MediaLog, Media Reader, Media Recorder, MEDIArray,
MediaShare, MediaStream, Meridien, MetaFuze, MetaSync, MicroTrack, MissionControl, MovieBox, NaturalMatch, Nearchive,
NetReview, NewsCutter, Nitris, OMF, OMF Interchange, OMM, Open Media Framework, Open Media Management, PCTV,
Pinnacle MediaSuite, Pinnacle Studio, Pinnacle Systems, ProEncode, Pro Tools, QuietDrive, Recti-Fi, RetroLoop, rS9, rS18,
SalesView, Sci-Fi, ScriptSync, SecureProductionEnvironment, Show Center, Softimage, Sound Designer II, SPACE,
SPACEShift, SpectraGraph, SpectraMatte, Sputnik, SteadyGlide, SubCap, Symphony, SynchroScience, TARGA, Thunder,
Thunder Station, TimeLiner, Torq, Trilligent, UnityRAID, Vari-Fi, Velvet, Video RAID, Video Slave Driver, VideoSPACE, Xdeck,
and X-Session are either registered trademarks or trademarks of Avid Technology, Inc. in the United States and/or other
countries.
Adobe and Photoshop are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States
and/or other countries. Apple and Macintosh are trademarks of Apple Computer, Inc., registered in the U.S. and other
countries. Windows is either a registered trademark or trademark of Microsoft Corporation in the United States and/or other
countries. All other trademarks contained herein are the property of their respective owners.
GOT FOOTAGE?
Editors — Filmmakers — Special Effects Artists — Game Developers — Animators — Educators — Broadcasters — Content
creators of every genre — Just finished an incredible project and want to share it with the world?
Send us your reels and we may use your footage in our show reel or demo!*
For a copy of our release and Avid’s mailing address, go to www.avid.com/footage.
*Note: Avid cannot guarantee the use of materials submitted.
Avid Interplay Engine Failover Guide • 0130-07643-02 Rev C • June 2008 • This document is distributed by
Avid in online (electronic) form only, and is not available for purchase in printed form.
Congratulations on the purchase of your Avid® Interplay™, a powerful system for managing
media in a shared storage environment.
This guide is intended for all Avid Interplay administrators who are responsible for
installing, configuring, and maintaining an Avid Interplay Engine with the Automatic Server
Failover module integrated.
n   The documentation describes the features and hardware of all models. Therefore, your
system might not contain certain features and hardware that are covered in the
documentation.
Symbols and Conventions
Avid documentation uses the following symbols and conventions:
Symbol or Convention        Meaning or Action

n                           A note provides important related information, reminders,
                            recommendations, and strong suggestions.

c                           A caution means that a specific action you take could cause
                            harm to your computer or cause you to lose data.

w                           A warning describes an action that could cause you physical
                            harm. Follow the guidelines in this document or on the unit
                            itself when handling electrical equipment.

>                           This symbol indicates menu commands (and subcommands) in
                            the order you select them. For example, File > Import means
                            to open the File menu and then select the Import command.

(single arrow)              This symbol indicates a single-step procedure. Multiple
                            arrows in a list indicate that you perform one of the
                            actions listed.

(Windows), (Windows only),  This text indicates that the information applies only to the
(Macintosh), or             specified operating system, either Windows or
(Macintosh only)            Macintosh OS X.

Bold font                   Bold font is primarily used in task instructions to identify
                            user interface items and keyboard sequences.

Italic font                 Italic font is used to emphasize certain words and to
                            indicate variables.

Courier Bold font           Courier Bold font identifies text that you type.

Ctrl+key or mouse action    Press and hold the first key while you press the last key or
                            perform the mouse action. For example, Command+Option+C
                            or Ctrl+drag.

If You Need Help

If you are having trouble using your Avid product:

1. Retry the action, carefully following the instructions given for that task in this guide.
   It is especially important to check each step of your workflow.

2. Check the latest information that might have become available after the documentation
   was published:

   - If the latest information for your Avid product is provided as printed release notes,
     they ship with your application and are also available online.

   - If the latest information for your Avid product is provided as a ReadMe file, it is
     supplied on your Avid installation CD or DVD as a PDF document
     (README_product.pdf) and is also available online.

   You should always check online for the most up-to-date release notes or ReadMe
   because the online version is updated whenever new information becomes available.
   To view these online versions, select ReadMe from the Help menu, or visit the
   Knowledge Base at www.avid.com/readme.
3. Check the documentation that came with your Avid application or your hardware for
maintenance or hardware-related issues.
4. Visit the online Knowledge Base at www.avid.com/onlinesupport. Online services are
available 24 hours per day, 7 days per week. Search this online Knowledge Base to find
answers, to view error messages, to access troubleshooting tips, to download updates,
and to read or join online message-board discussions.
Viewing User Documentation on the Interplay Portal

You can quickly access the Interplay user documentation from any system in the Interplay
environment. Type the following line in your Web browser:

http://Interplay_Engine_name

where Interplay_Engine_name is the name of the computer running the Interplay Engine
software. For example, the following line opens the portal Web page on a system named
DocWG:

http://DocWG

• Click the “Avid Interplay User Documentation” link to access the User Information
  Center page. On this page, select the Avid Interplay Framework User’s Guide from the
  list of user’s guides.
Accessing the Online Library
The Avid Interplay Online Library DVD contains all the Avid Interplay product
documentation in PDF format. The Online Library includes a Master Glossary of all
specialized terminology used in the documentation for Avid products.

Most Avid online libraries also include multimedia content such as feature presentations.
This multimedia content is an excellent first resource for learning how to use your
application or for helping you understand a particular feature or workflow.

n   You need Adobe® Reader® to view the documentation online. You can download the
latest version from the Adobe web site.
To access the online library from the Online Library DVD:
1. Insert the Online Library DVD into the drive.
2. Double-click the Mainmenu file.
How to Order Documentation
To order additional copies of this documentation from within the United States, call Avid
Sales at 800-949-AVID (800-949-2843). If you are placing an order from outside the United
States, contact your local Avid representative.
Avid Training Services
Avid makes lifelong learning, career advancement, and personal development easy and
convenient. Avid understands that the knowledge you need to differentiate yourself is always
changing, and Avid continually updates course content and offers new training delivery
methods that accommodate your pressured and competitive work environment.
To learn about Avid’s new online learning environment, Avid Learning Excellerator™
(ALEX), visit http://learn.avid.com.

For information on courses/schedules, training centers, certifications, courseware, and
books, please visit www.avid.com/training or call Avid Sales at 800-949-AVID
(800-949-2843).
1 Automatic Server Failover Introduction
This chapter covers the following topics:
• Server Failover Overview
• How Server Failover Works
• Installing the Failover Hardware Components
• Clustering Terminology
Server Failover Overview
The automatic server failover mechanism in Avid Interplay maintains client access to the
Interplay Engine in the event of failures or during maintenance, with minimal impact on
availability. A failover server is activated in the event of application, operating system, or
hardware failures. The server can be configured to notify the administrator of such
failures by email.
c   Additional monitoring of the hardware and software components of a high-availability
solution is always required. Avid delivers Interplay preconfigured, but additional
attention on the customer side is required to prevent outages (for example, when a
private network fails, a RAID disk fails, or a power supply loses power). In a
mission-critical environment, monitoring tools and tasks are needed to ensure there are
no silent outages. If an unmonitored component fails, only an event is generated;
although this does not interrupt availability, it might go unnoticed and lead to
problems. Additional software that reports such issues to the IT administration lowers
the risk of downtime.
The failover cluster is a system made up of two server nodes and a shared-storage device
connected over Fibre Channel. Because both nodes share access to the storage device, they
must be deployed in the same location. The cluster uses the concept of virtual servers to
specify groups of resources that fail over together.
The following diagram illustrates the failover cluster architecture for an Avid Unity
MediaNetwork environment. (This diagram and the next diagram do not show the complete
Interplay environment. For more detailed illustrations, see Avid Interplay Best Practices.)
[Figure: Two-node cluster in an Avid Unity MediaNetwork environment. Two Interplay
Engine cluster nodes connect to an Infortrend cluster shared-storage RAID array over Fibre
Channel through a Fibre switch, to the intranet and the Interplay clients through a network
switch over 1 GB Ethernet, and to each other through a private network for the heartbeat.]

The following diagram illustrates the failover cluster architecture for an Avid Unity ISIS
environment. In this environment, each cluster node is “dual-connected” to the network
switch: one network interface is connected to the VLAN 10 subnet and the other is
connected to the VLAN 20 subnet.

[Figure: Two-node cluster in an Avid Unity ISIS environment. Two Interplay Engine cluster
nodes connect to an Infortrend cluster shared-storage RAID array over Fibre Channel, to the
Avid network switch (VLAN 10 and VLAN 20 subnets, intranet, and Interplay clients) over
1 GB Ethernet, and to each other through a private network for the heartbeat.]
How Server Failover Works

If you are already using clusters, the Avid Interplay Engine will not interfere with your
existing clusters.

When the Microsoft® Windows® cluster service is running on the machines and the server is
deployed in cluster mode, the Interplay Engine and its accompanying services are exposed to
users as a virtual server. To clients, however, connecting to the clustered virtual Interplay
Engine appears to be the same process as connecting to a single, physical machine. The user
or client application does not know which node is actually hosting the virtual server.

When the server is online, the resource monitor regularly checks its availability and
automatically restarts the server or initiates a failover to the other node if a failure is
detected. The exact behavior can be configured using the Windows Cluster Administrator
console. Because clients connect to the virtual network name and IP address, which are also
taken over by the failover node, the impact on the availability of the server is minimal.

The following diagram illustrates the components of a cluster group, including sample IP
addresses. For a list of required IP addresses and node names, see “List of IP Addresses and
Network Names” on page 30.
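The restart-then-fail-over policy described in this section can be illustrated with a small
sketch. This is not the Windows cluster service or any Avid API, only a toy model of the
behavior; the class names and the virtual server name "IEWG" are invented for illustration.

```python
# Conceptual sketch only: models the policy (restart locally, else move the
# group to the other node), not the actual Windows clustering implementation.

class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True

class VirtualServer:
    """Clients see one virtual name; the cluster decides which node hosts it."""

    def __init__(self, name, nodes, max_restarts=1):
        self.name = name
        self.nodes = nodes
        self.active = 0                # index of the node currently hosting the server
        self.max_restarts = max_restarts

    def host(self):
        return self.nodes[self.active].name

    def health_check(self):
        """One pass of the resource monitor: restart locally, else fail over."""
        node = self.nodes[self.active]
        if node.healthy:
            return "ok"
        for _ in range(self.max_restarts):
            # try to restart the server process on the same node first
            if self._restart(node):
                return "restarted"
        # restart failed: move the group (name, IP, disks) to the other node
        self.active = (self.active + 1) % len(self.nodes)
        return "failed over to " + self.host()

    def _restart(self, node):
        return node.healthy            # in this toy model a dead node stays dead

engine = VirtualServer("IEWG", [Node("node-a"), Node("node-b")])
print(engine.host())                   # node-a
engine.nodes[0].healthy = False
print(engine.health_check())           # failed over to node-b
print(engine.host())                   # node-b
```

Because the clients address the virtual name ("IEWG" here), the move from node-a to
node-b is invisible to them, which is the point the section makes about minimal impact on
availability.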
Server Failover Requirements
You should make sure the server failover system meets the following requirements.
Hardware
A dual-server failover cluster-capable system with an Infortrend® cluster shared-storage
RAID disk set is needed. The automatic server failover system was developed on and tested
with the following:
• Intel Server Chassis SR2500 Packaged Cluster, which is the recommended hardware

The servers in a cluster are connected using one or more cluster shared-storage buses and
one or more physically independent networks acting as a heartbeat.
Server Software
Two licenses of Windows Server 2003 Enterprise Edition or Windows Server 2003
Datacenter Edition are needed.
Space Requirements
The default disk configuration for the cluster shared RAID array is as follows:
• Quorum disk - 4 GB
• MSDTC disk - 5 GB
• Database disk - 925 GB or larger
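Taken together, the default layout implies a minimum usable capacity for the shared RAID
array. A quick sketch of the arithmetic (partition sizes from the list above; the variable
names are invented):

```python
# Default cluster shared RAID array layout from the list above (sizes in GB).
default_layout = {
    "Quorum disk (Q:)": 4,
    "MSDTC disk": 5,
    "Database disk": 925,   # "925 GB or larger"
}

# Minimum usable capacity the array must provide for the default layout.
minimum_capacity_gb = sum(default_layout.values())
print(minimum_capacity_gb)  # 934
```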
Antivirus Software
You can run antivirus software on a cluster, if the antivirus software is cluster-aware. For
information about cluster-aware versions of your antivirus software, contact the antivirus
vendor. If you are running antivirus software on a cluster, make sure you exclude these
locations from the virus scanning: Q:\ (Quorum disk), C:\Windows\Cluster, and
S:\Workgroup_Databases (database).
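As a sketch of what this exclusion list means in practice, the following hypothetical helper
(not part of any antivirus product's API) treats any file under one of the three locations as
off-limits to scanning:

```python
from pathlib import PureWindowsPath

# Locations the text above says to exclude from virus scanning on a cluster.
EXCLUDED = [
    PureWindowsPath("Q:/"),                    # Quorum disk
    PureWindowsPath("C:/Windows/Cluster"),
    PureWindowsPath("S:/Workgroup_Databases"), # database
]

def should_scan(path: str) -> bool:
    """Return False for any file under an excluded location."""
    p = PureWindowsPath(path)
    return not any(p.is_relative_to(ex) for ex in EXCLUDED)

print(should_scan(r"S:\Workgroup_Databases\db.dat"))  # False
print(should_scan(r"C:\Users\avid\report.doc"))       # True
```

The real configuration is done in the antivirus product itself; this only shows which paths
the exclusions are meant to cover.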
Functions You Need To Know
Before you set up a cluster in an Avid Interplay environment, you should be familiar with the
following functions:
• Microsoft Windows Active Directory domains and domain users
• Microsoft Windows clustering (the current version, as there are changes from the prior
  version)
• Disk configuration (format, partition, naming)
• Network configuration
Installing the Failover Hardware Components
A failover cluster system includes the following components:

• Two Interplay Engine nodes or two Interplay Archive nodes (two SR2400 or two
  SR2500 servers)
• One Infortrend cluster shared-storage RAID array

The following topics provide information about installing the failover hardware
components:

• “SR2400 Slot Locations” on page 17
• “SR2500 Slot Locations” on page 18
• “Making Failover Cluster Connections in an Avid Unity ISIS Environment” on page 19
• “Failover Cluster Connections in an Avid Unity ISIS Environment (SR2400)” on page 20
• “Failover Cluster Connections in an Avid Unity ISIS Environment (SR2500)” on page 21
• “Making Failover Cluster Connections in an Avid Unity MediaNetwork Environment”
  on page 22
• “Failover Cluster Connections in an Avid Unity MediaNetwork Environment (SR2400)”
  on page 23
• “Failover Cluster Connections in an Avid Unity MediaNetwork Environment (SR2500)”
  on page 24
SR2400 Slot Locations

The SR2400 is supported as a server for the Interplay applications. This section describes
the slot locations that are specific to the Interplay components in a cluster configuration.
Use the following figure and table as guides to configuring your SR2400 systems in an
Interplay cluster environment.

n   On the SR2400, all boards must be installed starting in the top slot, and the second board
must be in the middle slot. The second board cannot be in the bottom slot with the middle
slot left open.

[Figure: SR2400 back panel. Callouts identify the mouse, keyboard, video, USB, and RJ-45
to serial B connectors, serial A (to the Fibre Channel switch if needed), SCSI B, the power
supply, the two 1 GB Ethernet ports, PCI slots 1-3, and the unused small form factor slots.]

SR2400 Back Panel Configuration for Avid Unity Environment

Slot   Avid Unity ISIS      Avid Unity MediaNetwork
3      Intel Pro 1000MT     ATTO
2      QLogic® Card         QLogic Card
1      Empty                Intel Pro 1000MT (a)

a. Unity MediaNetwork environment: the Pro 1000MT card is shipped in slot 3 (top). You
must move the card to slot 1 (bottom) and install the ATTO card in slot 3 (top). The
Pro 1000MT is not used in a Unity MediaNetwork environment.
SR2500 Slot Locations

The SR2500 is supported as a server for the Interplay applications. This section describes
the slot locations that are specific to the Interplay components in a cluster configuration.
Use the following figure and table as guides to configuring your SR2500 systems in an
Interplay cluster environment.

[Figure: SR2500 back panel. Callouts identify the mouse, keyboard, video, USB, and RJ-45
to serial B connectors, serial A (to the Fibre Channel switch if needed), the power supplies
(primary power supply on the bottom), the two 1 GB Ethernet ports, PCI-X slots 1-3, and
PCIe slots 1-2 (small form factor).]

n   It is important to match the slot locations in the following tables because they match the
order that the drivers are loaded on the SR2500 Recovery DVDs.

SR2500 Back Panel Configuration for Avid Unity Environment

PCI-X Slot   Avid Unity ISIS     Avid Unity MediaNetwork
2            Empty               Empty
1            QLogic Card (a)     QLogic Card (a)

PCIe Slot    Avid Unity ISIS     Avid Unity MediaNetwork
2            Intel Pro 1000PT    Intel Pro 1000PT
1            Empty               Empty

a. The SR2500 server might ship with the QLogic card in PCI-X slot 2 (middle). You must
move the QLogic card to PCI-X slot 1 (bottom), because this configuration matches the
order that the drivers are loaded on the SR2500 Recovery DVDs.
Making Failover Cluster Connections in an Avid Unity ISIS Environment

Make the following cable connections to add a failover cluster to an Avid Unity ISIS
environment:

• First cluster node:
  - Left on-board network interface connector to ISIS left subnet (VLAN 10)
  - Right on-board network interface connector to ISIS right subnet (VLAN 20)
  - QLogic card connector to RAID array, Fibre Channel 1 left connector
• Second cluster node:
  - Left on-board network interface connector to ISIS left subnet (VLAN 10)
  - Right on-board network interface connector to ISIS right subnet (VLAN 20)
  - QLogic card connector to RAID array, Fibre Channel 0 left connector
• Right connector on the PCI adapter network interface in the first cluster node to the
  right connector on the PCI adapter network interface in the second cluster node
  (private network for heartbeat)
• All switches on the cluster shared-storage RAID array are in the default “enable”
  position (left)

For more details, see the illustrations in:

• “Failover Cluster Connections in an Avid Unity ISIS Environment (SR2400)” on page 20.
• “Failover Cluster Connections in an Avid Unity ISIS Environment (SR2500)” on page 21.
Failover Cluster Connections in an Avid Unity ISIS Environment (SR2400)

The following illustration shows the required cable connections when adding a failover
cluster in an Avid Unity ISIS environment (SR2400 servers). For a description of the
connections, see “Making Failover Cluster Connections in an Avid Unity ISIS Environment”
on page 19.

[Figure: Two SR2400 back panels (Interplay Engine cluster nodes). On each node, the left
on-board network interface connects to the ISIS left subnet and the right on-board network
interface to the ISIS right subnet (1 GB Ethernet); the QLogic card connects over Fibre
Channel to the cluster shared-storage RAID array (Fibre Channel 1 left connector from the
first node, Fibre Channel 0 left connector from the second; FC CH0/FC CH1); the right
connectors of the PCI adapter network interfaces on the two nodes are cabled together as
the private network for the heartbeat. All switches on the RAID array are set to the default
“enabled” (left) position.]
Failover Cluster Connections in an Avid Unity ISIS Environment (SR2500)

The following illustration shows the required cable connections when adding a failover
cluster in an Avid Unity ISIS environment (SR2500 servers). For a description of the
connections, see “Making Failover Cluster Connections in an Avid Unity ISIS Environment”
on page 19.

[Figure: Two SR2500 back panels (Interplay Engine cluster nodes). On each node, the left
on-board network interface connects to the ISIS left subnet and the right on-board network
interface to the ISIS right subnet (1 GB Ethernet); the QLogic card connects over Fibre
Channel to the cluster shared-storage RAID array (Fibre Channel 1 left connector from the
first node, Fibre Channel 0 left connector from the second; FC CH0/FC CH1); the right
connectors of the PCI adapter network interfaces on the two nodes are cabled together as
the private network for the heartbeat. All switches on the RAID array are set to the default
“enabled” (left) position.]
Making Failover Cluster Connections in an Avid Unity MediaNetwork Environment

The Interplay Engine cluster nodes (SR2400 servers) ship with an Intel Pro 1000MT card
in slot 3 (top). You need to move this card to slot 1 (bottom). Then add an ATTO host bus
adapter in slot 3 (top).

Make the following cable connections to add a failover cluster to a Unity MediaNetwork
environment:

• First cluster node:
  - Left on-board network interface connector to Ethernet® public network
  - QLogic card connector to RAID array, Fibre Channel 1 left connector
  - ATTO card connector to Unity MediaNetwork FC switch
• Second cluster node:
  - Left on-board network interface connector to Ethernet public network
  - QLogic card connector to RAID array, Fibre Channel 0 left connector
  - ATTO card connector to Unity MediaNetwork FC switch
• Right on-board network interface connector on the first cluster node to the right
  on-board network interface connector on the second cluster node (private network for
  heartbeat)
• All switches on the cluster shared-storage RAID array are in the default “enable”
  position (left)

For more details, see the illustrations in:

• “Failover Cluster Connections in an Avid Unity MediaNetwork Environment (SR2400)”
  on page 23.
• “Failover Cluster Connections in an Avid Unity MediaNetwork Environment (SR2500)”
  on page 24.
1 Automatic Server Failover Introduction
Failover Cluster Connections in an Avid Unity MediaNetwork
Environment (SR2400)

The following illustration shows the required cable connections when adding a failover cluster in an Avid Unity MediaNetwork environment (SR2400 servers). For a description of the connections, see "Making Failover Cluster Connections in an Avid Unity MediaNetwork Environment" on page 22.

[Illustration: SR2400 back panels of the two Interplay Engine cluster nodes. On each node, the ATTO card connects to the MediaNetwork FC switch; the QLogic card connects to the cluster shared-storage RAID array (Fibre Channel 1 left connector from the first node, Fibre Channel 0 left connector from the second node); the left on-board network interface connects to the Ethernet public network; the right on-board network interfaces are cabled to each other as the private network for the heartbeat; the PCI adapter network interfaces are not used. All switches on the RAID array are set to the default "enabled" left position. Legend: Fibre connection; 1 GB Ethernet connection.]
Failover Cluster Connections in an Avid Unity MediaNetwork
Environment (SR2500)

The following illustration shows the required cable connections when adding a failover cluster in an Avid Unity MediaNetwork environment (SR2500 servers). For a description of the connections, see "Making Failover Cluster Connections in an Avid Unity MediaNetwork Environment" on page 22.

[Illustration: SR2500 back panels of the two Interplay Engine cluster nodes, with the ATTO card in slot 3 and the PCI adapter network interface in slot 2 not used. On each node, the ATTO card connects to the MediaNetwork FC switch; the QLogic card connects to the cluster shared-storage RAID array (Fibre Channel 1 left connector from the first node, Fibre Channel 0 left connector from the second node); the left on-board network interface connects to the Ethernet public network; the right on-board network interfaces are cabled to each other as the private network for the heartbeat. All switches on the RAID array are set to the default "enabled" left position. Legend: Fibre connection; 1 GB Ethernet connection.]
Clustering Terminology

Clustering is not always straightforward, so it is important that you become familiar with the terminology of server clusters before you start. A good source of information is the Microsoft Technology Center for Clustering Services.

Here is a brief summary of the major concepts and terms:
• Nodes: Individual computers in a cluster configuration.
• Cluster service: The group of components on each node that performs cluster-specific activity.
• Resource: Cluster components (hardware and software) that are managed by the cluster service. Resources are physical hardware devices, such as disk drives, and logical items, such as IP addresses and applications.
• Online resource: A resource that is available and is providing its service.
• Quorum resource: A special common cluster resource. This resource plays a critical role in cluster operations.
• Resource group: A collection of resources that are managed by the cluster service as a single, logical unit.
2 Automatic Server Failover Installation

This chapter describes the process of configuring automatic server failover. It is crucial that you follow the instructions in this chapter completely; otherwise, the automatic server failover will not work.
This chapter covers the following topics:
• Server Failover Installation Overview
• Before You Begin the Server Failover Installation
• Preparing the Server for the Cluster Service
• Configuring the Cluster Service
• Configuring Rules for the Cluster Networks
• After Setting Up the Cluster
• Installing the Distributed Transaction Coordinator
• Installing the Interplay Engine
• Disabling Any Web Servers
• Installing the Interplay Engine on the First Node
• Installing the Interplay Engine on the Second Node
• Bringing the Interplay Engine Online
• Testing the Complete Installation
• Updating a Clustered Installation (Rolling Upgrade)
• Uninstalling the Interplay Engine on a Clustered System
Server Failover Installation Overview
Installation and configuration of the automatic server failover consists of the following
major tasks:
• Make sure that the network is correctly set up and that you have reserved IP host names and static IP addresses (see "Before You Begin the Server Failover Installation" on page 28).
• Prepare the servers for the cluster service (see "Preparing the Server for the Cluster Service" on page 31). This includes configuring the nodes for the network and formatting the drives.
• Configure the cluster service (see "Configuring the Cluster Service" on page 41, "Configuring Rules for the Cluster Networks" on page 49, and "After Setting Up the Cluster" on page 52).
• Install the Distributed Transaction Coordinator (MSDTC group) (see "Installing the Distributed Transaction Coordinator" on page 56).
• Install the Interplay Engine on both nodes (see "Installing the Interplay Engine" on page 62).
• Test the complete installation (see "Testing the Complete Installation" on page 82).

n   Do not install any other software on the cluster machines except the Interplay Engine. For example, Media Indexer software needs to be installed on a different server. For complete installation instructions, see the Avid Interplay Software Installation and Configuration Guide.

For more details about server clusters, see the Microsoft document "Guide to Creating and Configuring a Server Cluster under Windows Server 2003," available on the Microsoft website.
Before You Begin the Server Failover Installation

Before you begin the installation process, you need to do the following:
• Make sure that the facility has a network that is qualified to run Active Directory and DNS services.
• Determine the subnet mask, the gateway, DNS, and WINS server addresses on the network.
• Install and set up an Avid Unity client on both servers. See the Avid Unity MediaNetwork File Manager Setup Guide or the Avid Unity ISIS System Setup Guide.
• Create or select two domain user accounts:
   - Cluster Service Account (Server Execution User): Create or select an account (sometimes called the cluster user account) that is used to start the cluster service and is also used by the Interplay Engine service. This account must be a domain user, and it must have a unique name that will not be used for any other purpose. The procedures in this document use sqauser as an example of a Cluster Service Account. This account is automatically added to the Local Administrators group on each node by the Interplay Engine software during the installation process.
     The Server Execution User is critical to the operation of the Interplay Engine. If necessary, you can change the name of the Server Execution User after the installation. For more information, see "Troubleshooting the Server Execution User Account" and "Re-creating the Server Execution User" in the Avid Interplay Engine and Avid Interplay Archive Engine Administration Guide and the Interplay ReadMe.
     For information on creating a cluster user account, see the Microsoft document "Guide to Creating and Configuring a Server Cluster under Windows Server 2003."
   - Cluster Installation Account: Create or select a user account to use during the installation process. This user account must be a domain user account with privileges to add servers to the domain. This user account is required only during the installation of the cluster.
n   Do not use the same user name and password for the Cluster Service Account and the Cluster Installation Account. These accounts have different functions and require different privileges.
• Create an Avid Unity user account with read and write privileges. This account is not needed for the installation of the Interplay Engine, but is required for the operation of the Interplay Engine. The user name and password must match the user name and password of the Cluster Service Account.
• Make sure the network includes an Active Directory domain before you install or configure the cluster.
• Reserve static IP addresses for all network interfaces and host names. See "List of IP Addresses and Network Names" on page 30.
List of IP Addresses and Network Names

You need to reserve IP host names and static IP addresses on the in-network DNS server before you begin the installation process. An Avid Unity ISIS environment needs 8 IP addresses and an Avid Unity MediaNetwork environment needs 5 IP addresses. The following table provides a list of example names that you can use when configuring the cluster. The procedures in this chapter use these example names.
n   Make sure that these IP addresses are outside of the range that is available to DHCP so they cannot automatically be assigned to other machines.

n   If your Active Directory domain or DNS includes more than one cluster, to avoid conflicts, you need to make sure the cluster names, MSDTC names, and IP addresses are different for each cluster.

n   All names must be valid and unique network host names.
Required IP Addresses and Node Names

First Cluster Node
   Item Required: 1 host name; 2 ISIS IP addresses - public (one for left and one for right), or 1 MediaNetwork IP address - public; 1 IP address - private (heartbeat)
   Example Name: SECLUSTER1
   Where Used: See "Configuring the Cluster Service on the First Node" on page 42 and "Creating a Resource Group for the Distributed Transaction Coordinator" on page 57.

Second Cluster Node
   Item Required: 1 host name; 2 ISIS IP addresses - public (one for left and one for right), or 1 MediaNetwork IP address - public; 1 IP address - private (heartbeat)
   Example Name: SECLUSTER2
   Where Used: See "Configuring the Cluster Service on the Second Node" on page 47 and "Creating a Resource Group for the Distributed Transaction Coordinator" on page 57.
Required IP Addresses and Node Names (Continued)

Cluster service
   Item Required: 1 network name (virtual host name); 1 ISIS IP address or 1 MediaNetwork IP address
   Example Name: SECLUSTER
   Where Used: See "Configuring the Cluster Service on the First Node" on page 42.

MSDTC service (Distributed Transaction Coordinator)
   Item Required: 1 network name (virtual host name); 1 ISIS IP address or 1 MediaNetwork IP address
   Example Name: CLUSTERMSDTC
   Where Used: See "Assigning a Network Name to the MSDTC Group" on page 59.

Interplay Engine service
   Item Required: 1 network name (virtual host name); 2 ISIS IP addresses - public (one for left and one for right), or 1 MediaNetwork IP address - public
   Example Name: SEENGINE
   Where Used: See "Specifying the Interplay Engine Details" on page 66 and "Specifying the Interplay Engine Name" on page 68.
Preparing the Server for the Cluster Service

Before you configure the cluster service, you need to complete the tasks in the following procedures:
• "Setting the QLogic HBA Link Speed" on page 32
• "Renaming the Local Area Network Interface on Each Node" on page 33
• "Configuring the Private Network Adapter on Each Node" on page 35
• "Configuring the Binding Order Networks on Each Node" on page 38
• "Configuring the Public Network Adapter on Each Node" on page 40
• "Joining Both Servers to the Active Directory Domain" on page 40
• "Configuring the Cluster Shared-Storage RAID Disks on Each Node" on page 40
Setting the QLogic HBA Link Speed
To avoid possible problems with the Infortrend RAID array, Avid recommends that you
change the QLogic HBA link speed (data rate) from the default setting to 2 Gbps. You need
to specify this setting on both the SR2400 server and the SR2500 server. Change the setting
by using the SAN Surfer utility on both nodes.
To set the QLogic HBA link speed:
1. On the first node, click Start, and select Programs > QLogic Management Suite > SAN Surfer.
   The SAN Surfer FC HBA Manager dialog box opens.
2. In the left pane, select Port 1.
3. Click the Settings tab.
4. In the HBA Port Settings section, click the arrow for the Data Rate list and change the default setting from Auto to 2 Gbps.
5. Click Save.
6. When prompted for a password, enter config and click OK.
7. Repeat steps 1 through 6 on the other node.
8. Verify that the SAN Surfer data rate is set to 2 Gbps on both nodes.
Renaming the Local Area Network Interface on Each Node
You need to rename the LAN interface on each node to appropriately identify each network.
Although you can use any name for the network connections, Avid suggests that you use the
naming conventions provided in the table in the following procedure.
Make sure you use the same name on both nodes. The names and network connections on
both nodes must match.
To rename the local area network connections:
1. Open the Network Connections window.
   a. Click Start and select Control Panel.
   b. Right-click Network Connections, and select Open.
   The Network Connections window opens.
2. Right-click one of the listed network connections and select Rename.
You need to match the numbered connection with the appropriate device. For example,
you can start by determining which connection refers to the left on-board network
interface and select that connection.
[Illustration: SR2400 back view, identifying the right and left on-board network interfaces and the right and left PCI adapter network interfaces. The illustration shows an SR2400 in an Avid Unity ISIS environment.]

c   Both nodes must use identical network interface names. Although you can use any name for the network connections, Avid suggests that you use the naming conventions provided in the following table.
3. Depending on your Avid Unity network and the device you selected, type a new name
for the network connection and press Enter.
Use the following illustration and table for reference. The illustration uses an SR2400 in
an Avid Unity ISIS environment as an example.
Naming Network Connections

Left on-board network interface
   Avid Unity ISIS new name: Left-subnet number
   MediaNetwork new name: Public
   Comment: ISIS - Public network. Use the subnet number of the interface; the examples in this document use Left-74. MediaNetwork - Public network.

Right on-board network interface
   Avid Unity ISIS new name: Right-subnet number
   MediaNetwork new name: Private
   Comment: ISIS - Public network. Use the subnet number of the interface; the examples in this document use Right-75. MediaNetwork - Private network used for heartbeat between the two servers in the cluster.

Left PCI adapter network interface
   Avid Unity ISIS new name: Not used
   MediaNetwork new name: Not used
   Comment: Disabled.

Right PCI adapter network interface
   Avid Unity ISIS new name: Private
   MediaNetwork new name: Not used
   Comment: ISIS - Private network used for heartbeat between the two servers in the cluster. MediaNetwork - Disabled.

4. Repeat steps 2 and 3 for each network connection.
   The following Network Connections window shows the new names used in an Avid Unity ISIS environment.
5. Close the Network Connections window.
Configuring the Private Network Adapter on Each Node
To configure the private network adapter for the heartbeat connection:
1. Open the Network Connections window.
2. Right-click the Private network connection and select Properties.
The Private Properties dialog box opens.
3. On the General tab, click the Internet Protocol (TCP/IP) check box. Make sure all other
components are unchecked.
4. Select Internet Protocol (TCP/IP) and click Properties.
The Internet Protocol (TCP/IP) Properties dialog box opens.
5. On the General tab of the Internet Protocol (TCP/IP) Properties dialog box:
   a. Select "Use the following IP address."
   b. IP address: type the IP address for the Private network connection for the node you are configuring. See "List of IP Addresses and Network Names" on page 30.

n   When performing this procedure on the second node in the cluster, make sure you use the static private IP address for that node. In this example, use 192.168.100.2.

   c. Subnet mask: type the subnet mask address.

n   Make sure you use a completely different IP address scheme from the one used for the public network.

   d. Make sure the "Default gateway" and "Use the following DNS server addresses" text boxes are empty.
6. Click Advanced.
The Advanced TCP/IP Settings dialog box opens.
7. On the DNS tab, make sure no values are defined and that the "Register this connection's addresses in DNS" and "Use this connection's DNS suffix in DNS registration" options are not selected.
8. On the WINS tab, do the following:
   - Make sure no values are defined in the WINS addresses area.
   - Uncheck "Enable LMHOSTS Lookup."
   - Select "Disable NetBIOS over TCP/IP."
9. Click OK.
   A message might be displayed stating "This connection has an empty primary WINS address. Do you want to continue?" Click Yes.
10. Repeat this procedure on the other node in the cluster, using the static private IP
addresses for that node.
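After both nodes are configured, the heartbeat path can be verified from the first node with standard command-line tools; a sketch using the example private addresses from this procedure:

```shell
rem Confirm the private adapter settings took effect (no gateway, no DNS).
ipconfig /all
rem Verify the heartbeat link; 192.168.100.2 is the second node's example
rem private address from this procedure.
ping 192.168.100.2
```

If the ping fails, recheck the crossover cabling between the right on-board interfaces and the static addresses entered in step 5.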
Configuring the Binding Order Networks on Each Node
Repeat this procedure on each node and make sure the configuration matches on both nodes.
To configure the binding order networks:
1. On one node, open the Network Connections window.
2. Select Advanced > Advanced Settings.
3. In the Connections area, use the arrow controls to position the network connections in the following order:
   - For an Avid Unity ISIS environment, use the following order, as shown in the illustration:
     - Right
     - Left
     - Private
     - Local Area Connection 4
   - For an Avid Unity MediaNetwork environment, use the following order:
     - Public
     - Private
4. Click OK.
5. Repeat this procedure on the other node and make sure the configuration matches on both nodes.
Configuring the Public Network Adapter on Each Node

Make sure you configure the IP address network interfaces for the Public Network Adapter as you normally would. For examples of public network settings, see "List of IP Addresses and Network Names" on page 30.

Joining Both Servers to the Active Directory Domain

After configuring the network information, join the two servers to the Active Directory domain. You can then use your domain credentials for the Cluster Installation Account (see "Before You Begin the Server Failover Installation" on page 28).
Configuring the Cluster Shared-Storage RAID Disks on Each Node
Both nodes must have the same configuration for the cluster shared-storage RAID disk.
When you configure the disks on the second node, make sure the disks match the disk
configuration you set up on the first node.
n   Before you create the partitions on the cluster nodes, make sure the cluster shared-storage RAID disks were pre-configured (mirror, stripe, etc.) by the vendor. Make sure the disks are Basic and not Dynamic.
To configure the disks on each node:
1. Shut down the server node you are not configuring at this time.
2. Open the Disk Management tool.
3. Initialize the disks, if not already initialized, by right-clicking the disk and selecting
Initialize Disk.
4. Use Quick Format to configure the disks as partitions, using the following names and drive letters:
   - Quorum (Q:) 4 GB
   - MSDTC (R:) 5 GB
   - Database (S:) 925 GB
The following illustration shows the required names and drive letters.
5. Verify you can access the disk and that it is working by creating a file and deleting it.
6. Shut down the first node and start the second node.
7. On the second node, assign drive letters and names. You do not need to format the disks.
   a. Open the Disk Management tool. Right-click the partition, select Change Drive Letter, and enter the appropriate letter. Repeat these actions for the other partitions.
   b. Open My Computer. Select a drive, right-click, select Rename, and enter the appropriate name. Repeat these actions for the other drives.
Configuring the Cluster Service
Take the following steps to configure the cluster service:
1. Turn off the second node.
2. Configure the first node using the New Server Cluster Wizard. See "Configuring the Cluster Service on the First Node" on page 42.
3. Validate the cluster service installation on the first node. See "Validating the Cluster Service on the First Node" on page 47.
4. Turn on the second node. Leave the first node turned on.
5. Configure the second node using the Add Nodes Wizard. See "Configuring the Cluster Service on the Second Node" on page 47.
Configuring the Cluster Service on the First Node
To configure the cluster service on the first node:
1. Turn off the server for the node you are not configuring at this time.
2. Make sure all storage devices are turned on.
3. Click Start and select All Programs > Administrative Tools > Cluster Administrator.
The Open Connection to Cluster dialog box opens.
4. Select “Create new cluster” from the Action menu.
5. Make sure you have the prerequisites to configure the cluster, as shown in the New
Server Cluster Wizard Welcome window.
6. Click Next.
7. In the Cluster Name and Domain dialog box, do the following:
   - Domain: select the name of your Active Directory domain.
   - Cluster name: type the Cluster service name, for example, SECLUSTER. See "List of IP Addresses and Network Names" on page 30.
8. Click Next.
The Select Computer dialog box opens.
n   You might be prompted for an account. If so, use a domain user account, such as the Cluster Installation Account referred to in "Before You Begin the Server Failover Installation" on page 28. Do not use the Cluster Service Account (Server Execution User).
9. In the Select Computer dialog box, in the Computer name text box, type the Cluster
node host name of the first node.
For example, use SECLUSTER1. See “List of IP Addresses and Network Names” on
page 30.
10. Click Advanced.
The Advanced Configuration Options dialog box opens.
11. Select Advanced (minimum) configuration, and click OK.
12. Click Next.
The setup process analyzes the node for hardware or software problems that might cause
problems during installation. A warning icon displays next to “Checking Cluster
feasibility.” In this case, the warnings do not indicate a problem.
13. Click Next after the analysis is complete and the Task Complete bar is green.
14. In the IP Address dialog box, type the Cluster Service ISIS IP address for the left side in the IP Address text box. See "List of IP Addresses and Network Names" on page 30.
15. Click Next.
16. In the Cluster Service Account dialog box, type the cluster user name and password, and
select the domain.
This is the Cluster Service Account (Server Execution User) used to start the cluster
service. It is also used by the Interplay Engine. It must be a unique name that will not be
used for any other purpose. See
“Before You Begin the Server Failover Installation” on
page 28. Check that the account is part of the domain, and that the name and password
are correct, by logging into the domain.
17. Click Next.
The Proposed Cluster Configuration dialog box opens.
18. Click Quorum.
The Cluster Configuration Quorum dialog box opens.
19. Select Disk Q: from the menu, and click OK.
20. Review the summary on the Proposed Cluster Configuration dialog box to verify all the
information for creating the cluster is correct.
21. Click Next. The Creating the Cluster dialog box opens.
22. Review any errors during the cluster creation.
If red errors display, check the Cluster Service ISIS IP address you entered in step 14.
Validating the Cluster Service on the First Node

2. In the left pane, click Resources to make sure all resources are online.
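The same check can be made from a command prompt with the cluster.exe utility that ships with Windows Server 2003; a sketch using this guide's example cluster name:

```shell
rem Verify that the node, the cluster groups, and all resources are up.
rem SECLUSTER is the example cluster name from this guide.
cluster /cluster:SECLUSTER node /status
cluster /cluster:SECLUSTER group /status
cluster /cluster:SECLUSTER resource /status
```

Every resource should report Online before you continue with the second node.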
Configuring the Cluster Service on the Second Node
To configure the cluster service on the second node:
1. Make sure the first node is on and all storage devices are turned on.
2. Turn on the server for the second node.
3. In the first node, click Start and select Programs > Administrative Tools > Cluster
Administrator.
4. Select File > New > Node.
The Add Node Wizard opens.
5. Click Next.
n   You might be prompted for an account. If so, use a domain user account, such as the Cluster Installation Account referred to in "Before You Begin the Server Failover Installation" on page 28. Do not use the Cluster Service Account (Server Execution User).

6. In the Select Computers dialog box, in the Computer name text box, type the Cluster node host name of the second node and click Add.
   For example, use SECLUSTER2. See "List of IP Addresses and Network Names" on page 30.
7. Click Advanced.
   The Advanced Configuration Options dialog box opens.
8. Select Advanced (minimum) configuration, and click OK.
9. Click Next.
   The setup process analyzes the node for hardware or software problems that might cause problems during installation. A warning icon displays next to "Checking Cluster feasibility." In this case, the warnings do not indicate a problem.
10. Click Next after the analysis is complete and the Task Complete bar is green.
11. Type the password for the cluster service account. This account is used to start the
cluster service.
12. Click Next.
13. In the Proposed Cluster Configuration dialog box, review the summary to verify all the
information for creating the cluster is correct.
14. Click Next.
The Adding Nodes to the Cluster dialog box opens.
15. Review any errors during the cluster creation.
A warning icon displays next to “Reanalyzing cluster.” In this case, the warnings do not
indicate a problem.
16. Click Next.
17. Click Finish.
Configuring Rules for the Cluster Networks

After the networks are configured on each node and the cluster service is configured, you need to configure the network roles to determine each network's function within the cluster.
n   The procedures in this section use Left-74 and Right-75 as examples of the public networks. Replace the numbers with your own subnet numbers.

2. In the left pane, click Cluster Configuration > Networks, right-click Private, and select Properties.
3. Select “Internal cluster communications only (private network).”
4. Click OK.
5. In the left pane, click Cluster Configuration > Networks, and right-click Left-74 and
select Properties.
6. In the Left-74 Properties dialog box, verify these options:
   - Name: Left-74
   - Enable this network for cluster use
   - All communications (mixed network)
7. Click OK.
8. In the left pane, click Cluster Configuration > Networks, and right-click Right-75 and
select Properties.
9. In the Right-75 Properties dialog box, verify these options:
   - Name: Right-75
   - Enable this network for cluster use
   - All communications (mixed network)
10. Click OK.
Prioritizing the Heartbeat Adapter

After you configure network roles for how the cluster service uses the network adapters, you need to prioritize the order in which they are used for intra-cluster communications. The cluster service uses the next network adapter in the list when it cannot communicate by using the first network adapter.
2. In the left pane, open the Groups folder, right-click Cluster Group, and select Move
Group.
The group and all its resources are moved to the other node. Disk Q is brought online on
the second node. Make sure the window displays that the second node is now the owner
of the Resources and that all resources are online.
3. Move the group back to node 1 after you finish testing the cluster installation.
4. Close the Cluster Administrator.
Configuration of the cluster service on all nodes is complete, and the cluster is fully operational. You can now install cluster resources such as file shares and cluster-aware services such as the Distributed Transaction Coordinator.
Installing the Distributed Transaction Coordinator
The Interplay Engine requires DCOM services in the cluster. To allow DCOM services in the cluster, create a resource group for the Distributed Transaction Coordinator. This resource group needs its own physical 5 GB disk, an IP address, and a network name (MSDTC). Finish the group by adding a resource of the Distributed Transaction Coordinator type.
The following sections provide procedures for creating a resource group for the Distributed
Transaction Coordinator by using the Cluster Administrator tool.
• "Creating a Resource Group for the Distributed Transaction Coordinator" on page 57
• "Assigning an IP Address to the MSDTC Group" on page 58
• "Assigning a Network Name to the MSDTC Group" on page 59
• "Creating a Physical Resource for the MSDTC Group" on page 60
• "Assigning the Distributed Transaction Coordinator Resource to the MSDTC Group" on page 60
When performing these procedures, Avid suggests that you use the same entries shown in the procedures. These entries are from the list in "List of IP Addresses and Network Names" on page 30.

For more information about the Distributed Transaction Coordinator, see Microsoft Knowledge Base article 301600.
4. Complete the Possible Owners dialog box as follows:
   - Add the cluster server host names to the Possible owners list. For example, SECLUSTER1 and SECLUSTER2. See "List of IP Addresses and Network Names" on page 30.
5. Click Next.
6. Complete the Dependencies dialog box as follows:
   - Add MSDTC DISK R and MSDTC NAME to the Resource dependencies list.
7. Click Finish.
Bringing the MSDTC Online
The following illustration shows the Cluster Administrator after you complete the setup of
the MSDTC group.
To bring the MSDTC online:
1. Initialize the MSDTC log file by doing the following:
   a. Bring MSDTC DISK R online: right-click MSDTC DISK R and select Bring Online.
   b. In the Command Window, run the following command on the node that is the owner to reset the log: msdtc -resetlog
2. Bring the MSDTC group online by right-clicking MSDTC and selecting Bring Online.
n   If you are running Active Directory on the cluster nodes, the MSDTC resource might fail to run on the backup domain controller. If this occurs, see the following Microsoft article: http://support.microsoft.com/kb/900216/en-us.
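For reference, the same sequence can be driven from a command prompt with cluster.exe; a sketch using this guide's example cluster, resource, and group names:

```shell
rem Bring the MSDTC disk online, reset the DTC log on the owner node,
rem then bring the whole MSDTC group online.
cluster /cluster:SECLUSTER resource "MSDTC DISK R" /online
msdtc -resetlog
cluster /cluster:SECLUSTER group "MSDTC" /online
```

Run msdtc -resetlog on the node that currently owns the MSDTC group, as in step 1b above.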
Installing the Interplay Engine
After you set up and configure the cluster, you need to install the Interplay Engine software
on both nodes. The following topics describe installing the Interplay Engine and other final
tasks:
• "Disabling Any Web Servers" on page 62
• "Installing the Interplay Engine on the First Node" on page 62
• "Installing the Interplay Engine on the Second Node" on page 81
• "Bringing the Interplay Engine Online" on page 82
• "Testing the Complete Installation" on page 82
For information about updating the installation, see “Updating a Clustered Installation
(Rolling Upgrade)” on page 84.
Disabling Any Web Servers

The Interplay Engine uses an Apache web server that can be registered as a service only if no other web server (for example, IIS) is serving port 80 (or 443). Stop and disable or uninstall any other HTTP services before you start the installation of the server. You must perform this procedure on both nodes.
n   If you followed the procedures in this document, no action is required, since the only web server installed at this point is IIS and it is disabled.
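If IIS does have to be stopped and disabled manually, the sc utility can do it from a command prompt on each node; a sketch (W3SVC is the standard service name of the IIS web publishing service):

```shell
rem Stop the IIS web publishing service and prevent it from starting at boot,
rem so the Interplay Engine's Apache server can claim port 80.
rem Run on both cluster nodes. Note the space after "start=" is required.
sc stop W3SVC
sc config W3SVC start= disabled
```

sc query W3SVC afterward should show the service in the STOPPED state.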
Installing the Interplay Engine on the First Node

The following sections provide procedures for installing the Interplay Engine on the first node. For a list of example entries, see "List of IP Addresses and Network Names" on page 30.
• "Preparation for Installing on the First Node" on page 63
• "Starting the Installation and Accepting the License Agreement" on page 63
• "Installing the Interplay Engine Using Custom Mode" on page 64
• "Bringing the Disk Resource Online" on page 78

c   Shut down the second node while installing the Interplay Engine for the first time.
Preparation for Installing on the First Node
You are ready to start installing the Interplay Engine on the first node. During setup you
must enter the following cluster-related information:
•Virtual IP Address: the Interplay Engine service IP address of the resource group. For a
list of example names, see “List of IP Addresses and Network Names” on page 30.
•Subnet Mask: the subnet mask on the local network.
•Public Network: the name of the public network connection. For ISIS, select Left-subnet. For MediaNetwork, select Public. For ISIS, you set the other public network connection after the installation. See “Bringing the Disk Resource Online” on page 78.
To check the public network connection on the first node, open the Network Connections panel in the Windows Control Panel and look up the name there.
•Shared Drive: the letter for the shared drive that holds the database. Use S: for the shared drive letter.
•Cluster Service Account User and Password (Server Execution User): the domain account that is used to run the cluster. See “Before You Begin the Server Failover Installation” on page 28.
Shut down the second node while installing the Interplay Engine for the first time.
When installing the Interplay Engine for the first time on a machine with cluster services, you are asked to choose between clustered and regular installation. The installation on the
second node (or later updates) reuses the configuration from the first installation without
allowing you to change the cluster-specific settings. In other words, it is not possible to
change the configuration settings without uninstalling the Interplay Engine.
Starting the Installation and Accepting the License Agreement
To start the installation:
1. Insert the Avid Interplay installation DVD.
A start screen opens.
2. Double-click Install Avid Interplay Engine to begin the Avid Interplay Engine
Installation Wizard, which guides you through the installation.
The Welcome dialog box opens.
3. Close all Windows programs before proceeding with the installation.
4. Information about the installation of Apache is provided in the Welcome dialog box.
Read the text and then click Next.
The License Agreement dialog box opens.
5. Read the license agreement information and then accept the license agreement by
selecting “I accept the agreement.” Click Next.
The Specify Installation Type dialog box opens.
6. Continue the installation as described in the next topic.
Installing the Interplay Engine Using Custom Mode
The first time you install the Interplay Engine on a cluster system, you should use the
Custom installation mode. This lets you specify all the available options for the installation.
This is the recommended option to use.
The following procedures are used to perform a Custom installation of the Interplay Engine:
•“Specifying Cluster Mode During a Custom Installation” on page 65
•“Specifying the Interplay Engine Details” on page 66
•“Specifying the Interplay Engine Name” on page 68
•“Specifying the Destination Location” on page 69
•“Specifying the Default Database Folder” on page 69
•“Specifying the Share Name” on page 70
•“Specifying the Configuration Server” on page 71
•“Specifying the Server User” on page 73
•“Specifying the Server Cache” on page 74
•“Enabling Email Notifications” on page 75
•“Installing the Interplay Engine for a Custom Installation on the First Node” on page 76
For information about updating the installation, see “Updating a Clustered Installation
(Rolling Upgrade)” on page 84.
Specifying Cluster Mode During a Custom Installation
To specify cluster mode:
1. In the Specify Installation Type dialog box, select Custom.
2. Click Next.
The Specify Cluster Mode dialog box opens.
3. Select Cluster and click Next to continue the installation in cluster mode.
The Specify Interplay Engine Details dialog box opens.
Specifying the Interplay Engine Details
In this dialog box, provide details about the Interplay Engine.
To specify the Interplay Engine details:
1. Type the following values:
-Virtual IP address: This is the Interplay Engine service IP Address, not the Cluster
service IP address. For a list of example names, see “List of IP Addresses and
Network Names” on page 30.
-Subnet Mask: The subnet mask on the local network.
-Public Network: For ISIS, select Left-subnet. For MediaNetwork, select Public. For
ISIS, you set the other public network connection after the installation. See
“Bringing the Disk Resource Online” on page 78.
To check the public network connection on the first node, open the Network
Connections panel in the Windows Control Panel and look up the name there.
-Shared Drive: The letter of the shared drive that is used to store the database. Use S:
for the shared drive letter.
Make sure you type the correct information here, as this data cannot be changed afterwards. Should you require any changes to the above values later, you will need to uninstall the server on both nodes.
2. Click Next.
The Specify Interplay Engine Name dialog box opens.
Specifying the Interplay Engine Name
In this dialog box, type the name of the Interplay Engine.
To specify the Interplay Engine name:
1. Specify the public names for the Avid Interplay Engine by typing the following values:
-The Network Name will be associated with the virtual IP Address that you entered
in the previous Interplay Engine Details dialog box. This is the Interplay Engine
service name (see “List of IP Addresses and Network Names” on page 30). It must
be a new, unused name, and must be registered in the DNS so that clients can find
the server without having to specify its address.
-The Server Name is used by clients to identify the server. If you only use Avid Interplay Clients on Windows computers, you can use the Network Name as the server name. If you use several platforms as client systems, such as Macintosh® and Linux®, you need to specify the static IP address that you entered for the resource group in the previous dialog box. Macintosh systems are not always able to map server names to IP addresses. If you type a static IP address, make sure this IP address is not provided by a DHCP server.
2. Click Next.
The Specify Destination Location dialog box opens.
Specifying the Destination Location
In this dialog box specify the folder in which you want to install the Interplay Engine
program files.
To specify the destination location:
1. Avid recommends that you keep the default path C:\Program Files\Avid\Avid Interplay
Engine.
Under no circumstances attempt to install to a shared disk; independent installations
are required on both nodes. This is because local changes are also necessary on both
machines. Also, with independent installations you can use a rolling upgrade approach
later, upgrading each node individually without affecting the operation of the cluster.
2. Click Next.
The Specify Default Database Folder dialog box opens.
Specifying the Default Database Folder
In this dialog box specify the folder where the database data is stored.
To specify the default database folder:
1. Type S:\Workgroup_Databases. Make sure the path specifies the shared drive (S:).
This folder should reside on the shared drive that is owned by the resource group of the
server. Avid strongly recommends using the shared drive resource so that it can be
monitored and managed by the cluster service. The drive must be assigned to the
physical drive resource that is mounted under the same drive letter on the other machine.
2. Click Next.
The Specify Share Name dialog box opens.
Specifying the Share Name
In this dialog box specify a share name to be used for the database folder.
To specify the share name:
1. Accept the default share name.
Avid recommends you use the default share name WG_Database$. This name is visible on all client platforms, such as Windows 98, Windows ME, Windows NT, Windows 2000, and Windows XP. The “$” at the end makes the share invisible if you browse through the network with Windows Explorer. For security reasons, Avid recommends using a “$” at the end of the share name. If you use the default settings, the directory S:\Workgroup_Databases is accessible as \\InterplayEngine\WG_Database$.
2. Click Next.
This step takes a few minutes. When finished, the Specify Configuration Server dialog box opens.
Specifying the Configuration Server
In this dialog box, indicate whether this server is to act as a Central Configuration Server.
A Central Configuration Server (CCS) is an Avid Interplay Engine with a special module
that is used to store server and database-spanning information. For more information, see the
Avid Interplay Engine and Avid Interplay Archive Engine Administration Guide.
To specify the server to act as the CCS server:
1. Select either the server you are installing or a previously installed server to act as the
Central Configuration Server.
Typically you are working with only one server, so the appropriate choice is “This Avid
Interplay Engine,” which is the default.
If you need to specify a different server as the CCS (for example, if an Interplay Archive
Engine is being used as the CCS), select “Another Avid Interplay Engine.” You need to
type the name of the other server to be used as the CCS in the next dialog box.
Only use a CCS that is at least as highly available as this cluster installation, typically another clustered installation.
If you specify the wrong CCS, you can change the setting later on the server machine in the Windows Registry. See “Changing the CCS” on page 91.
2. Click Next.
The Specify Server User dialog box opens.
Specifying the Server User
In this dialog box, define the Cluster Service account (Server Execution User) used to run
the Avid Interplay Engine.
The Server Execution User is the Windows domain user that runs the Interplay Engine and
the cluster service. This account is automatically added to the Local Administrators group
on the server. This account must be the one that was used to set up the cluster service. See “Before You Begin the Server Failover Installation” on page 28 and “Automatic Server Failover Tips and Rules” on page 89.
To specify the Server Execution User:
1. Type the Cluster Service Account user login information.
The installer cannot check the username or password you type in this dialog. Make
sure that the password is set correctly, or else you will need to uninstall the server and
repeat the entire installation procedure. Avid does not recommend changing the Server
Execution User in cluster mode afterwards, so choose carefully.
When typing the domain name, do not use the full DNS name, such as mydomain.company.com, because the DCOM part of the server will be unable to start. You should use the NetBIOS name, for example, mydomain.
2. Click Next.
The Specify Preview Server Cache dialog box opens.
If necessary, you can change the name of the Server Execution User after the installation. For more information, see “Troubleshooting the Server Execution User Account” and “Re-creating the Server Execution User” in the Avid Interplay Engine and Avid Interplay Archive Engine Administration Guide and the Interplay ReadMe.
Specifying the Server Cache
In this dialog box, specify the path for the cache folder.
For more information on the Preview Server cache and Preview Server configuration, see “Avid Workgroup Preview Server Service” in the Avid Interplay Engine and Avid Interplay Archive Engine Administration Guide.
To specify the server cache folder:
1. Type or browse to the path of the server cache folder. Typically, the default path is used.
2. Click Next.
The Enable Email Notification dialog box opens if you are installing the Avid Interplay
Engine for the first time.
Enabling Email Notifications
The first time you install the Avid Interplay Engine, the Enable Email Notification dialog
box opens. The email notification feature sends emails to your administrator when special
events, such as “Cluster Failure,” “Disk Full,” and “Out Of Memory” occur. Activate email
notification if you want to receive emails on special events, server or cluster failures.
To enable email notification:
1. (Option) Select Enable email notification on server events.
The Email Notification Details dialog box opens.
2. Type the administrator's email address and the email address of the server, which is the
sender.
If an event, such as “Resource Failure” or “Disk Full” occurs on the server machine, the
administrator receives an email from the sender's email account explaining the problem,
so that the administrator can react to the problem. You also need to type the static IP
address of your SMTP server. The notification feature needs the SMTP server in order
to send emails. If you do not know this IP, ask your administrator.
3. If you also want to inform Avid Support automatically using email if problems arise,
select “Send critical notifications also to Avid Support.”
4. Click Next.
The installer modifies the file Config.xml in the Workgroup_Data\Server\Config\Config
directory with your settings.
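The notification mail itself is generated by the Interplay Engine. Purely to illustrate the kind of message such a feature assembles and hands to an SMTP server, here is a Python sketch; the addresses, event name, and SMTP IP are placeholders, not Avid defaults:

```python
from email.message import EmailMessage

def build_event_mail(event, detail, sender, admin):
    """Compose a plain-text notification mail for a server event."""
    msg = EmailMessage()
    msg["Subject"] = f"Interplay Engine event: {event}"
    msg["From"] = sender   # the server's e-mail address (the sender)
    msg["To"] = admin      # the administrator's e-mail address
    msg.set_content(f"Event: {event}\nDetail: {detail}")
    return msg

# Sending would go through the SMTP server's static IP, for example:
# import smtplib
# with smtplib.SMTP("192.0.2.10") as smtp:  # placeholder SMTP address
#     smtp.send_message(build_event_mail("Disk Full", "Drive S: at 98%",
#                                        "engine@example.com",
#                                        "admin@example.com"))
```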
The Ready to Install dialog box opens.
Installing the Interplay Engine for a Custom Installation on the First Node
In this dialog box, begin the installation of the engine software.
To install the Interplay Engine software:
1. Click Next.
Use the Back button to review or change the data you have entered. You can also cancel the installation using the Cancel button, because no changes have been made to the system yet.
The first time you install the software, a dialog box opens and asks if you want to install
the Sentinel driver. This driver is used by the licensing system.
2. Click Continue.
The Installation Completed dialog box opens after the installation is completed.
3. Do one of the following:
-Click Finish.
-Analyze and resolve any issues or failures reported.
4. Click OK if prompted to restart the system.
The installation procedure requires the machine to restart (up to twice). For this reason it
is very important that the other node is shut down, otherwise the current node loses
ownership of the Avid Workgroup resource group. This applies to the installation on the
first node only.
Subsequent installations should be run as described in “Updating a Clustered Installation (Rolling Upgrade)” on page 84 or in the Avid Interplay ReadMe.
Bringing the Disk Resource Online
To bring the Disk Resource online:
1. After the installation is complete, start the Cluster Administrator tool by clicking Start
and selecting Programs > Administrative Tools > Cluster Administrator.
2. Open the Avid Workgroup Server resource group.
The list of resources should look similar to those in the following illustration.
The Avid Workgroup Disk resource should be online and all other resources offline.
3. Bring the disk resource online manually before continuing if necessary.
Avid does not recommend starting the server at this stage yet, since it is not installed on the other node and a failover would be impossible.
4. (Avid Unity ISIS only) Add IP address on the second subnet for the Interplay Engine.
a.In the Cluster Administrator, right-click Avid Workgroup Server and select New >
Resource.
b.Complete the New Resource dialog box as follows:
-Name: Avid Workgroup Address 2
-Resource Type: IP Address
-Group: Avid Workgroup Server
c.Complete the Possible Owners dialog box as follows:
-Add the cluster server host names to the Possible owners lists. For example,
SECLUSTER1 and SECLUSTER2. See “List of IP Addresses and Network
Names” on page 30.
d.Complete the Dependencies dialog box as follows:
-Leave the Resource dependencies list empty.
e. Complete the TCP/IP Address Parameters dialog box as follows:
-Address: type the second Interplay Engine service Avid Unity ISIS IP address. See “List of IP Addresses and Network Names” on page 30.
-Subnet mask: displays the subnet mask for the second subnet network
-Network: select the network connection Right-subnet
-Select Enable NetBIOS for this address
f.Click Finish.
The following illustration shows the new entry.
g. Right-click Avid Workgroup Address 2 and select Properties.
h. Click the Advanced tab.
i. Deselect “Affect the group.”
j. Click OK.
5. When the installation is complete, leave this node running so that it maintains ownership of the resource group and proceed to “Installing the Interplay Engine on the Second Node” on page 81.
Installing the Interplay Engine on the Second Node
To install the Interplay Engine on the second node:
1. Leave the first machine running so that it maintains ownership of the resource group and start the second node.
Do not attempt to move the resource group over to the second node, and do not shut down the first node while the second is up, before the installation is completed on the second node.
2. Perform the installation procedure for the second node as described in “Installing the Interplay Engine on the First Node” on page 62. In contrast to the installation on the first node, the installer automatically detects all settings previously entered on the first node.
The Attention dialog box opens.
3. Click OK.
4. The same installation dialog boxes open that you saw before, except for the cluster-related settings, which only need to be entered once. Enter the requested information and allow the installation to proceed.
Make sure you use the installation mode that you used for the first node and enter the same information throughout the installer. Using different values results in a corrupted installation.
5. The installation procedure requires the machine to restart (up to twice). Allow the restart as requested.
Bringing the Interplay Engine Online
To bring the Interplay Engine online:
1. Start the Cluster Administrator tool.
2. Click Groups, right-click Avid Workgroup Server, and select Bring Online.
All resources are now online.
Testing the Complete Installation
After you complete all the previously described steps, you are now ready to test the
installation. Make yourself familiar with the Cluster Administrator and review the different
failover-related settings.
To test the complete installation:
1. To start the server, bring the resource group online; this starts the Interplay Engine and
its affiliated services.
After starting the Avid Interplay Engine on the first node, the Cluster Administrator
should look similar to the following figure.
2. Start an Interplay Administrator, install the licenses if needed, create a test database and
add some files to it. If the other node is also running, you are ready to test the failover
functionality.
3. Initiate a failover by moving the resource group; do this through the context menu of the
resource group. Failures can also be simulated, again through the context menu of the
appropriate resource.
Failures do not necessarily initiate a failover.
4. You might also want to experiment by terminating the Interplay Engine manually using
the Windows Task Manager (NxNServer.exe). This is also a good way to get familiar
with the failover settings which can be found in the Properties Panel of the Avid
Workgroup resource, under the Advanced tab.
5. Look at the related settings of the resource group. If you need to change any
configuration files, make sure that the Avid Workgroup Disk resource is online; the
configuration files can be found on the resource drive in the Workgroup_Data folder.
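When timing or scripting such failover tests, a small polling helper can tell you when the engine becomes reachable again. This generic sketch is not part of the Avid tooling; the check you pass in (for example, a connection attempt to the engine's virtual network name) is up to you:

```python
import time

def wait_until(check, timeout=60.0, interval=1.0):
    """Poll check() until it returns True or the timeout expires.

    Returns True as soon as check() succeeds, False on timeout.
    Useful for measuring roughly how long a failover takes.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```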
Updating a Clustered Installation (Rolling Upgrade)
A major benefit of a clustered installation is that you can perform “rolling upgrades.” You
can keep a node in production while updating the installation on the other, then move the
resource over and update the second node as well.
For information about updating specific versions of the Interplay Engine and a cluster, see the Avid Interplay ReadMe. The ReadMe describes an alternative method of updating a cluster, in which you lock and deactivate the database before you begin the update.
When updating a clustered installation, the settings that were entered to set up the cluster
resources cannot be changed. Additionally, all other values must be reused, so Avid strongly
recommends choosing the Typical installation mode. Changes to the fundamental attributes
can only be achieved by uninstalling both nodes first and installing again with the new
settings.
Make sure you follow the procedure in this order, otherwise you might end up with a
corrupted installation.
To update a cluster:
1. Determine which node is active.
a.Select Control Panel > Administrative Tools > Cluster Administrator.
b.Open the Groups folder and check the owner column for the Avid Workgroup
Server.
Consider this the first node.
2. Make sure this node is also the owner of the Cluster and the MSDTC groups. If these
groups are not on the active node, right-click each group and select Move Group.
3. Run the Interplay Engine installer to update the installation on the non-active node
(second node). Select Typical mode to reuse values set during the previous installation
on that node. Restart as requested and continue with Part 2 of the installation. The
installer will ask you to restart again after Part 2.
Do not move the Avid Workgroup Server resource group to the second node yet.
4. Make sure that the first node is active. Run the Interplay Engine installer to update the installation on the first node. Select Typical mode so that all values are reused.
5. During the installation, the installer displays a dialog box that asks you to move the Avid
Workgroup Server group to the second node. Move the group, then click OK in the
installation dialog box to continue. Restart as requested and continue with Part 2 of the
installation. The installer will ask you to restart again after Part 2.
6. For a split database, update the workgroup.xml file (see “Updating the Workgroup.xml
File for a Split Database” on page 85).
7. You might want to test the final result of the update by moving the server back to the
first node. The Interplay Administrator can be used to display the version of the server.
After completing the above steps, your entire clustered installation is updated to the new
version. Should you encounter any complications or face a specialized situation, contact Avid Support as instructed in “If You Need Help” on page 10.
Updating the Workgroup.xml File for a Split Database
If you have a split database, you must edit or replace the workgroup.xml file after the
upgrade.
A split database allows you to store non-Avid assets such as graphics files and Microsoft
Office files on shared storage. See the Interplay Engine and Interplay Archive Engine Administration Guide for information on configuring Interplay for a split database.
During installation, a new workgroup.xml file is created and the existing workgroup.xml file
is renamed in the format workgroup.ddmmyy-hhmmss. If your Interplay Engine stores file
assets on a split database, make sure the workgroup.xml file contains the correct path. If not,
edit the file or replace it with the renamed backup file after the upgrade is finished.
On a cluster system, the following file is installed on both nodes: C:\Program Files\Avid\Avid Interplay Engine\Data\Apache\conf\workgroup.xml. These files are not currently used and do not need to be edited or overwritten. Edit or overwrite the version on the S drive: S:\WorkgroupData\Apache\conf\workgroup.xml.
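Because the installer renames the old file using the workgroup.ddmmyy-hhmmss pattern, a short script can pick out the most recent backup among the files in the conf directory. This sketch is illustrative only; verify its result before overwriting anything:

```python
import re

# Matches the renamed backup pattern: workgroup.ddmmyy-hhmmss
BACKUP_RE = re.compile(r"^workgroup\.(\d{6})-(\d{6})$")

def latest_backup(filenames):
    """Return the newest renamed workgroup.xml backup, or None."""
    def sort_key(name):
        m = BACKUP_RE.match(name)
        d, t = m.group(1), m.group(2)
        # Reorder ddmmyy to (yy, mm, dd) so tuple comparison is chronological
        return (d[4:6], d[2:4], d[0:2], t)
    backups = [n for n in filenames if BACKUP_RE.match(n)]
    return max(backups, key=sort_key) if backups else None
```

For example, `latest_backup(os.listdir(conf_dir))` names the file you would compare against the freshly installed workgroup.xml.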
Uninstalling the Interplay Engine on a Clustered System
To uninstall the Avid Interplay Engine, use the Avid Interplay Engine uninstaller, first on the
inactive node, then on the active node.
To uninstall the Interplay Engine:
1. If you plan to reinstall the Interplay Engine and reuse the existing database, create a
complete backup of the AvidWG database and the _InternalData database in
S:\Workgroup_Databases. For information about creating a backup, see “Creating and
Restoring Database Backups” in the Avid Interplay Engine and Avid Interplay Archive
Engine Administration Guide.
After the uninstall is complete, but before you reinstall the Interplay Engine, rename the
folder S:\Workgroup_Data so that it will be preserved during the reinstallation process.
In case of a problem with the new installation, you can check the old configuration
information in that folder.
The uninstall mechanism of the cluster resources only functions properly if the names
of the resources or the resource groups are not changed. Never change these names.
2. Make sure that both nodes are running before you start the uninstaller.
3. On the inactive node (the node that does not own the Avid Workgroup Server resource
group), start the uninstaller by selecting Programs > Avid > Avid Interplay Engine >
Uninstall Avid Interplay Engine.
4. When you are asked if you want to delete the cluster resources, click No.
5. When you are asked if you want to restart the system, click Yes.
6. At the end of the uninstallation process, if you are asked to restart the system, click Yes.
7. After the uninstallation on the inactive node is complete, wait until the last restart is
done. Then open the Cluster Administrator on the active node and make sure the
inactive node is shown as online. (The nodes are shown in the lower part of the tree on
the left side of the Cluster Administrator.)
8. Start the uninstallation on the active node (the node that owns the Avid Workgroup
Resource Group).
9. When you are asked if you want to delete the cluster resources, click Yes.
A confirmation dialog box opens.
10. Click Yes.
11. When you are asked if you want to restart the system, click Yes.
12. At the end of the uninstallation process, if you are asked to restart the system, click Yes.
3 Automatic Server Failover Tips and Rules
This chapter provides some important tips and rules to use when configuring the automatic
server failover.
Don't Access the Machines Directly
Don’t access the machines (nodes) directly. Use the virtual network name or IP address that has been assigned to the Interplay Engine resource group (see “List of IP Addresses and Network Names” on page 30). Never use the actual physical names or IP addresses of the machines that are part of the cluster.
Make Sure to Connect to the Interplay Engine Resource Group
The network names and the virtual IP addresses resolve to the physical machine they are being hosted on. For example, it is possible to mistakenly connect to the Interplay Engine using the network name or IP address of the cluster group (see “List of IP Addresses and Network Names” on page 30). The server is found using the alternative address also, but only while it is online on the same node. Therefore, under no circumstances connect the clients to a network name other than what was used to set up the Interplay Engine resource group.
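As a quick sanity check that clients will find the resource-group name, you can look at what the name resolves to. This Python sketch is illustrative; `interplayengine` is a placeholder for the virtual network name assigned to your Interplay Engine resource group:

```python
import socket

def resolves_to(name):
    """Return the IPv4 address a hostname resolves to, or None on failure."""
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return None

# Placeholder: substitute the virtual network name of the Interplay
# Engine resource group, never a physical node name.
print(resolves_to("interplayengine"))
```

The address printed should be the virtual IP of the resource group, not the address of either physical node.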
Do Not Rename Resources
Do not rename resources. The resource plugin, the installer, and the uninstaller all depend on
the names of the cluster resources. These are assigned by the installer and even though it is
possible to modify them using the cluster administrator, doing so corrupts the installation
and is most likely to result in the server not functioning properly.
Do Not Install the Interplay Engine Server on a Shared Disk
The Interplay Engine must be installed on the local disk of the cluster nodes and not on a
shared resource. This is because local changes are also necessary on both machines. Also,
with independent installations you can later use a rolling upgrade approach, upgrading each
node individually without affecting the operation of the cluster. The Microsoft documentation also strongly advises against installing on shared disks.
Do Not Change the Interplay Engine Server Execution User
The domain account that was entered when setting up the cluster (the Cluster Service Account; see “Before You Begin the Server Failover Installation” on page 28) also has to
be the Server Execution User of the Interplay Engine. Given that you cannot easily change
the cluster user, the Interplay Engine execution user has to stay fixed as well. For more
information, see “Troubleshooting the Server Execution User Account” in the Avid Interplay Engine and Avid Interplay Archive Engine Administration Guide.
Do Not Edit the Registry While the Server is Offline
If you edit the registry while the server is offline, you will lose your changes. This mistake is easy to make, because it is easy to forget the implications of registry replication. Remember that the registry is restored by the resource monitor
before the process is put online, thereby wiping out any changes that you made while the
resource (the server) was offline. Only changes that take place while the resource is online
are accepted.
Do Not Remove the Dependencies of the Affiliated Services
The TCP-COM Bridge, the Preview Server, and the Server Browser services must be in the same resource group and assigned to depend on the server. Removing these dependencies might speed up some operations, but it can prevent automatic failure recovery in some scenarios.
Consider Disabling Failover When Experimenting
If you are performing changes that could make the Avid Interplay Engine fail, consider
disabling failover. The default behavior is to restart the server twice (threshold = 3) and then
initiate the failover, with the entire procedure repeating several times before final failure.
This can take quite a while.
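The restart-then-failover policy described above can be sketched as a small decision routine. This is a conceptual illustration of the default behavior (restart up to the threshold, then fail over), not Avid code:

```python
def plan_actions(failures, threshold=3):
    """For each consecutive failure, decide whether the cluster service
    restarts the resource in place or initiates a failover.

    With threshold=3, failures 1 and 2 trigger a local restart and the
    third failure triggers the failover (the server restarts twice).
    """
    actions = []
    count = 0
    for _ in range(failures):
        count += 1
        if count < threshold:
            actions.append("restart")
        else:
            actions.append("failover")
            count = 0  # failover moves the group; counting starts over
    return actions
```

Walking through three consecutive failures with the default threshold yields two restarts followed by a failover, which is why a failing engine can take quite a while to settle.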
Changing the CCS
If you specify the wrong Central Configuration Server (CCS), you can change the setting later on the server machine in the Windows Registry. The string value CMS specifies the server. Make sure to set the CMS to a valid entry while the Interplay Engine is online; otherwise your changes to the registry won't be effective.
After the registry is updated, stop and restart the server using the Cluster Administrator (in
the Administration Tools folder in Windows).
Specifying an incorrect CCS can prevent login. See “Troubleshooting Login Problems” in
the Avid Interplay Engine and Avid Interplay Archive Engine Administration Guide.
For more information, see “Understanding the Central Configuration Server” in the Avid Interplay Engine and Avid Interplay Archive Engine Administration Guide.
Index

A
Antivirus software
  running on a failover cluster
Apache web server
  on failover cluster
Avid Unity environment
  ISIS failover cluster connections SR2400 (illustration)
  ISIS failover cluster connections SR2500 (illustration)
  MediaNetwork failover cluster connections SR2400 (illustration)
  MediaNetwork failover cluster connections SR2500 (illustration)
  SR2400 server slot locations (failover cluster)
  SR2500 server slot locations (failover cluster)
Avid Unity ISIS
  connections for failover cluster
  failover cluster connections SR2400 (illustration)

F
failover cluster
  cluster details
  cluster information for installation
  default database location for failover cluster
  installing on first node
  Server Execution User, specifying for failover cluster 73
  share name for failover cluster

I
IP addresses (failover cluster)
  assigning to MSDTC group
  private network adapter
  public network adapter
  required

L
License
  agreement (failover server)
License requirements
  failover cluster system 16

M
MSDTC resource group
  creating 57
  creating physical disk 60

N
Network connections
  naming for failover cluster
Network interface
  renaming LAN for failover cluster
Network name
  assigning MSDTC group
Network names
  examples for failover cluster
Node
  defined 26
  name examples
  setting startup time

O
Online resource
  defined 26

P
Partition
  for the cluster group
Port
  for Apache web server
Private network adapter
  configuring
Public Network
  for failover cluster
Public network adapter
  configuring 40

Q
Quorum disk 28
  configuring 42
  verifying 53
Quorum resource
  defined 26

R
RAID array
  configuring for failover cluster
Registry
  editing while offline
Resource group
  connecting to
  defined 26
  services
Resources
  defined
  renaming
Rolling upgrade (failover cluster)

S
Server
  setting startup time on each node
Server cache
  Interplay Engine cluster installation
Server Execution User
  changing
  specifying for failover cluster
Server Failover
  overview
  See also Failover cluster
Service name
  examples for failover cluster
Services
  dependencies
Shared drive
  configuring for failover cluster
  specifying for Interplay Engine
Slot locations
  SR2400 server (failover cluster)
  SR2500 server (failover cluster)
Software
  requirements for failover cluster system
SR2400 server
  slot locations (failover cluster)
SR2500 server
  slot locations (failover cluster)
Subnet Mask 66

T
Troubleshooting
  server failover 89

U
Uninstalling
  Interplay Engine (failover cluster)
Updating
  cluster installation 84

V
Virtual IP address
  for Interplay Engine (failover cluster)
Virtual Server Address 63

W
Web servers
  disabling
Windows Cluster Administrator
  console