
MediaCentral Platform Services
Concepts and Clustering Guide
Legal Notices
Product specifications are subject to change without notice and do not represent a commitment on the part of Avid Technology, Inc.
This product is subject to the terms and conditions of a software license agreement provided with the software. The product may only be used in accordance with the license agreement.
This product may be protected by one or more U.S. and non-U.S. patents. Details are available at www.avid.com/patents.
Copyright © 2014 Avid Technology, Inc. and its licensors. All rights reserved.
The following disclaimer is required by Apple Computer, Inc.:
APPLE COMPUTER, INC. MAKES NO WARRANTIES WHATSOEVER, EITHER EXPRESS OR IMPLIED, REGARDING THIS PRODUCT, INCLUDING WARRANTIES WITH RESPECT TO ITS MERCHANTABILITY OR ITS FITNESS FOR ANY PARTICULAR PURPOSE. THE EXCLUSION OF IMPLIED WARRANTIES IS NOT PERMITTED BY SOME STATES. THE ABOVE EXCLUSION MAY NOT APPLY TO YOU. THIS WARRANTY PROVIDES YOU WITH SPECIFIC LEGAL RIGHTS. THERE MAY BE OTHER RIGHTS THAT YOU MAY HAVE WHICH VARY FROM STATE TO STATE.
The following disclaimer is required by Sam Leffler and Silicon Graphics, Inc. for the use of their TIFF library:
Copyright © 1988–1997 Sam Leffler Copyright © 1991–1997 Silicon Graphics, Inc.
Permission to use, copy, modify, distribute, and sell this software [i.e., the TIFF library] and its documentation for any purpose is hereby granted without fee, provided that (i) the above copyright notices and this permission notice appear in all copies of the software and related documentation, and (ii) the names of Sam Leffler and Silicon Graphics may not be used in any advertising or publicity relating to the software without the specific, prior written permission of Sam Leffler and Silicon Graphics.
THE SOFTWARE IS PROVIDED “AS-IS” AND WITHOUT WARRANTY OF ANY KIND, EXPRESS, IMPLIED OR OTHERWISE, INCLUDING WITHOUT LIMITATION, ANY WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
IN NO EVENT SHALL SAM LEFFLER OR SILICON GRAPHICS BE LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT OR CONSEQUENTIAL DAMAGES OF ANY KIND, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER OR NOT ADVISED OF THE POSSIBILITY OF DAMAGE, AND ON ANY THEORY OF LIABILITY, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
The following disclaimer is required by the Independent JPEG Group:
This software is based in part on the work of the Independent JPEG Group.
This Software may contain components licensed under the following conditions:
Copyright (c) 1989 The Regents of the University of California. All rights reserved.
Redistribution and use in source and binary forms are permitted provided that the above copyright notice and this paragraph are duplicated in all such forms and that any documentation, advertising materials, and other materials related to such distribution and use acknowledge that the software was developed by the University of California, Berkeley. The name of the University may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
Copyright (C) 1989, 1991 by Jef Poskanzer.
Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation. This software is provided "as is" without express or implied warranty.
Copyright 1995, Trinity College Computing Center. Written by David Chappell.
Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation. This software is provided "as is" without express or implied warranty.
Copyright 1996 Daniel Dardailler.
Permission to use, copy, modify, distribute, and sell this software for any purpose is hereby granted without fee, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of Daniel Dardailler not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. Daniel Dardailler makes no representations about the suitability of this software for any purpose. It is provided "as is" without express or implied warranty.
Modifications Copyright 1999 Matt Koss, under the same license as above.
Copyright (c) 1991 by AT&T.
Permission to use, copy, modify, and distribute this software for any purpose without fee is hereby granted, provided that this entire notice is included in all copies of any software which is or includes a copy or modification of this software and in all copies of the supporting documentation for such software.
THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR IMPLIED WARRANTY. IN PARTICULAR, NEITHER THE AUTHOR NOR AT&T MAKES ANY REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE MERCHANTABILITY OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE.
This product includes software developed by the University of California, Berkeley and its contributors.
The following disclaimer is required by Paradigm Matrix:
Portions of this software licensed from Paradigm Matrix.
The following disclaimer is required by Ray Sauers Associates, Inc.:
“Install-It” is licensed from Ray Sauers Associates, Inc. End-User is prohibited from taking any action to derive a source code equivalent of “Install-It,” including by reverse assembly or reverse compilation, Ray Sauers Associates, Inc. shall in no event be liable for any damages resulting from reseller’s failure to perform reseller’s obligation; or any damages arising from use or operation of reseller’s products or the software; or any other damages, including but not limited to, incidental, direct, indirect, special or consequential Damages including lost profits, or damages resulting from loss of use or inability to use reseller’s products or the software for any reason including copyright or patent infringement, or lost data, even if Ray Sauers Associates has been advised, knew or should have known of the possibility of such damages.
The following disclaimer is required by Videomedia, Inc.:
“Videomedia, Inc. makes no warranties whatsoever, either express or implied, regarding this product, including warranties with respect to its merchantability or its fitness for any particular purpose.”
“This software contains V-LAN ver. 3.0 Command Protocols which communicate with V-LAN ver. 3.0 products developed by Videomedia, Inc. and V-LAN ver. 3.0 compatible products developed by third parties under license from Videomedia, Inc. Use of this software will allow “frame accurate” editing control of applicable videotape recorder decks, videodisc recorders/players and the like.”
The following disclaimer is required by Altura Software, Inc. for the use of its Mac2Win software and Sample Source Code:
©1993–1998 Altura Software, Inc.
The following disclaimer is required by 3Prong.com Inc.:
Certain waveform and vector monitoring capabilities are provided under a license from 3Prong.com Inc.
The following disclaimer is required by Interplay Entertainment Corp.:
The “Interplay” name is used with the permission of Interplay Entertainment Corp., which bears no responsibility for Avid products.
This product includes portions of the Alloy Look & Feel software from Incors GmbH.
This product includes software developed by the Apache Software Foundation (http://www.apache.org/).
© DevelopMentor
This product may include the JCifs library, for which the following notice applies:
JCifs © Copyright 2004, The JCIFS Project, is licensed under LGPL (http://jcifs.samba.org/). See the LGPL.txt file in the Third Party Software directory on the installation CD.
Avid Interplay contains components licensed from LavanTech. These components may only be used as part of and in connection with Avid Interplay.
This product includes FFmpeg, which is covered by the GNU Lesser General Public License.
This product includes software that is based in part on the work of the FreeType Team.
This software is based in part on the work of the Independent JPEG Group.
This product includes libjpeg-turbo, which is covered by the wxWindows Library License, Version 3.1.
Portions copyright 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002 by Cold Spring Harbor Laboratory. Funded under Grant P41-RR02188 by the National Institutes of Health.
Portions copyright 1996, 1997, 1998, 1999, 2000, 2001, 2002 by Boutell.Com, Inc.
Portions relating to GD2 format copyright 1999, 2000, 2001, 2002 Philip Warner.
Portions relating to PNG copyright 1999, 2000, 2001, 2002 Greg Roelofs.
Portions relating to gdttf.c copyright 1999, 2000, 2001, 2002 John Ellson (ellson@lucent.com).
Portions relating to gdft.c copyright 2001, 2002 John Ellson (ellson@lucent.com).
Portions relating to JPEG and to color quantization copyright 2000, 2001, 2002, Doug Becker and copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, Thomas G. Lane. This software is based in part on the work of the Independent JPEG Group. See the file README-JPEG.TXT for more information. Portions relating to WBMP copyright 2000, 2001, 2002 Maurice Szmurlo and Johan Van den Brande.
Permission has been granted to copy, distribute and modify gd in any context without fee, including a commercial application, provided that this notice is present in user-accessible supporting documentation.
This does not affect your ownership of the derived work itself, and the intent is to assure proper credit for the authors of gd, not to interfere with your productive use of gd. If you have questions, ask. "Derived works" includes all programs that utilize the library. Credit must be given in user-accessible documentation.
This software is provided "AS IS." The copyright holders disclaim all warranties, either express or implied, including but not limited to implied warranties of merchantability and fitness for a particular purpose, with respect to this code and accompanying documentation.
Although their code does not appear in gd, the authors wish to thank David Koblas, David Rowley, and Hutchison Avenue Software Corporation for their prior contributions.
This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (http://www.openssl.org/)
Interplay Central may use OpenLDAP. Copyright 1999-2003 The OpenLDAP Foundation, Redwood City, California, USA. All Rights Reserved. OpenLDAP is a registered trademark of the OpenLDAP Foundation.
Avid Interplay Pulse enables its users to access certain YouTube functionality, as a result of Avid's licensed use of YouTube's API. The charges levied by Avid for use of Avid Interplay Pulse are imposed by Avid, not YouTube. YouTube does not charge users for accessing YouTube site functionality through the YouTube APIs.
Avid Interplay Pulse uses the bitly API, but is neither developed nor endorsed by bitly.
Attn. Government User(s). Restricted Rights Legend
U.S. GOVERNMENT RESTRICTED RIGHTS. This Software and its documentation are “commercial computer software” or “commercial computer software documentation.” In the event that such Software or documentation is acquired by or on behalf of a unit or agency of the U.S. Government, all rights with respect to this Software and documentation are subject to the terms of the License Agreement, pursuant to FAR §12.212(a) and/or DFARS §227.7202-1(a), as applicable.
Trademarks
003, 192 Digital I/O, 192 I/O, 96 I/O, 96i I/O, Adrenaline, AirSpeed, ALEX, Alienbrain, AME, AniMatte, Archive, Archive II, Assistant Station, AudioPages, AudioStation, AutoLoop, AutoSync, Avid, Avid Active, Avid Advanced Response, Avid DNA, Avid DNxcel, Avid DNxHD, Avid DS Assist Station, Avid Ignite, Avid Liquid, Avid Media Engine, Avid Media Processor, Avid MEDIArray, Avid Mojo, Avid Remote Response, Avid Unity, Avid Unity ISIS, Avid VideoRAID, AvidRAID, AvidShare, AVIDstripe, AVX, Beat Detective, Beauty Without The Bandwidth, Beyond Reality, BF Essentials, Bomb Factory, Bruno, C|24, CaptureManager, ChromaCurve, ChromaWheel, Cineractive Engine, Cineractive Player, Cineractive Viewer, Color Conductor, Command|24, Command|8, Control|24, Cosmonaut Voice, CountDown, d2, d3, DAE, D-Command, D-Control, Deko, DekoCast, D-Fi, D-fx, Digi 002, Digi 003, DigiBase, Digidesign, Digidesign Audio Engine, Digidesign Development Partners, Digidesign Intelligent Noise Reduction, Digidesign TDM Bus, DigiLink, DigiMeter, DigiPanner, DigiProNet, DigiRack, DigiSerial, DigiSnake, DigiSystem, Digital Choreography, Digital Nonlinear Accelerator, DigiTest, DigiTranslator, DigiWear, DINR, DNxchange, Do More, DPP-1, D-Show, DSP Manager, DS-StorageCalc, DV Toolkit, DVD Complete, D-Verb, Eleven, EM, Euphonix, EUCON, EveryPhase, Expander, ExpertRender, Fader Pack, Fairchild, FastBreak, Fast Track, Film Cutter, FilmScribe, Flexevent, FluidMotion, Frame Chase, FXDeko, HD Core, HD Process, HDpack, Home-to-Hollywood, HYBRID, HyperSPACE, HyperSPACE HDCAM, iKnowledge, Image Independence, Impact, Improv, iNEWS, iNEWS Assign, iNEWS ControlAir, InGame, Instantwrite, Instinct, Intelligent Content Management, Intelligent Digital Actor Technology, IntelliRender, Intelli-Sat, Intelli-sat Broadcasting Recording Manager, InterFX, Interplay, inTONE, Intraframe, iS Expander, iS9, iS18, iS23, iS36, ISIS, IsoSync, LaunchPad, LeaderPlus, LFX, Lightning, Link & Sync, ListSync, LKT-200, Lo-Fi, MachineControl, Magic Mask, Make Anything Hollywood, make manage move | media, Marquee, MassivePack, Massive Pack Pro, Maxim, Mbox, Media Composer, MediaFlow, MediaLog, MediaMix, Media Reader, Media Recorder, MEDIArray, MediaServer, MediaShare, MetaFuze, MetaSync, MIDI I/O, Mix Rack, Moviestar, MultiShell, NaturalMatch, NewsCutter, NewsView, NewsVision, Nitris, NL3D, NLP, NSDOS, NSWIN, OMF, OMF Interchange, OMM, OnDVD, Open Media Framework, Open Media Management, Painterly Effects, Palladium, Personal Q, PET, Podcast Factory, PowerSwap, PRE, ProControl, ProEncode, Profiler, Pro Tools, Pro Tools|HD, Pro Tools LE, Pro Tools M-Powered, Pro Transfer, QuickPunch, QuietDrive, Realtime Motion Synthesis, Recti-Fi, Reel Tape Delay, Reel Tape Flanger, Reel Tape Saturation, Reprise, Res Rocket Surfer, Reso, RetroLoop, Reverb One, ReVibe, Revolution, rS9, rS18, RTAS, Salesview, Sci-Fi, Scorch, ScriptSync, SecureProductionEnvironment, Serv|GT, Serv|LT, Shape-to-Shape, ShuttleCase, Sibelius, SimulPlay, SimulRecord, Slightly Rude Compressor, Smack!, Soft SampleCell, Soft-Clip Limiter, SoundReplacer, SPACE, SPACEShift, SpectraGraph, SpectraMatte, SteadyGlide, Streamfactory, Streamgenie, StreamRAID, SubCap, Sundance, Sundance Digital, SurroundScope, Symphony, SYNC HD, SYNC I/O, Synchronic, SynchroScope, Syntax, TDM FlexCable, TechFlix, Tel-Ray, Thunder, TimeLiner, Titansync, Titan, TL Aggro, TL AutoPan, TL Drum Rehab, TL Everyphase, TL Fauxlder, TL In Tune, TL MasterMeter, TL Metro, TL Space, TL Utilities, tools for storytellers, Transit, TransJammer, Trillium Lane 
Labs, TruTouch, UnityRAID, Vari-Fi, Video the Web Way, VideoRAID, VideoSPACE, VTEM, Work-N-Play, Xdeck, X-Form, Xmon and XPAND! are either registered trademarks or trademarks of Avid Technology, Inc. in the United States and/or other countries.
Adobe and Photoshop are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States and/or other countries. Apple and Macintosh are trademarks of Apple Computer, Inc., registered in the U.S. and other countries. Windows is either a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries. All other trademarks contained herein are the property of their respective owners.
Avid MediaCentral Platform Services — Concepts and Clustering Guide • Created 11/16/15 • This document is distributed by Avid in online (electronic) form only, and is not available for purchase in printed form.

Contents

Using This Guide. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Chapter 1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Single Server Deployments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Multi-Server Deployments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
How Failover Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
How Load-Balancing Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Working with Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Chapter 2 System Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Cluster Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
MCS Services, Resources and Cluster Databases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Clustering Infrastructure Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
RabbitMQ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
DRBD and Database Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Corosync and Pacemaker. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Disk and File System Layout. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Gluster and Cache Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Chapter 3 Services and Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Services vs Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Tables of Services and Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Single Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Cluster - Master Node Only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Cluster - All Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Cluster - Pacemaker Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Interacting with Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Interacting with Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Directly Stopping Managed Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Using the avid-ics Utility Script . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Verifying the Startup Configuration for Avid Services . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Services Start Order and Dependencies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Chapter 4 Validating the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Verifying Node Connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Verifying the “Always-On” IP Address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Verifying Network Connectivity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Verify Network Routing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Verifying DNS Host Name Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Validating the FQDN for External Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Verifying External Access. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Verifying Time Synchronization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Verifying the Pacemaker / Corosync Cluster Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Verifying the Status of RabbitMQ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Verifying the DRBD Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Verifying ACS Bus Functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Verifying the AAF Generator Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Chapter 5 Cluster Resource Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Accessing the Cluster Resource Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Interpreting the Output of CRM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Identifying Failures in CRM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Interpreting Failures in the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Chapter 6 Cluster Maintenance and Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
General Maintenance Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Adding Nodes to a Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Permanently Removing a Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Reviewing the Cluster Configuration File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Changing the Administrator E-mail Address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Changing IP Address in a Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Taking Nodes Offline and Forcing a Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Shutting Down or Rebooting a Single Cluster Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Shutting Down the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Starting the Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Performing a Rolling Reboot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Chapter 7 User Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Identifying Connected Users and Sessions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Backing Up the UMS Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Chapter 8 MCS Troubleshooting and System Logs . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Common Troubleshooting Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Responding to Automated Cluster E-mail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Troubleshooting RabbitMQ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Troubleshooting DRBD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Manually Connecting the DRBD Slave to the Master . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Correcting a DRBD Split Brain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Working with Cluster Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Understanding Log Rotation and Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Viewing the Content of Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Retrieving Log Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Important Log Files at a Glance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
RHEL Logs in /var/log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
RHEL Subdirectories in /var/log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
Avid Logs in /var/log. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Media Distribute Logs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
MediaCentral Distribution Service Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Browser Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
Mobile Device Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

Using This Guide

This guide is intended for individuals responsible for installing, maintaining, or performing administrative tasks on an Avid MediaCentral Platform Services (MCS) system. This document serves as an educational tool, providing background and technical information on MCS. Additionally, it explains the specifics of an MCS cluster, how each service operates in a cluster, and provides guidance on best practices for cluster administration.
For instructions on the proper installation and configuration of MediaCentral Platform Services, including the configuration of a cluster, see the Avid MediaCentral Platform Services Installation and Configuration Guide. For administrative information for MediaCentral UX, see the Avid MediaCentral UX Administration Guide.

1 Overview

MediaCentral Platform Services (MCS) is a collection of services running on one or more servers, providing a base infrastructure for solutions including MediaCentral UX, Media Composer Cloud, and Interplay MAM. Multiple MCS servers can be grouped together in a cluster configuration to provide high-availability and increased scale. Every server in a cluster is identified as a “node”. The first two nodes in a cluster are known as the primary (master) and secondary (slave). Any additional server in the cluster is known as a load-balancing node.
All MCS services run on the primary and secondary nodes, while a limited number of services run on the load-balancing nodes. Select services on the secondary node wait in standby and only become active in the event of a failure of the primary node. If a failure occurs, the services automatically start on the secondary node without the need for human intervention, which greatly reduces system downtime.
When increased client and stream counts are required, load-balancing servers can be added to the cluster. Load-balancing nodes add scale to the system, but they do not participate in failover. If both the primary and secondary nodes are offline, the MCS system will be down until one of these servers becomes available. A load-balanced cluster provides better performance for deployments supporting multiple, simultaneous users or connections.
An additional benefit of a load-balanced cluster is cache replication, in which media transcoded by one server is immediately distributed to all the other nodes in the cluster. If another node receives the same playback request, the material is available on the local node without the need for re-transcoding. Cache replication is achieved through an open source, distributed file system called GlusterFS.
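On a configured cluster, a quick way to confirm that the replicated cache volumes are in place is GlusterFS's own command-line tool. This is a generic sketch; volume names and brick layouts vary by installation:

# List the Gluster volumes used for cache replication
gluster volume info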
In summary, an MCS cluster provides the following:
Redundancy/High-Availability. Services are mirrored on the primary and secondary nodes, which provides redundancy of the database, system settings, and key services. If any node in the cluster fails, connections to that node are automatically redirected to another node.
Scale/Load-Balancing. All incoming playback connections are routed to a single cluster IP address, and are subsequently distributed evenly across the nodes in the cluster.
Replicated Cache. The media transcoded by one node in the cluster is automatically replicated on the other nodes. If another node receives the same playback request, the media is available without the need to re-transcode.
Cluster Monitoring. A cluster resource monitor lets you actively monitor the status of the cluster. In addition, if a node fails or a serious problem is detected, designated system administrators are alerted to the issue through an automatically generated e-mail.

Single Server Deployments

In a single server deployment, all MCS services (including the playback service) run on the same server. This server also hosts the MCS database and a file cache which contains the transcoded media files used in playback requests. The MCS server has a standard host name and IP address which is used, for example, by MediaCentral UX users to connect directly to it using a web browser or the MediaCentral UX Desktop application.
The following diagram illustrates a typical single-server deployment:

Multi-Server Deployments

Two or more MCS servers connect to each other through clustering software installed and configured on each server. In a basic deployment, a cluster consists of a master/slave pair of nodes configured for high availability. All MCS traffic is routed through the master node, which runs all MCS services. Select MCS services and databases are replicated to the slave node. Some of these services are actively running while others are in “standby” mode, ready to assume the role of master at any time. Although not required, additional nodes are often present in a cluster configuration to support load-balanced transcoding, playback, and increased scale.
Playback requests, handled by the ICPS playback service, are distributed by the master to available nodes. The load-balancing nodes perform transcoding, but do not participate in failover. Unless reconfigured by a system administrator, the load-balancing nodes can never take on the role of master or slave.
An interesting difference in a cluster deployment is at the network level. In a single server deployment, the MCS server owns its host name and IP address. Clients connect directly to this host name or IP to access the MCS system. In a cluster configuration, while each server maintains its own host name and IP address, a virtual host name and IP address is also configured for the cluster group. MediaCentral UX users connect to the cluster’s IP address or host name, and not to the name of an individual server. Connecting to the cluster and not to an individual node ensures that the client request is always serviced regardless of which nodes may be available at the time.
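For example, if the nodes were named wavd-mcs01 and wavd-mcs02 and the cluster's virtual host name were wavd-mcs (hypothetical names, consistent with the examples used later in this guide), MediaCentral UX users would browse to https://wavd-mcs/ rather than to an individual node such as https://wavd-mcs01/.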
The following diagram illustrates a typical cluster deployment:

How Failover Works

Failover in MCS operates at two distinct levels, service and node, both of which are managed by a cluster monitoring system. If a service fails, it is quickly restarted by the cluster monitor, which also tracks the service's fail count. If the service fails too often (or cannot be restarted), the cluster monitor gives responsibility for the service to the standby node in the cluster, in a process referred to as a failover. A service restart in itself is not enough to trigger a failover; a failover occurs when the fail count for the service reaches a specified threshold value.
The node on which the service failed remains in the cluster, but no longer performs the duties associated with the failed service. Until the fail count is manually reset, the failed service will not be restarted on that node.
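The fail counts referred to above can be inspected, and reset once the underlying fault has been corrected, using Pacemaker's standard tools. The following is a minimal sketch; AvidIPC is one example resource name taken from the tables later in this guide:

# Display cluster status, including per-resource fail counts
crm_mon -f

# Reset the fail count for a resource so the cluster monitor can restart it
crm resource cleanup AvidIPC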
In order to achieve this state of high-availability, one node in the cluster is assigned the role of master. It runs all the key MCS services. The master node also owns the cluster IP address. Thus all incoming requests come directly to this node and are serviced by it. This is shown in the following illustration:
Should any of the key MCS services running on the master node fail without recovery (or reach the failure threshold), a failover is initiated and the secondary node takes on the role of master node. The node that becomes master inherits the cluster IP address, and its own MCS services (that were previously in standby) become fully active. From this point, the new master receives all incoming requests. Manual intervention is then required to determine the cause of the fault on the failed node and to restore it to service.
In a correctly sized cluster, a single node can fail and the cluster will properly service its users.

Note: However, if two nodes fail, the remaining servers are likely under-provisioned for expected use and will be oversubscribed. Users should expect reduced performance in this scenario. If the primary and secondary nodes both fail, the system will be unavailable until the situation is resolved.
The failover from master to slave is shown in the following illustration:

How Load-Balancing Works

In MCS, video playback is load-balanced, meaning that incoming video playback requests are distributed across all nodes in the cluster. Playback is made possible through the Interplay Central Playback Service (ICPS), which runs actively on all nodes in the cluster concurrently.
When a client generates a playback request, the task is received by the master node. A load-balancing algorithm controlled by the master node monitors the clustered nodes and distributes the request to a playback node. The playback node reads the source media from a shared storage system and performs a quick lower-resolution transcode to stream to the client.
The node that has the least amount of system load receives the playback request. Subsequent playback requests continue in a “round-robin” style where the next most available node receives the following playback request.
The master node is treated differently in that 30% of its CPU capacity is always reserved for the duties performed by the master node alone, which include serving the UI, handling logins and user session information, and so on. When the system is under heavy usage, the master node will not take on additional playback jobs. All other nodes can reach 100% CPU saturation to service playback requests.
The following illustration shows a typical load-balanced cluster. The colored lines indicate that playback jobs are sent to different nodes in the cluster. They are not meant to indicate a particular client is bound to a particular node for its entire session, which may not be the case. Notice the master node’s bandwidth preservation.
The next illustration shows a cluster under heavy usage. As illustrated, CPU usage on the master node will not exceed a certain amount, even when the other nodes approach saturation.

Working with Linux

Red Hat Enterprise Linux (RHEL) is a commercially supported, open source version of the Linux operating system. If you have run DOS commands in Windows or have used the Mac terminal window, the Linux environment will be familiar to you. While many aspects of the MCS installation are automated, much of it requires entering commands and editing files using the Linux command-line.
Note: RHEL is not free, and Avid does not redistribute it or include it as part of the MCS installation. RHEL licensing and support options are covered in the MediaCentral Platform Services Hardware Guide.
Installing Linux
Installations on Avid qualified HP and Dell servers can use an express process involving a USB key and the Avid-supplied kickstart (ks.cfg) file. Kickstart files are commonly used in Linux installs to automate the OS installation. A kickstart file automatically answers questions posed by the Linux installer, for hardware known in advance.
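The directives below illustrate the kind of answers a kickstart file supplies. This is a generic sketch of common kickstart syntax, not the contents of the Avid-supplied ks.cfg:

# Illustrative kickstart directives (generic example)
lang en_US.UTF-8
keyboard us
timezone America/New_York
bootloader --location=mbr
clearpart --all --initlabel
autopart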
To further assist in the deployment of the Linux server, the MCS installation package includes a Windows-based tool called “ISO2USB”. This application is used to create a bootable USB drive from a RHEL installation DVD or image (.iso) file. When a user boots from this USB drive, RHEL and the MCS software packages are installed simultaneously with limited involvement from the user.
Note: If you are installing MediaCentral Platform Services on hardware that has not been qualified by Avid, see “Appendix A: Installing MCS on Non-HP / Dell Hardware for Interplay MAM” in the MCS Installation Guide.
Linux Concepts
Once RHEL is installed you can begin the work of configuring the server for MCS. This involves simple actions such as verifying the system time. It also involves more complex actions, such as verifying and modifying hardware settings related to networking, and editing files. Depending on the deployment, you may also be required to create logical volumes, configure port bonding, and perform other advanced actions.
Advance knowledge of the following Linux concepts is helpful:
root user: The root user (sometimes called the “super” user) is the Linux user with highest privileges. All steps in the installation are performed as root.
mounting: Linux does not recognize hard drives or removable devices such as USB keys unless they are formally mounted (see the example following this list).
files and directories: In Linux, everything is a file or a directory.
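For example, to mount a USB key to a directory under /media (the device name /dev/sdb1 is typical but varies; check the output of dmesg on your system):

# mkdir /media/usb
# mount /dev/sdb1 /media/usb
# umount /media/usb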
Key Linux Directories
Like other file systems, the Linux filesystem is represented as a hierarchical tree. In Linux, certain directories are reserved for particular purposes. The following table presents some of the key Linux directories encountered during the MCS installation and configuration:
Directory Description

/    The root of the filesystem.

/dev    Contains device files, including those identifying HD partitions, USB and CD drives, and so on. For example, sda1 represents the first partition (1) of the first hard disk (a).

/etc    Contains Linux system configuration files, including the filesystem table, fstab, which tells the operating system what volumes to mount at boot-time.

/etc/udev/rules.d    Contains rules used by the Linux device manager, including network script files where persistent names are assigned to network interfaces. In Linux, every network interface has a unique name. If a NIC card has four connection “ports”, for example, they might be named eth0 through eth3.

/etc/sysconfig/network-scripts    Contains, amongst other things, files providing Linux with boot-time network configuration information, including which NIC interfaces to bring up.

/media    Contains the mount points for detachable storage, such as USB keys. In Linux, volumes and removable storage must be mounted before they can be accessed.

/opt    Contains add-on application packages that are not a native part of Linux, including the MCS components.

/usr    Contains user binaries, including some MCS components.

/tmp    The directory for temporary files.

/var    Contains data files that change in size (variable data), including the MCS server log files.
Linux Command Line
The Linux command line is a powerful tool that lets you perform simple and complex actions alike with equal speed and ease. For example, entering the Linux list command, ls, at the root directory produces results similar to the following:
# ls
/bin /boot /dev /etc /lib /media /mnt /opt /sbin /srv /tmp /usr /var
In the above command, note the following:
The pound sign (#) indicates the presence of the Linux command prompt for a user with root level privileges (the highest privilege level). You do not type a pound sign.
A non-root level user would see a dollar sign ($) prompt instead.
Linux commands, paths, and file names are case-sensitive.
The following table presents a few of the more commonly used Linux commands:
Command Description

ls    Lists directory contents. Use the -l option (hyphen, lower-case L) for a detailed listing.

cd    Changes directories.

cat <filename>    Prints the contents of the named file to the screen.

clear    Clears the screen.

cp    Copies files and directories.

<tab>    Auto-completes the command based on the contents of the command line and the directory contents. For example, typing cd and the beginning of a directory name, then pressing the Tab key, fills in the remaining letters in the name.

|    “Pipes” the output from one command to the input of another. For example, to view the output of a command one screen at a time, pipe it into the more command, as in: ls | more

dmesg    Displays messages from the Linux kernel buffer. Useful to see if a device (such as a USB key) mounted correctly.

find    Searches for files. For example, the following use of the find command searches for <filename> on all local filesystems (avoiding network mounts): find / -mount -name <filename>

grep    Searches for the named regular expression. Often used in conjunction with the pipe, as in: ps | grep avid. This example displays all running processes that contain the word “avid”.

less    Similar to the cat command, but automatically breaks the output into screen-sized chunks, with navigation. Useful for viewing large amounts of text one screen at a time. For example: less <filename>

lvdisplay    Displays information about logical volumes.

man    Presents help (the “manual page”) for the named command.

mkdir    Creates a new directory.

| more    Piping (“|”) the output of a command through the more command breaks the output into screen-sized chunks. For example, to view the contents of a large directory one screen at a time, type the following: ls | more

mount / umount    Mounts and unmounts an external device to a directory. A device must be mounted before its contents can be accessed.

ps    Lists the running processes.

passwd    Changes the password for the logged-in user.

scp    Securely copies files between machines (across an ssh connection).

tail    Shows the last 10 (or n) lines of a file. For example: tail <filename>, tail -50 <filename>, or tail -f <filename>. The -f option keeps the tail command outputting appended data as the file grows; useful for monitoring log files.

udevadm    Requests device events from the Linux kernel. Can be used to replay device events and to create/update the 70-persistent-net.rules file. For example: udevadm trigger --action=add

vi    Starts a vi editing session.
Linux Text Editor (vi)
Linux features a powerful text editor called vi. To invoke vi, type the vi command followed by the target file at the command prompt.
# vi <filename>
vi operates in one of two modes, insert mode and command mode. Insert mode lets you perform text edits – insertion, deletion, etc. Command mode acts upon the file as a whole – for example, to save it or to quit without saving.
Press the “i” (as in Indigo) key to switch to insert mode.
Press the colon (“:”) key to switch to command mode.
The following table presents a few of the more useful vi command mode commands:
Key Press Description
: Prefix to commands in command mode
:wq Write file and quit vi (in command mode)
:q! Quit without writing (in command mode)
The following table presents a few of the more useful vi editing commands:
Key Press Description
i Insert text before the cursor, until you press <Esc>
I Insert text at beginning of current line
a Insert text after the cursor
A Insert text at end of current line
w Next word
b Previous word
Shift-g Move cursor to last line of the file
D Delete remainder of line
x Delete character under the cursor
dd Delete current line
yy “Yank” (copy) a whole line in command mode.
p Paste the yanked line in command mode.
<Esc> Turn off Insert mode and switch to command mode.
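As a quick worked example, the following sequence opens a hypothetical file named test.txt, makes a change, and saves it:

# vi test.txt

Press “i” to enter insert mode, type your changes, press <Esc> to return to command mode, then type :wq to write the file and quit.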
For two short and helpful vi tutorials, more complete reference information, and a vi FAQ, see:
Linux Usage Tips
The following table presents tips that will make it easier to work in RHEL:
Tip Description

Getting Help    For help with Linux commands, the Linux System Manual (“man” pages) is easily available by typing the man command followed by the item of interest. For example, for help with the ls command, type: man ls

Searching within a man page    To search for a string within a Linux man page, type the forward slash (“/”) followed by the string of interest. This can be helpful for finding a parameter of interest in a long man entry.

“command not found” error    A common experience for users new to the Linux command line is to receive a “command not found” error after invoking a command or script that is definitely in the current directory. Linux has a PATH variable, but for reasons of security, the current directory — “.” in Linux — is not included in it by default. Thus, to execute a command or script in a directory that is unknown to the PATH variable, you must enter the full path to the script from the root directory (“/”), or run it from the directory containing the script using dot-slash (“./”) notation, which tells Linux the command you are looking for is in the current directory.
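For example, to run a hypothetical script named myscript.sh that resides in the current directory:

# ./myscript.sh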

2 System Architecture

MediaCentral Platform Services comprises multiple systems, such as messaging systems, user management services, cluster management infrastructure, and so on. While many of these systems are independent, they are required to work together to create a cohesive environment. The following diagram shows how these systems operate at distinct layers of the architecture.
The following table explains the role of each layer:
System Architecture Layer Description

Client Applications    MCS clients are defined as any system that takes advantage of the MCS platform. Clients can range in complexity from a single MediaCentral UX session in a web browser to a complex system such as Interplay MAM. Additional client examples include Media Composer Cloud and MediaCentral UX on a mobile device.

Cluster Virtual IP Address    In a cluster, clients gain access to MCS via the cluster's virtual IP address. The dotted line in the illustration indicates that Corosync manages ownership of the cluster IP address.

Node IP Addresses    In a single server configuration, clients gain access to MCS via the server's IP address or host name. In a cluster configuration, each server maintains its own IP address and host name; however, the cluster is seen from the outside as a single machine with one IP address and host name.

Top-Level Services    At the top level of the service layer are the MCS services running on a single server or on the cluster master node only. These include:
IPC - Interplay Central core services (aka “middleware”)
UMS - User Management Services
USS - User Setting Service
ACS - Avid Common Service bus (aka “the bus”) for configuration and messaging (uses RabbitMQ)
The dotted line in the illustration indicates that the top-level services communicate with one another via ACS, which, in turn, uses RabbitMQ.
Additional Services - These services might not be active on all systems, as they require additional software or configuration:
Media Distribute services
Media Index services
Closed Captioning service

Load-Balancing Services    The mid-level service layer includes the services that run on all servers, regardless of a single server or cluster configuration. In a cluster, these services are load-balanced:
AvidConnectivityMon - Verifies that the “always on” cluster IP is reachable.
AvidAll - Encapsulates all other ICPS back-end services.
AvidICPS - Interplay Central Playback Service: transcodes and serves transcoded media.

Databases    The mid-level service layer also includes two databases:
PostgreSQL - Stores data for several MCS services (UMS, ACS, ICPS).
MongoDB - Stores data related to MCS messaging.
In a cluster configuration, these databases are synchronized between the master and slave nodes for failover readiness.

RabbitMQ Message Queue    RabbitMQ is the message broker (“task queue”) used by the MCS top-level services. In a cluster, RabbitMQ maintains its own independent clustering system; that is, RabbitMQ is not managed by Pacemaker. This allows RabbitMQ to continue delivering service requests to underlying services in the event of a failure.

DRBD    Distributed Replicated Block Device (DRBD) is responsible for volume mirroring. DRBD replicates and synchronizes the system disk's logical volume containing the PostgreSQL and MongoDB databases across the master and slave, for failover readiness. DRBD carries out replication at the block level.

Pacemaker    The cluster resource manager. A resource represents a service or a group of services that are monitored by Pacemaker. Pacemaker sees and manages resources, not individual services.

Corosync    The clustering infrastructure. By default, Corosync uses a multicast address to communicate with the other nodes in the cluster. However, configurations can be modified to use unicast addresses for networks that do not support multicast protocols.

File Systems    The standard Linux file system. This layer also conceptually includes GlusterFS, the Gluster “network file system” used for cache replication. GlusterFS performs its replication at the file level. Unlike the Linux file system, GlusterFS operates in “user space”, the advantage being that any GlusterFS malfunction does not bring down the system.

Hardware    At the lowest layer is the server hardware. This includes network adapters, disk drives, BIOS settings and more. The system disk is established in a RAID 1 (mirrored) configuration. This mirroring is distinct from the replication of a particular volume by DRBD: the RAID 1 mirror protects against disk failure, while the DRBD mirror protects against node failure. Many systems also include multiple disks in a RAID 5 configuration. These disks are configured as a cache for the low-resolution transcoded media that is streamed to the clients.
The following sections of this chapter provide additional detail on the system architecture layers.
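Each of these layers can also be checked quickly from the command line. The following commands are a sketch of typical health checks; each tool is covered in depth later in this guide:

# Pacemaker / Corosync cluster status
crm_mon

# RabbitMQ cluster status (RabbitMQ clusters independently of Pacemaker)
rabbitmqctl cluster_status

# DRBD replication status
drbd-overview

# Gluster cache replication status
gluster volume info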
Cluster Networking
Network communication in a cluster generally occurs over a single network interface, using multiple messaging protocols (unicast and multicast). Unicast messaging involves one host (node-1) sending a network packet to another specific host (node-2). If node-1 needs to send the same packet to additional hosts (node-3, node-4), multiple messages must be sent individually. With multicast messaging, a single packet can be sent to a group of hosts simultaneously. This can have advantages in some situations, but it lacks the precision of a point-to-point unicast message. The following IP addresses are required for an MCS cluster:
Required IP Addresses for an MCS Cluster:
Node IP Address (unicast)
Every node in an MCS system is assigned an IP address. While single-server MCS systems support assigning the node IP address through DHCP, clusters require a static IP address for each node. Network level firewalls and switches must allow the nodes to communicate with one another.
Virtual IP Address (unicast)
During the configuration process, a unicast IP address is assigned to the cluster. This IP is associated with a virtual hostname in the site’s DNS system. Clients use these virtual identifiers to communicate with the cluster. If a cluster node is offline, clients are still able to communicate with the cluster using the virtual host name or IP.
The virtual IP address is managed by the cluster in the form of the AvidClusterIP resource. It is owned by the master node and moves to the slave node in the event of a failover.
Cluster IP Address (multicast by default)
During the configuration process, a multicast IP address is also assigned to the cluster. The multicast address is used for inter-cluster communication. If cluster nodes are spread across multiple network switches, the switches must be configured to allow this multicast traffic. During the cluster configuration, a default multicast IP of 239.192.1.1 can be used as long as no other multicast traffic exists on the network. Alternatively, your IT department can assign a specific multicast address to avoid cross-communication between multicast groups. If your site is not configured to use multicast, a static IP address can be used. However, this requires additional configuration.
Reviewing the IP Addresses:
Once the cluster is configured, you can use the ifconfig command to review the network configuration on each node. The following is an example from a master node on HP hardware:
[root@wavd-mcs01 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:60:DD:45:15:21
          inet addr:192.168.10.51  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe40::222:dddd:ff13:1210/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:586964290 errors:0 dropped:0 overruns:0 frame:0
          TX packets:627585183 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:101260694799 (94.3 GiB)  TX bytes:174678891394 (162.6 GiB)
          Interrupt:103

eth0:cl0  Link encap:Ethernet  HWaddr 00:60:DD:45:15:21
          inet addr:192.168.10.50  Bcast:192.168.10.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:103

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:139012986 errors:0 dropped:0 overruns:0 frame:0
          TX packets:139012986 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:101973025015 (94.9 GiB)  TX bytes:101973025015 (94.9 GiB)
Note: HP servers identify network adapters with an “eth” prefix, whereas Dell servers identify the adapters as “em1”, “p1p1”, or “p2p1”.
The following is true for the example above:
“eth0” is the node IP address. This is the IP address of the server. Each node will have a listing for this. In this example, “192.168.10.51” is the unicast IP address for this node. This physical adapter has a state of “UP” which means the adapter is available and active.
“eth0:cl0” (or “cluster 0”) is the virtual IP address of the cluster. This will only appear on the master node, which owns the AvidClusterIP resource. In this example, “192.168.10.50” is the virtual unicast IP address for the cluster. This virtual adapter has a state of “UP”.
“lo” is the loopback adapter. Each node will have a listing for this. If external network cable(s) are disconnected, the loopback adapter is used by the system to communicate with itself. Without this virtual adapter, some basic system functions would be unable to communicate internally. This virtual adapter has a state of “UP”.
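From any machine on the network, you can verify that the cluster's virtual IP address is reachable with a simple ping, using the example address shown above:

# ping -c 4 192.168.10.50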
The multicast address used for inter-cluster communication does not appear within ifconfig. That address can be verified in the cluster configuration file (corosync.conf) located at /etc/corosync/.
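For example, the following command displays the multicast address on a system using the default Corosync configuration syntax:

# grep mcastaddr /etc/corosync/corosync.conf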

MCS Services, Resources and Cluster Databases

The following table lists the main MCS services and resources managed by Pacemaker, and where they run:
Service (service name / resource name)    Node 1 (Master)    Node 2 (Slave)    Node 3    Node n

IPC Core Services, “the middleware” (avid-interplay-central / AvidIPC)    ON    OFF (standby)    OFF (does not run)    OFF (does not run)
User Management Service (avid-ums / AvidUMS)    ON    OFF (standby)    OFF (does not run)    OFF (does not run)
UMS session cache service (redis / Redis)    ON    OFF (standby)    OFF (does not run)    OFF (does not run)
MCS User Setting Service (avid-uss / AvidUSS)    ON    OFF (standby)    OFF (does not run)    OFF (does not run)
Avid Common Services bus, “the bus” (acs-ctrl-core / AvidACS)    ON    OFF (standby)    OFF (does not run)    OFF (does not run)
Avid Monitor (avid-monitor / AvidClusterMon)    ON    OFF (standby)    OFF (does not run)    OFF (does not run)
Playback Service (avid-icps-manager / AvidICPSEverywhere)    ON    ON    ON    ON
Load-Balancing Services, “the back-end” (avid-all / AvidAllEverywhere)    ON    ON    ON    ON
Note the following:
All MCS services run on the Master node in the cluster.
Some MCS services are run on the Slave node in standby only. These services are started automatically during a failover.
Other services spawned by the Avid Common Service bus run on all nodes. The Playback Service (ICPS) is an example of such a service. It runs on all nodes for scalability (load-balancing supports many concurrent clients and/or large media requests) and high availability (service is always available).
The following table lists the bus-dependent services:
Services and Resources    Node 1 (Master)    Node 2 (Slave)    Node 3    Node n

AAF Generator* (avid-aaf-gen)    ON    ON    ON    ON
MCS Messaging (avid-acs-messenger & avid-acs-mail)    ON    ON    ON    ON

* The AAF Generator runs on all nodes. However, since it is used by the MCS Core Service (“the middleware”), it is only in operation on the master and slave nodes.
The following table lists the MCS databases, and where they run:
MCS Databases    Node 1 (Master)    Node 2 (Slave)    Node 3    Node n

ICS Database (PostgreSQL)    ON    OFF (standby)    OFF (does not run)    OFF (does not run)
Service Bus Messaging Database (MongoDB)    ON    OFF (standby)    OFF (does not run)    OFF (does not run)
RabbitMQ Database (Mnesia)    ON    ON    ON    ON