Interplay Engine Failover Guide for Windows Server 2012

Interplay® | Engine
Failover Guide for Windows Server 2012
November 2017
Legal Notices
Product specifications are subject to change without notice and do not represent a commitment on the part of Avid Technology, Inc.
This product is subject to the terms and conditions of a software license agreement provided with the software. The product may only be
used in accordance with the license agreement.
This product may be protected by one or more U.S. and non-U.S. patents. Details are available at www.avid.com/patents.
This guide is protected by copyright. This guide is for your personal use and may not be reproduced or distributed, in whole or in part,
without permission of Avid. Reasonable care has been taken in preparing this guide; however, it may contain omissions, technical
inaccuracies, or typographical errors. Avid Technology, Inc. disclaims liability for all losses incurred through the use of this document.
Product specifications are subject to change without notice.
Copyright © 2017 Avid Technology, Inc. and its licensors. All rights reserved.
The following disclaimer is required by Apple Computer, Inc.:
APPLE COMPUTER, INC. MAKES NO WARRANTIES WHATSOEVER, EITHER EXPRESS OR IMPLIED, REGARDING THIS
PRODUCT, INCLUDING WARRANTIES WITH RESPECT TO ITS MERCHANTABILITY OR ITS FITNESS FOR ANY PARTICULAR
PURPOSE. THE EXCLUSION OF IMPLIED WARRANTIES IS NOT PERMITTED BY SOME STATES. THE ABOVE EXCLUSION MAY
NOT APPLY TO YOU. THIS WARRANTY PROVIDES YOU WITH SPECIFIC LEGAL RIGHTS. THERE MAY BE OTHER RIGHTS THAT
YOU MAY HAVE WHICH VARY FROM STATE TO STATE.
The following disclaimer is required by Sam Leffler and Silicon Graphics, Inc. for the use of their TIFF library:
Copyright © 1988–1997 Sam Leffler
Copyright © 1991–1997 Silicon Graphics, Inc.
Permission to use, copy, modify, distribute, and sell this software [i.e., the TIFF library] and its documentation for any purpose is hereby
granted without fee, provided that (i) the above copyright notices and this permission notice appear in all copies of the software and
related documentation, and (ii) the names of Sam Leffler and Silicon Graphics may not be used in any advertising or publicity relating to
the software without the specific, prior written permission of Sam Leffler and Silicon Graphics.
THE SOFTWARE IS PROVIDED “AS-IS” AND WITHOUT WARRANTY OF ANY KIND, EXPRESS, IMPLIED OR OTHERWISE,
INCLUDING WITHOUT LIMITATION, ANY WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
IN NO EVENT SHALL SAM LEFFLER OR SILICON GRAPHICS BE LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT OR
CONSEQUENTIAL DAMAGES OF ANY KIND, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
PROFITS, WHETHER OR NOT ADVISED OF THE POSSIBILITY OF DAMAGE, AND ON ANY THEORY OF LIABILITY, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
The following disclaimer is required by the Independent JPEG Group:
This software is based in part on the work of the Independent JPEG Group.
This Software may contain components licensed under the following conditions:
Copyright (c) 1989 The Regents of the University of California. All rights reserved.
Redistribution and use in source and binary forms are permitted provided that the above copyright notice and this paragraph are
duplicated in all such forms and that any documentation, advertising materials, and other materials related to such distribution and use
acknowledge that the software was developed by the University of California, Berkeley. The name of the University may not be used to
endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED ``AS
IS'' AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
Copyright (C) 1989, 1991 by Jef Poskanzer.
Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in
supporting documentation. This software is provided "as is" without express or implied warranty.
Copyright 1995, Trinity College Computing Center. Written by David Chappell.
Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in
supporting documentation. This software is provided "as is" without express or implied warranty.
Copyright 1996 Daniel Dardailler.
Permission to use, copy, modify, distribute, and sell this software for any purpose is hereby granted without fee, provided that the above
copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation,
and that the name of Daniel Dardailler not be used in advertising or publicity pertaining to distribution of the software without specific,
written prior permission. Daniel Dardailler makes no representations about the suitability of this software for any purpose. It is provided "as
is" without express or implied warranty.
Modifications Copyright 1999 Matt Koss, under the same license as above.
Copyright (c) 1991 by AT&T.
Permission to use, copy, modify, and distribute this software for any purpose without fee is hereby granted, provided that this entire notice
is included in all copies of any software which is or includes a copy or modification of this software and in all copies of the supporting
documentation for such software.
THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR IMPLIED WARRANTY. IN PARTICULAR, NEITHER
THE AUTHOR NOR AT&T MAKES ANY REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE MERCHANTABILITY
OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE.
This product includes software developed by the University of California, Berkeley and its contributors.
The following disclaimer is required by Paradigm Matrix:
Portions of this software licensed from Paradigm Matrix.
The following disclaimer is required by Ray Sauers Associates, Inc.:
“Install-It” is licensed from Ray Sauers Associates, Inc. End-User is prohibited from taking any action to derive a source code equivalent of
“Install-It,” including by reverse assembly or reverse compilation, Ray Sauers Associates, Inc. shall in no event be liable for any damages
resulting from reseller’s failure to perform reseller’s obligation; or any damages arising from use or operation of reseller’s products or the
software; or any other damages, including but not limited to, incidental, direct, indirect, special or consequential Damages including lost
profits, or damages resulting from loss of use or inability to use reseller’s products or the software for any reason including copyright or
patent infringement, or lost data, even if Ray Sauers Associates has been advised, knew or should have known of the possibility of such
damages.
The following disclaimer is required by Videomedia, Inc.:
“Videomedia, Inc. makes no warranties whatsoever, either express or implied, regarding this product, including warranties with respect to
its merchantability or its fitness for any particular purpose.”
“This software contains V-LAN ver. 3.0 Command Protocols which communicate with V-LAN ver. 3.0 products developed by Videomedia,
Inc. and V-LAN ver. 3.0 compatible products developed by third parties under license from Videomedia, Inc. Use of this software will allow
“frame accurate” editing control of applicable videotape recorder decks, videodisc recorders/players and the like.”
The following disclaimer is required by Altura Software, Inc. for the use of its Mac2Win software and Sample Source
Code:
©1993–1998 Altura Software, Inc.
The following disclaimer is required by Interplay Entertainment Corp.:
The “Interplay” name is used with the permission of Interplay Entertainment Corp., which bears no responsibility for Avid products.
This product includes portions of the Alloy Look & Feel software from Incors GmbH.
This product includes software developed by the Apache Software Foundation (http://www.apache.org/).
© DevelopMentor
This product may include the JCifs library, for which the following notice applies:
JCifs © Copyright 2004, The JCIFS Project, is licensed under LGPL (http://jcifs.samba.org/). See the LGPL.txt file in the Third Party
Software directory on the installation CD.
Avid Interplay contains components licensed from LavanTech. These components may only be used as part of and in connection with Avid
Interplay.
Attn. Government User(s). Restricted Rights Legend
U.S. GOVERNMENT RESTRICTED RIGHTS. This Software and its documentation are “commercial computer software” or “commercial
computer software documentation.” In the event that such Software or documentation is acquired by or on behalf of a unit or agency of the
U.S. Government, all rights with respect to this Software and documentation are subject to the terms of the License Agreement, pursuant
to FAR §12.212(a) and/or DFARS §227.7202-1(a), as applicable.
Trademarks
Avid, the Avid Logo, Avid Everywhere, Avid DNXHD, Avid DNXHR, Avid NEXIS, AirSpeed, Eleven, EUCON, Interplay, iNEWS, ISIS, Mbox,
MediaCentral, Media Composer, NewsCutter, Pro Tools, ProSet and RealSet, Maestro, PlayMaker, Sibelius, Symphony, and all related
product names and logos, are registered or unregistered trademarks of Avid Technology, Inc. in the United States and/or other countries.
The Interplay name is used with the permission of the Interplay Entertainment Corp. which bears no responsibility for Avid products. All
other trademarks are the property of their respective owners. For a full list of Avid trademarks, see: http://www.avid.com/US/about-avid/
legal-notices/trademarks.
Footage
Eco Challenge Morocco — Courtesy of Discovery Communications, Inc.
News material provided by WFTV Television Inc.
Ice Island — Courtesy of Kurtis Productions, Ltd.
Interplay | Engine Failover Guide for Windows Server 2012 • Created November 1, 2017 • This document is distributed
by Avid in online (electronic) form only, and is not available for purchase in printed form.
Contents
Using This Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Revision History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Symbols and Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
If You Need Help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Viewing Help and Documentation on the Interplay Production Portal . . . . . . . . . . . . . . . . . . . 9
Avid Training Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Chapter 1   Automatic Server Failover Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Server Failover Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
How Server Failover Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Server Failover Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Server Failover Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Installing the Failover Hardware Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Slot Locations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Failover Cluster Connections: Redundant-Switch Configuration . . . . . . . . . . . . . . . . . . . 17
Failover Cluster Connections, Dual-Connected Configuration . . . . . . . . . . . . . . . . . . . . 20
HP MSA 2040 Reference Information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
HP MSA 2040 Storage Management Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
HP MSA 2040 Command Line Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
HP MSA 2040 Support Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Clustering Technology and Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Chapter 2   Creating a Microsoft Failover Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Server Failover Installation Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Before You Begin the Server Failover Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Requirements for Domain User Accounts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
List of IP Addresses and Network Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Active Directory and DNS Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Preparing the Server for the Failover Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Configuring the ATTO Fibre Channel Card . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Changing Windows Server Settings on Each Node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Configuring Local Software Firewalls. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Renaming the Local Area Network Interface on Each Node . . . . . . . . . . . . . . . . . . . . . . 37
Configuring the Private Network Adapter on Each Node . . . . . . . . . . . . . . . . . . . . . . . . . 40
Configuring the Binding Order Networks on Each Node . . . . . . . . . . . . . . . . . . . . . . . . . 42
Configuring the Public Network Adapter on Each Node . . . . . . . . . . . . . . . . . . . . . . . . . 44
Configuring the Cluster Shared-Storage RAID Disks on Each Node. . . . . . . . . . . . . . . . 44
Configuring the Failover Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Joining Both Servers to the Active Directory Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Installing the Failover Clustering Features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Creating the Failover Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Renaming the Cluster Networks in the Failover Cluster Manager . . . . . . . . . . . . . . . . . . 62
Renaming the Quorum Disk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Removing Disks Other Than the Quorum Disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Adding a Second IP Address to the Cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Testing the Cluster Installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Chapter 3   Installing the Interplay | Engine for a Failover Cluster . . . . . . . . . . . . . . . . 75
Disabling Any Web Servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Installing the Interplay | Engine on the First Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Preparation for Installing on the First Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Bringing the Shared Database Drive Online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Starting the Installation and Accepting the License Agreement. . . . . . . . . . . . . . . . . . . . 78
Installing the Interplay | Engine Using Custom Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Checking the Status of the Cluster Role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Creating the Database Share Manually . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Adding a Second IP Address (Dual-Connected Configuration) . . . . . . . . . . . . . . . . . . . . 93
Changing the Resource Name of the Avid Workgroup Server. . . . . . . . . . . . . . . . . . . . . 97
Installing the Interplay | Engine on the Second Node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Bringing the Interplay | Engine Online . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
After Installing the Interplay | Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Creating an Interplay | Production Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Testing the Complete Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Installing a Permanent License. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Updating a Clustered Installation (Rolling Upgrade). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Uninstalling the Interplay | Engine on a Clustered System . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Chapter 4   Automatic Server Failover Tips and Rules . . . . . . . . . . . . . . . . . . . . . . . . 107
Chapter 5   Expanding the Database Volume for an Interplay Engine Cluster . . . . . . 109
Before You Begin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Task 1: Add Drives to the MSA Storage Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Task 2: Expand the Databases Volume Using the HP SMU (Version 2) . . . . . . . . . . . . . . . 110
Task 2: Expand the Databases Volume Using the HP SMU (Version 3) . . . . . . . . . . . . . . . 115
Task 3: Extend the Databases Volume in Windows Disk Management . . . . . . . . . . . . . . . . 121
Chapter 6   Adding Storage for File Assets for an Interplay Engine Cluster . . . . . . . . 124
Before You Begin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
Task 1: Add Drives to the MSA Storage Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Task 2: Create a Disk and Volume Using the HP SMU V3. . . . . . . . . . . . . . . . . . . . . . . . . . 126
Task 3: Initialize the Volume in Windows Disk Management . . . . . . . . . . . . . . . . . . . . . . . . 132
Task 4: Add the Disk to the Failover Cluster Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Task 5: Copy the File Assets to the New Drive. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Task 6: Mount the FileAssets Partition in the _Master Folder . . . . . . . . . . . . . . . . . . . . . . . 139
Task 7: Create Cluster Dependencies for the New Disk. . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Using This Guide
Congratulations on the purchase of Interplay | Production, a powerful system for managing media in
a shared storage environment.
This guide is intended for all Interplay Production administrators who are responsible for installing,
configuring, and maintaining an Interplay | Engine with the Automatic Server Failover module
integrated. This guide is for Interplay Engine clusters that use Windows Server 2012 R2.
Revision History
November 2017
•	Added reference to Avid Knowledge Base article in “Requirements for Domain User Accounts” on page 28.
•	Added “Expanding the Database Volume for an Interplay Engine Cluster” on page 109.
•	Added “Adding Storage for File Assets for an Interplay Engine Cluster” on page 124.
•	Updated references to shared-storage.

September 2016
•	Updated to correct an error in “Creating the Database Share Manually” on page 92. The name of the File Server was simplified in Windows Server 2012.

June 2016
•	Updated to specify a single Interplay Engine license for both nodes of a cluster.

December 2015
•	Updated “Server Failover Requirements” on page 15 (only ATTO FC adapter qualified) and “Downloading the ATTO Driver and Configuration Tool” on page 33.

July 2015
•	First publication
Symbols and Conventions
Avid documentation uses the following symbols and conventions:
Symbol or Convention	Meaning or Action

n	A note provides important related information, reminders, recommendations, and strong suggestions.

c	A caution means that a specific action you take could cause harm to your computer or cause you to lose data.

w	A warning describes an action that could cause you physical harm. Follow the guidelines in this document or on the unit itself when handling electrical equipment.

>	This symbol indicates menu commands (and subcommands) in the order you select them. For example, File > Import means to open the File menu and then select the Import command.

t	This symbol indicates a single-step procedure. Multiple arrows in a list indicate that you perform one of the actions listed.

(Windows), (Windows only), (Macintosh), or (Macintosh only)	This text indicates that the information applies only to the specified operating system, either Windows or Macintosh OS X.

Bold font	Bold font is primarily used in task instructions to identify user interface items and keyboard sequences.

Italic font	Italic font is used to emphasize certain words and to indicate variables.

Courier Bold font	Courier Bold font identifies text that you type.

Ctrl+key or mouse action	Press and hold the first key while you press the last key or perform the mouse action. For example, Command+Option+C or Ctrl+drag.

| (pipe character)	The pipe character is used in some Avid product names, such as Interplay | Production. In this document, the pipe is used in product names when they are in headings or at their first use in text.
If You Need Help
If you are having trouble using your Avid product:
1. Retry the action, carefully following the instructions given for that task in this guide. It is
especially important to check each step of your workflow.
2. Check the latest information that might have become available after the documentation was
published. You should always check online for the most up-to-date release notes or ReadMe
because the online version is updated whenever new information becomes available. To view
these online versions, select ReadMe from the Help menu, or visit the Knowledge Base at
www.avid.com/support.
3. Check the documentation that came with your Avid application or your hardware for
maintenance or hardware-related issues.
4. Visit the online Knowledge Base at www.avid.com/support. Online services are available 24
hours per day, 7 days per week. Search this online Knowledge Base to find answers, to view
error messages, to access troubleshooting tips, to download updates, and to read or join online
message-board discussions.
Viewing Help and Documentation on the Interplay Production Portal
You can quickly access the Interplay Production Help, links to the PDF versions of the
Interplay Production guides, and other useful links by viewing the Interplay Production User
Information Center on the Interplay Production Portal. The Interplay Production Portal is a Web site
that runs on the Interplay Production Engine.
You can access the Interplay Production User Information Center through a browser from any system
in the Interplay Production environment. You can also access it through the Help menu in
Interplay | Access and the Interplay | Administrator.
The Interplay Production Help combines information from all Interplay Production guides in one
Help system. It includes a combined index and a full-featured search. From the Interplay Production
Portal, you can run the Help in a browser or download a compiled (.chm) version for use on other
systems, such as a laptop.
To open the Interplay Production User Information Center through a browser:
1. Type the following line in a Web browser:
http://Interplay_Production_Engine_name
For Interplay_Production_Engine_name substitute the name of the computer running the
Interplay Production Engine software. For example, the following line opens the portal Web
page on a system named docwg:
http://docwg
2. Click the “Interplay Production User Information Center” link to access the Interplay Production
User Information Center Web page.
To open the Interplay Production User Information Center from Interplay Access or the
Interplay Administrator:
t	Select Help > Documentation Website on Server.
Avid Training Services
Avid makes lifelong learning, career advancement, and personal development easy and convenient.
Avid understands that the knowledge you need to differentiate yourself is always changing, and Avid
continually updates course content and offers new training delivery methods that accommodate your
pressured and competitive work environment.
For information on courses/schedules, training centers, certifications, courseware, and books, please
visit www.avid.com/support and follow the Training links, or call Avid Sales at 800-949-AVID
(800-949-2843).
1 Automatic Server Failover Introduction
This chapter covers the following topics:
•	Server Failover Overview
•	How Server Failover Works
•	Installing the Failover Hardware Components
•	HP MSA 2040 Reference Information
•	Clustering Technology and Terminology
Server Failover Overview
The automatic server failover mechanism in Avid Interplay allows client access to the Interplay Engine in the event of failures or during maintenance, with minimal impact on availability. A failover server is activated in the event of application, operating system, or hardware failures. The server can be configured to notify the administrator about such failures using email.
The Interplay implementation of server failover uses Microsoft® clustering technology. For
background information on clustering technology and links to Microsoft clustering information, see
“Clustering Technology and Terminology” on page 24.
c
Additional monitoring of the hardware and software components of a high-availability solution
is always required. Avid delivers Interplay preconfigured, but additional attention on the
customer side is required to prevent outage (for example, when a private network fails, RAID
disk fails, or a power supply loses power). In a mission critical environment, monitoring tools
and tasks are needed to be sure there are no silent outages. If another (unmonitored)
component fails, only an event is generated, and while this does not interrupt availability, it
might go unnoticed and lead to problems. Additional software reporting such issues to the IT
administration lowers downtime risk.
The failover cluster is a system made up of two server nodes and a shared-storage device connected over Fibre Channel. Both nodes must be deployed in the same location because they share access to the storage device. The cluster uses the concept of a “virtual server” to specify groups of resources that fail over together. This virtual server is referred to as a “cluster application” in the failover cluster user interface.
The following diagram illustrates the components of a cluster group, including sample IP addresses.
For a list of required IP addresses and node names, see “List of IP Addresses and Network Names”
on page 30.
[Figure: Cluster group components, with sample IP addresses. The cluster group contains the failover cluster (11.22.33.200) and the Interplay Server cluster application (11.22.33.201) as resource groups of clustered services. Node #1 (Intranet: 11.22.33.44, Private: 10.10.10.10) and Node #2 (Intranet: 11.22.33.45, Private: 10.10.10.11) are connected to the intranet and to each other over the private network, and to the disk resources (the shared Quorum and Database disks) over Fibre Channel.]

n
If you are already using clusters, the Avid Interplay Engine will not interfere with your current setup.
How Server Failover Works
Server failover works on three different levels:
•	Failover in case of hardware failure
•	Failover in case of network failure
•	Failover in case of software failure
Hardware Failover Process
When the Microsoft Cluster service is running on both systems and the server is deployed in cluster
mode, the Interplay Engine and its accompanying services are exposed to users as a virtual server. To
clients, connecting to the clustered virtual Interplay Engine appears to be the same process as
connecting to a single, physical machine. The user or client application does not know which node is
actually hosting the virtual server.
When the server is online, the resource monitor regularly checks its availability and automatically
restarts the server or initiates a failover to the other node if a failure is detected. The exact behavior
can be configured using the Failover Cluster Manager. Because clients connect to the virtual network
name and IP address, which are also taken over by the failover node, the impact on the availability of
the server is minimal.
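If you want to verify how the clustered role is configured once the cluster is running, you can query it from an elevated Windows PowerShell prompt on either node. The following is a minimal sketch using the FailoverClusters module that ships with Windows Server 2012 R2; the role name “Avid Workgroup Server” matches the resource naming used later in this guide, but it is shown here only as an example, so confirm the actual names on your system with Get-ClusterGroup.

    # Import the module installed with the Failover Clustering feature.
    Import-Module FailoverClusters

    # List all clustered roles (resource groups) and the node that currently owns each one.
    Get-ClusterGroup

    # Show the resources in the Interplay Engine role and their state.
    # The role name is an example; verify it with Get-ClusterGroup.
    Get-ClusterGroup -Name "Avid Workgroup Server" | Get-ClusterResource

    # Display the failover policy for the role (failover threshold and period, failback behavior).
    Get-ClusterGroup -Name "Avid Workgroup Server" |
        Format-List Name, OwnerNode, State, FailoverThreshold, FailoverPeriod, AutoFailbackType

The same settings can be viewed and changed in the Failover Cluster Manager, which is the procedure this guide describes.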
Network Failover Process
Avid supports a configuration that uses connections to two public networks (VLAN 10 and VLAN
20) on a single switch. The cluster monitors both networks. If one fails, the cluster application stays online and can still be reached over the other network. If the switch fails, both networks monitored by the cluster will fail simultaneously and the cluster application will go offline.
For a high degree of protection against network outages, Avid supports a configuration that uses two
network switches, each connected to a shared primary network (VLAN 30) and protected by a
failover protocol. If one network switch fails, the virtual server remains online through the other
VLAN 30 network and switch.
These configurations are described in the next section.
Windows Server 2012
This document describes a cluster configuration that uses the cluster application supplied with
Windows Server 2012 R2 Standard. For information about Microsoft clustering, see the Windows
Server 2012 R2 Failover Clustering site:
https://technet.microsoft.com/en-us/library/hh831579.aspx
Installation of the Interplay Engine and Interplay Archive Engine now supports Windows Server
2012 R2 Standard, but otherwise has not changed.
Server Failover Configurations
The following sections describe two supported configurations for integrating a failover cluster into an
existing network:
•	A cluster in an Avid ISIS® environment that is integrated into the intranet through two layer-3 switches (VLAN 30 in Zone 3). This “redundant-switch” configuration protects against both hardware and network outages and thus provides a higher level of protection than the dual-connected configuration.
•	A cluster in an Avid ISIS environment that is integrated into the intranet through two public networks (VLAN 10 and VLAN 20 in Zone 1). This “dual-connected” configuration protects against hardware outages and network outages. If one network fails, the cluster application stays online and can be reached over the other network.
These configurations refer to multiple virtual networks (VLANs) that are used with ISIS 7000/7500 shared-storage systems. ISIS 5000/5500 and Avid NEXIS® systems typically do not use multiple VLANs. You can adapt these configurations for use in ISIS 5000/5500 or Avid NEXIS environments.
Redundant-Switch Configuration
The following diagram illustrates the failover cluster architecture for an Avid ISIS environment that
uses two layer-3 switches. These switches are configured for failover protection through either HSRP
(Hot Standby Router Protocol) or VRRP (Virtual Router Redundancy Protocol). The cluster nodes
are connected to one subnet (VLAN 30), each through a different network switch. If one of the
VLAN 30 networks fails, the virtual server remains online through the other VLAN 30 network and
switch.
n
This guide does not describe how to configure redundant switches for an Avid shared-storage
network. Configuration information is included in the ISIS Qualified Switch Reference Guide and
the Avid NEXIS Network and Switch Guide, which are available for download from the Avid
Customer Support Knowledge Base at www.avid.com/onlinesupport.
Two-Node Cluster in an Avid ISIS Environment (Redundant-Switch Configuration)
[Figure: Interplay Engine cluster node 1 and node 2 are each connected by 1 GB Ethernet to a different Avid network switch (switch 1 and switch 2, both running VRRP or HSRP on VLAN 30) serving the Interplay editing clients. The nodes are connected to each other by the private network for the heartbeat and to the shared cluster-storage RAID array by Fibre Channel.]
The following table describes what happens in the redundant-switch configuration as a result of an outage:

Type of Outage: Hardware (CPU, network adapter, memory, cable, power supply) fails
Result: The cluster detects the outage and triggers failover to the remaining node. The Interplay Engine is still accessible.

Type of Outage: Network switch 1 (VLAN 30) fails
Result: External switches running VRRP/HSRP detect the outage and make the gateway available as needed. The Interplay Engine is still accessible.

Type of Outage: Network switch 2 (VLAN 30) fails
Result: External switches running VRRP/HSRP detect the outage and make the gateway available as needed. The Interplay Engine is still accessible.
Dual-Connected Configuration
The following diagram illustrates the failover cluster architecture for an Avid ISIS environment. In
this environment, each cluster node is “dual-connected” to the network switch: one network interface
is connected to the VLAN 10 subnet and the other is connected to the VLAN 20 subnet. If one of the
subnets fails, the virtual server remains online through the other subnet.
Two-Node Cluster in an Avid ISIS Environment (Dual-Connected Configuration)
[Figure: Each Interplay Engine cluster node has 1 GB Ethernet connections to both the VLAN 10 and VLAN 20 subnets on Avid network switch 1 (running VRRP or HSRP), which serves the Interplay editing clients. The nodes are connected to each other by the private network for the heartbeat and to the shared cluster-storage RAID array by Fibre Channel.]
The following table describes what happens in the dual-connected configuration as a result of an outage:

Type of Outage: Hardware (CPU, network adapter, memory, cable, power supply) fails
Result: The cluster detects the outage and triggers failover to the remaining node. The Interplay Engine is still accessible.

Type of Outage: Left ISIS VLAN (VLAN 10) fails
Result: The Interplay Engine is still accessible through the right network.

Type of Outage: Right ISIS VLAN (VLAN 20) fails
Result: The Interplay Engine is still accessible through the left network.
Server Failover Requirements
You should make sure the server failover system meets the following requirements.
Hardware
The automatic server failover system was qualified with the following hardware:
•	Two servers functioning as nodes in a failover cluster. Avid has qualified a Dell™ server and an HP® server with minimum specifications, their equivalent, or better. See Interplay | Production Dell and HP Server Support, which is available from the Avid Knowledge Base.
	On-board network interface connectors (NICs) for these servers are qualified. There is no requirement for an Intel network card.
•	Two Fibre Channel host adapters (one for each server in the cluster).
	The ATTO Celerity FC-81EN is qualified for these servers. Other Fibre Channel adapters might work but have not been qualified. Before using another Fibre Channel adapter, contact the vendor to check compatibility with the server host, the storage area network (SAN), and most importantly, a Microsoft failover cluster.
•	One of the following:
	-	One Infortrend® S12F-R1440 storage array. For more information, see the Infortrend EonStor® DS S12F-R1440 Installation and Hardware Reference Manual.
	-	One HP MSA 2040 SAN storage array. For more information, see the HP MSA 2040 Quick Start Instructions and other HP MSA documentation, available here: http://www.hp.com/support/msa2040/manuals
		Also see “HP MSA 2040 Reference Information” on page 23.
The servers in a cluster are connected using one or more cluster shared-storage buses and one or more physically independent networks acting as a heartbeat.
Server Software
The automatic failover system was qualified on the following operating system:
•	Windows Server 2012 R2 Standard
Starting with Interplay Production v3.3, new licenses for Interplay components are managed through
software activation IDs. One license is used for both nodes in an Interplay Engine failover cluster.
For installation information, see “Installing a Permanent License” on page 102.
Space Requirements
The default disk configuration for the shared RAID array is as follows:

Infortrend S12F-R1440
•	Disk 1 (Quorum disk): 10 GB
•	Disk 2 (not used): 10 GB
•	Disk 3 (Database disk): 814 GB or larger

HP MSA 2040
•	Disk 1 (Quorum disk): 10 GB
•	Disk 2 (Database disk): 870 GB or larger
Antivirus Software
You can run antivirus software on a cluster, if the antivirus software is cluster-aware. For information
about cluster-aware versions of your antivirus software, contact the antivirus vendor. If you are
running antivirus software on a cluster, make sure you exclude these locations from the virus
scanning: Q:\ (Quorum disk), C:\Windows\Cluster, and S:\Workgroup_Databases (database).
See also “Configuring Local Software Firewalls” on page 37.
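As an illustration only, the following sketch shows how these exclusions could be set if your cluster-aware antivirus is from Microsoft's Defender/Endpoint Protection family, which exposes the Set-MpPreference cmdlet; most third-party products configure exclusions through their own management console instead. Run the commands on both nodes. The paths are the ones listed above.

    # Exclude the cluster and Interplay database locations from real-time scanning.
    # Applies only to antivirus products that support the Set-MpPreference cmdlet;
    # use the equivalent exclusion settings in other vendors' consoles.
    Set-MpPreference -ExclusionPath @(
        "Q:\",                      # Quorum disk
        "C:\Windows\Cluster",       # Cluster service files
        "S:\Workgroup_Databases"    # Interplay database disk
    )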
Functions You Need To Know
Before you set up a cluster in an Avid Interplay environment, you should be familiar with the
following functions:
•	Microsoft Windows Active Directory domains and domain users
•	Microsoft Windows clustering for Windows Server 2012 R2 Standard (see “Clustering Technology and Terminology” on page 24)
•	Disk configuration (format, partition, naming)
•	Network configuration
For information about Avid Networks and Interplay Production, see “Network Requirements for ISIS/NEXIS” on the Customer Support Knowledge Base at http://avid.force.com/pkb/articles/en_US/compatibility/en244197.
Installing the Failover Hardware Components
The following topics provide information about installing the failover hardware components for the
supported configurations:
•	“Slot Locations” on page 16
•	“Failover Cluster Connections: Redundant-Switch Configuration” on page 17
•	“Failover Cluster Connections, Dual-Connected Configuration” on page 20
Slot Locations
Each server requires a fibre channel host adapter to connect to the shared-storage RAID array.
Dell PowerEdge R630
The Dell PowerEdge R630 currently supplied by Avid includes three PCIe slots. Avid recommends
installing the fibre channel host adapter in slot 2, as shown in the following illustration.
Dell PowerEdge R630 (Rear View)
Adapter card in PCIe slot 2
n
The Dell system is designed to detect what type of card is in each slot and to negotiate optimum
throughput. As a result, using slot 2 for the fibre channel host adapter is recommended but not
required. For more information, see the Dell PowerEdge R630 Owner’s Manual.
HP ProLiant DL360 Gen 9
The HP ProLiant DL 360 Gen 9 includes two or three slots. Avid recommends installing the fibre
channel host adapter in slot 2, as shown in the following illustration.
HP ProLiant DL360 Gen 9 (Rear View)
Adapter card in PCIe slot 2
Failover Cluster Connections: Redundant-Switch Configuration
Make the following cable connections to add a failover cluster to an Avid ISIS environment, using
the redundant-switch configuration:
•	First cluster node:
	-	Network interface connector 2 to layer-3 switch 1 (VLAN 30)
	-	Network interface connector 3 to network interface connector 3 on the second cluster node (private network for heartbeat)
	-	Fibre Channel connector on the ATTO Celerity FC-81EN card to Fibre Channel connector Port 1 (top left) on the Infortrend RAID array or the HP MSA RAID array.
•	Second cluster node:
	-	Network interface connector 2 to layer-3 switch 2 (VLAN 30)
	-	Network interface connector 3 to the bottom-left network interface connector on the first cluster node (private network for heartbeat)
	-	Fibre Channel connector on the ATTO Celerity FC-81EN card to the Fibre Channel connector Port 2 (bottom, second from left) on the Infortrend RAID array or the HP MSA RAID array.
The following illustrations show these connections. The illustrations use the Dell PowerEdge R630
as cluster nodes.
n
This configuration refers to a virtual network (VLAN) that is used with ISIS 7000/7500 shared-storage systems. ISIS 5000/5500 and Avid NEXIS systems typically do not use multiple VLANs. You can adapt this configuration for use in ISIS 5000/5500 or Avid NEXIS environments.
Failover Cluster Connections: Avid ISIS, Redundant-Switch Configuration, Infortrend
[Figure: Rear-panel views of both Dell PowerEdge R630 cluster nodes and the Infortrend RAID array. Node 1 connects by 1 GB Ethernet to Avid network switch 1 and node 2 to Avid network switch 2; the nodes are connected to each other by Ethernet for the private network; each node connects by Fibre Channel to the RAID array.]
Failover Cluster Connections: Avid ISIS, Redundant-Switch Configuration, HP MSA
[Figure: Rear-panel views of both Dell PowerEdge R630 cluster nodes and the HP MSA RAID array. Node 1 connects by 1 GB Ethernet to Avid network switch 1 and node 2 to Avid network switch 2; the nodes are connected to each other by Ethernet for the private network; each node connects by Fibre Channel to the RAID array.]
Failover Cluster Connections, Dual-Connected Configuration
Make the following cable connections to add a failover cluster to an Avid ISIS environment as a dual-connected configuration:
•	First cluster node:
	-	Network interface connector 2 to the ISIS left subnet (VLAN 10 public network)
	-	Network interface connector 4 to the ISIS right subnet (VLAN 20 public network)
	-	Network interface connector 3 to the bottom-left network interface connector on the second cluster node (private network for heartbeat)
	-	Fibre Channel connector on the ATTO Celerity FC-81EN card to Fibre Channel connector Port 1 (top left) on the Infortrend RAID array or the HP MSA RAID array.
•	Second cluster node:
	-	Network interface connector 2 to the ISIS left subnet (VLAN 10 public network)
	-	Network interface connector 4 to the ISIS right subnet (VLAN 20 public network)
	-	Network interface connector 3 to the bottom-left network interface connector on the first cluster node (private network for heartbeat)
	-	Fibre Channel connector on the ATTO Celerity FC-81EN card to Fibre Channel connector Port 2 (bottom, second from left) on the Infortrend RAID array or the HP MSA RAID array.
The following illustrations show these connections. The illustrations use the Dell PowerEdge R630
as cluster nodes.
n
This configuration refers to virtual networks (VLANs) that are used with ISIS 7000/7500 shared-storage systems. ISIS 5000/5500 and Avid NEXIS systems typically do not use multiple VLANs. You can adapt this configuration for use in ISIS 5000/5500 or Avid NEXIS environments.
Failover Cluster Connections, Avid ISIS, Dual-Connected Configuration, Infortrend
[Figure: Rear-panel views of both Dell PowerEdge R630 cluster nodes and the Infortrend RAID array. Each node connects by 1 GB Ethernet to the ISIS left subnet and the ISIS right subnet; the nodes are connected to each other by Ethernet for the private network; each node connects by Fibre Channel to the RAID array.]
Failover Cluster Connections, Avid ISIS, Dual-Connected Configuration, HP MSA
[Figure: Rear-panel views of both Dell PowerEdge R630 cluster nodes and the HP MSA RAID array. Each node connects by 1 GB Ethernet to the ISIS left subnet and the ISIS right subnet; the nodes are connected to each other by Ethernet for the private network; each node connects by Fibre Channel to the RAID array.]
HP MSA 2040 Reference Information
The following topics provide information about components of the HP MSA 2040 with references to
additional documentation.
HP MSA 2040 Storage Management Utility
The HP MSA 2040 is packaged with a Storage Management Utility (SMU). The SMU is a browser-based
controller in the HP MSA 2040 has a default IP address and host name for connecting over a
network.
Default IP Settings
•	Management Port IP Address:
	-	10.0.0.2 (controller A)
	-	10.0.0.3 (controller B)
•	IP Subnet Mask: 255.255.255.0
•	Gateway IP Address: 10.0.0.1
You can change these settings to match local networks through the SMU, the Command Line Interface (CLI), or the MSA Device Discovery Tool DVD that ships with the array.
Hostnames
Hostnames are predefined using the MAC address of the controller adapter, using the following syntax:
•	http://hp-msa-storage-<last 6 digits of mac address>
For example:
•	http://hp-msa-storage-1dfcfc
You can find the MAC address through the SMU. Go to Enclosure Overview and click the Network
port. The hostname itself is not displayed in the SMU and cannot be changed.
Default User Names, Passwords, and Roles
The following are the default user names/passwords and roles:
•	monitor / !monitor – Can monitor the system, with some functions disabled. For example, the Tools Menu allows log saving, but not Shut Down or Restart of controllers.
•	manage / !manage – Can manage the system, with all functions available.
For More Information
See the following HP documents:
•	HP MSA 2040 SMU Reference Guide
•	HP MSA Event Descriptions Reference Guide
HP MSA 2040 Command Line Interface
The HP MSA 2040 is packaged with a Command Line Interface (CLI). To use the CLI, you need to do the following:
•	Install a Windows USB driver from HP. This driver is available from the HP MSA support page at http://www.hp.com/support. Search for the driver with the following name: HP MSA 1040/2040 and P2000 G3 USB Driver for Windows Server x64
•	HP ships two USB cables with the HP MSA 2040. Use a USB cable to connect a server to each controller in the HP MSA 2040.

For More Information

See the following HP documents:
•	For more information about connecting to the CLI, see Chapter 5 of the HP MSA 2040 User Guide.
•	For information about commands, see the HP MSA 2040 CLI Reference Guide.
HP MSA 2040 Support Documentation
Documentation for the HP MSA 2040 is located on the HP support site:
http://www.hp.com/support/msa2040/manuals
The following are some of the available documents:
•	HP MSA 2040 Quick Start Instructions
•	HP MSA 2040 User’s Guide (see Chapter 5 for CLI information, Chapter 7 for troubleshooting information, and Appendix A for LED descriptions)
•	HP MSA 2040 SMU Reference Guide (see Chapter 3 for configuration and setup information)
•	HP MSA 2040 Events Description Reference Guide
•	HP MSA 2040 CLI Reference Guide
•	HP MSA 2040 Best Practices
•	HP MSA 2040 Cable Configuration Guide
•	HP MSA Controller Replacement Instructions
•	HP MSA Drive Replacement Instructions
Clustering Technology and Terminology
Clustering can be complicated, so it is important that you get familiar with the technology and
terminology of failover clusters before you start. A good source of information is the Windows
Server 2012 R2 Failover Clustering site:
https://technet.microsoft.com/en-us/library/hh831579.aspx
Here is a brief summary of the major concepts and terms, adapted from the Microsoft Windows
Server web site:
•	failover cluster: A group of independent computers that work together to increase the availability of clustered roles (formerly called clustered applications and services). The clustered servers (called nodes) are connected by physical cables and by software. If one of the nodes fails, another node begins to provide services (a process known as failover).
•	Cluster service: The essential software component that controls all aspects of server cluster or failover cluster operation and manages the cluster configuration database. Each node in a failover cluster owns one instance of the Cluster service.
•	cluster resources: Cluster components (hardware and software) that are managed by the cluster service. Resources are physical hardware devices such as disk drives, and logical items such as IP addresses and applications.
•	clustered role: A collection of resources that are managed by the cluster service as a single, logical unit and that are always brought online on the same node.
•	quorum: The quorum for a cluster is determined by the number of voting elements that must be part of active cluster membership for that cluster to start properly or continue running. By default, every node in the cluster has a single quorum vote. In addition, a quorum witness (when configured) has an additional single quorum vote. A quorum witness can be a designated disk resource or a file share resource.
An Interplay Engine failover cluster uses a disk resource, named Quorum, as a quorum witness.
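If you want to confirm the quorum configuration after the cluster is created, you can query it with the FailoverClusters PowerShell module. This is a read-only sketch; the witness disk name shown ("Quorum") is the one this guide uses, so verify the actual resource name on your system.

    # Show the current quorum type and witness resource for the cluster.
    Get-ClusterQuorum | Format-List Cluster, QuorumType, QuorumResource

    # List the cluster nodes and their current vote assignments.
    Get-ClusterNode | Format-Table Name, State, NodeWeight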
2 Creating a Microsoft Failover Cluster
This chapter describes the processes for creating a Microsoft failover cluster for automatic server
failover. It is crucial that you follow the instructions given in this chapter completely; otherwise, the automatic server failover will not work.
This chapter covers the following topics:
•	Server Failover Installation Overview
•	Before You Begin the Server Failover Installation
•	Preparing the Server for the Failover Cluster
•	Configuring the Failover Cluster
Instructions for installing the Interplay Engine are provided in “Installing the Interplay | Engine for a Failover Cluster” on page 75.
Server Failover Installation Overview
Installation and configuration of the automatic server failover consists of the following major tasks:
•	Make sure that the network is correctly set up and that you have reserved IP host names and IP addresses (see “Before You Begin the Server Failover Installation” on page 27).
•	Prepare the servers for the failover cluster (see “Preparing the Server for the Failover Cluster” on page 33). This includes configuring the nodes for the network and formatting the drives.
•	Install the Failover Cluster feature and configure the failover cluster (see “Configuring the Failover Cluster” on page 50).
•	Install the Interplay Engine on both nodes (see “Installing the Interplay | Engine for a Failover Cluster” on page 75).
•	Test the complete installation (see “Testing the Complete Installation” on page 102).

n
Do not install any other software on the cluster machines except the Interplay Engine. For example, Media Indexer software needs to be installed on a different server. For complete installation instructions, see the Interplay | Production Software Installation and Configuration Guide.

For more details about Microsoft clustering technology, see the Windows Server 2012 R2 Failover Clustering site:
https://technet.microsoft.com/en-us/library/hh831579.aspx
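The procedures in this chapter use the Failover Cluster Manager and other GUI tools. For reference only, the following sketch shows roughly equivalent Windows PowerShell commands on Windows Server 2012 R2; the node name, cluster name, and IP address are placeholders (the IP value is the sample cluster address used in this guide), and the GUI steps in this guide remain the documented procedure.

    # Install the Failover Clustering feature and management tools (run on each node).
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools

    # Validate both nodes before creating the cluster (node names are placeholders).
    Test-Cluster -Node SENODE-1, SENODE-2

    # Create the cluster with a static IP address (cluster name and IP are placeholders).
    New-Cluster -Name SECLUSTER -Node SENODE-1, SENODE-2 -StaticAddress 11.22.33.200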
Before You Begin the Server Failover Installation
Use the following checklist to help you prepare for the server failover installation.
Cluster Installation Preparation Check List

b	Make sure all cluster hardware connections are correct. (See “Installing the Failover Hardware Components” on page 16.)

b	Make sure that the site has a network that is qualified to run Active Directory and DNS services. (Facility staff)

b	Make sure the network includes an Active Directory domain. (Facility staff)

b	Determine the subnet mask, the gateway, DNS, and WINS server addresses on the network. (Facility staff)

b	Create or select domain user accounts for creating and administering the cluster. (See “Requirements for Domain User Accounts” on page 28.)

b	Reserve static IP addresses for all network interfaces and host names. (See “List of IP Addresses and Network Names” on page 30.)

b	If necessary, download the ATTO Configuration Utility. (See “Changing Default Settings for the ATTO Card on Each Node” on page 34.)

b	Make sure the time settings for both nodes are in sync. If not, you must synchronize the times or you will not be able to add both nodes to the cluster. You should also sync the shared storage array. You can use the Network Time Protocol (NTP). (See the operating system documentation and A Guide to Time Synchronization for Avid Interplay Systems on the Avid Knowledge Base.)

b	Make sure the Remote Registry service is started and is enabled for Automatic startup. Open Server Management and select Configuration > Services > Remote Registry. (See the operating system documentation; a PowerShell example follows this checklist.)

b	Create an Avid shared-storage user account with read and write privileges. This account is not needed for the installation of the Interplay Engine, but is required for the operation of the Interplay Engine (for example, media deletion from shared storage). The user name and password must exactly match the user name and password of the Server Execution User. (See the Avid shared-storage documentation.)

b	Install and set up an Avid shared-storage client on both servers. Check if shared-storage setup requires an Intel® driver update. Avid recommends installing and setting up the shared-storage client before creating the cluster and installing the Interplay Engine. This avoids a driver update after the server failover cluster is running. (See the Avid shared-storage documentation.)

b	Install a permanent license. A temporary license is installed with the Interplay Engine software. After the installation is complete, install the permanent license. Permanent licenses are supplied in one of two ways: as a software license that is activated through the ALC application, or as a hardware license that is activated through an application key (dongle). (See “Installing a Permanent License” on page 102.)
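For the Remote Registry and time-synchronization items in the checklist, the following commands, run in an elevated prompt on each node, are one way to check and correct the settings. The NTP server name is a placeholder for your site's time source; follow A Guide to Time Synchronization for Avid Interplay Systems for the recommended setup.

    # Make sure the Remote Registry service starts automatically and is running.
    Set-Service -Name RemoteRegistry -StartupType Automatic
    Start-Service -Name RemoteRegistry

    # Check the current time-synchronization status of this node.
    w32tm /query /status

    # Point the node at the site's NTP server (placeholder name) and resynchronize.
    w32tm /config /manualpeerlist:"ntp.example.com" /syncfromflags:manual /update
    w32tm /resync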
Requirements for Domain User Accounts
Before beginning the cluster installation process, you need to select or create the following user
accounts in the domain that includes the cluster:
•	Server Execution User: Create or select an account that is used by the Interplay Engine services (listed as the Avid Workgroup Engine Monitor and the Avid Workgroup TCP COM Bridge in the list of Windows services). This account must be a domain user. The procedures in this document use sqauser as an example of a Server Execution User. This account is automatically added to the Local Administrators group on each node by the Interplay Engine software during the installation process.

n
The Server Execution User is not used to start the Cluster service for a Windows Server 2012 installation. Windows Server 2012 uses the system account to start the Cluster service. The Server Execution User is used to start the Avid Workgroup Engine Monitor and the Avid Workgroup TCP COM Bridge.

	The Server Execution User is critical to the operation of the Interplay Engine. If necessary, you can change the name of the Server Execution User after the installation. For more information, see “Troubleshooting the Server Execution User Account” and “Re-creating the Server Execution User” in the Interplay | Engine and Interplay | Archive Engine Administration Guide and the Interplay Help.

•	Cluster installation account: Create or select a domain user account to use during the installation and configuration process. There are special requirements for the account that you use for the Microsoft cluster installation and creation process (described below).
	-	If your site allows you to use an account with the required privileges, you can use this account throughout the entire installation and configuration process.
	-	If your site does not allow you to use an account with the required privileges, you can work with the site’s IT department to use a domain administrator’s account only for the Microsoft cluster creation steps. For other tasks, you can use a domain user account without the required privileges.
	In addition, the account must have administrative permissions on the servers that will become cluster nodes. You can do this by adding the account to the local Administrators group on each of the servers that will become cluster nodes.
	Requirements for Microsoft cluster creation: To create a user with the necessary rights for Microsoft cluster creation, you need to work with the site’s IT department to access Active Directory (AD). Depending on the account policies of the site, you can grant the necessary rights for this user in one of the following ways (see the example after this list):
	-	Create computer objects for the failover cluster (virtual host name) and the Interplay Engine (virtual host name) in the Active Directory (AD) and grant the user Full Control on them. In addition, the failover cluster object needs Full Control over the Interplay Engine object. For examples, see “List of IP Addresses and Network Names” on page 30. The account for these objects must be disabled so that when the Create Cluster wizard and the Interplay Engine installer are run, they can confirm that the account to be used for the cluster is not currently in use by an existing computer or cluster in the domain. The cluster creation process then enables the entry in the AD.
	-	Make the user a member of the Domain Administrators group. There are fewer manual steps required when using this type of account.
	-	Grant the user the permissions “Create Computer objects” and “Read All Properties” in the container in which new computer objects get created, such as the computer’s Organizational Unit (OU).
	For more information, see the Avid Knowledge Base article “How to Prestage Cluster Name Object and Virtual Interplay Engine Name” at http://avid.force.com/pkb/articles/en_US/How_To/How-to-prestage-cluster-name-object-and-virtual-Interplay-Engine-name. This article references the Microsoft article “Failover Cluster Step-by-Step Guide: Configuring Accounts in Active Directory” at http://technet.microsoft.com/en-us/library/cc731002%28WS.10%29.aspx

n
Roaming profiles are not supported in an Interplay Production environment.

•	Cluster administration account: Create or select a user account for logging in to and administering the failover cluster server. Depending on the account policies of your site, this account could be the same as the cluster installation account, or it can be a different domain user account with administrative permissions on the servers that will become cluster nodes.
List of IP Addresses and Network Names
You need to reserve IP host names and static IP addresses on the in-network DNS server before you
begin the installation process. The number of IP addresses you need depends on your configuration:
•  An environment with a redundant-switch configuration requires 4 public IP addresses and 2 private IP addresses
•  An environment with a dual-connected configuration requires 8 public IP addresses and 2 private IP addresses
n  Make sure that these IP addresses are outside of the range that is available to DHCP so they cannot automatically be assigned to other machines.
n  All names must be valid and unique network host names. A hostname must comply with RFC 952 standards. For example, you cannot use an underscore in a hostname. For more information, see “Naming Conventions in Active Directory for Computers, Domains, Sites, and OUs” on the Microsoft Support Knowledge Base.
The following table provides a list of example names that you can use when configuring the cluster
for a redundant-switch configuration. You can fill in the blanks with your choices to use as a
reference during the configuration process.
IP Addresses and Node Names: Redundant-Switch Configuration

Cluster node 1 (example name: SECLUSTER1; where used: see “Creating the Failover Cluster” on page 57)
•  1 Host Name _____________________
•  1 shared-storage IP address - public _____________________
•  1 IP address - private (Heartbeat) _____________________

Cluster node 2 (example name: SECLUSTER2; where used: see “Creating the Failover Cluster” on page 57)
•  1 Host Name _____________________
•  1 shared-storage IP address - public _____________________
•  1 IP address - private (Heartbeat) _____________________

Microsoft failover cluster (example name: SECLUSTER; where used: see “Creating the Failover Cluster” on page 57)
•  1 Network Name (virtual host name) _____________________
•  1 shared-storage IP address - public (virtual IP address) _____________________

Interplay Engine cluster role (example name: SEENGINE; where used: see “Specifying the Interplay Engine Details” on page 80 and “Specifying the Interplay Engine Service Name” on page 81)
•  1 Network Name (virtual host name) _____________________
•  1 shared-storage IP address - public (virtual IP address) _____________________
The following table provides a list of example names that you can use when configuring the cluster for a dual-connected configuration. Fill in the blanks to use as a reference.

IP Addresses and Node Names: Dual-Connected Configuration

Cluster node 1 (example name: SECLUSTER1; where used: see “Creating the Failover Cluster” on page 57)
•  1 Host Name ______________________
•  2 shared-storage IP addresses - public (left) __________________ (right) _________________
•  1 IP address - private (Heartbeat) ______________________

Cluster node 2 (example name: SECLUSTER2; where used: see “Creating the Failover Cluster” on page 57)
•  1 Host Name ______________________
•  2 shared-storage IP addresses - public (left) __________________ (right) _________________
•  1 IP address - private (Heartbeat) ______________________

Microsoft failover cluster (example name: SECLUSTER; where used: see “Creating the Failover Cluster” on page 57)
•  1 Network Name (virtual host name) ______________________
•  2 shared-storage IP addresses - public (virtual IP addresses) (left) __________________ (right) _________________

Interplay Engine cluster role (example name: SEENGINE; where used: see “Specifying the Interplay Engine Details” on page 80 and “Specifying the Interplay Engine Service Name” on page 81)
•  1 Network Name (virtual host name) ______________________
•  2 shared-storage IP addresses - public (virtual IP addresses) (left) __________________ (right) _________________
Active Directory and DNS Requirements
Use the following table to help you add Active Directory accounts for the cluster components to your
site’s DNS.
Windows Server 2012: DNS Entries

Component                       Computer Account in Active Directory   DNS Dynamic Entry (a)   DNS Static Entry
Cluster node 1                  node_1_name                            Yes                     No
Cluster node 2                  node_2_name                            Yes                     No
Microsoft failover cluster      cluster_name (b)                       Yes                     Yes (c)
Interplay Engine cluster role   ie_name (b)                            Yes                     Yes (c)
a. Entries are dynamically added to the DNS when the node logs on to Active Directory.
b. If you manually created Active Directory entries for the Microsoft failover cluster and Interplay Engine cluster
role, make sure to disable the entries in Active Directory in order to build the Microsoft failover cluster (see
“Requirements for Domain User Accounts” on page 28).
c. Add reverse static entries only. Forward entries are dynamically added by the failover cluster. Static entries
must be exempted from scavenging rules.
Preparing the Server for the Failover Cluster
Before you configure the failover cluster, you need to complete the tasks in the following procedures:
•
“Downloading the ATTO Driver and Configuration Tool” on page 33
•
“Changing Default Settings for the ATTO Card on Each Node” on page 34
•
“Changing Windows Server Settings on Each Node” on page 36
•
“Configuring Local Software Firewalls” on page 37
•
“Renaming the Local Area Network Interface on Each Node” on page 37
•
“Configuring the Private Network Adapter on Each Node” on page 40
•
“Configuring the Binding Order Networks on Each Node” on page 42
•
“Configuring the Public Network Adapter on Each Node” on page 44
•
“Configuring the Cluster Shared-Storage RAID Disks on Each Node” on page 44
The tasks in this section do not require the administrative privileges needed for Microsoft cluster
creation (see “Requirements for Domain User Accounts” on page 28).
Configuring the ATTO Fibre Channel Card
The following topics describe steps necessary to prepare the ATTO fibre channel card. This card is
installed in each server in a cluster and is used to communicate with the storage array.
•  “Downloading the ATTO Driver and Configuration Tool” on page 33
•  “Changing Default Settings for the ATTO Card on Each Node” on page 34
n  The ATTO Celerity FC-81EN is qualified for Dell and HP servers. Other Fibre Channel adapters supported by Dell and HP are also supported for an Interplay Engine cluster. This guide does not contain information about the configuration of these cards; the default factory settings should work correctly. If the SAN drives are accessible on both nodes, and if the failover cluster validation succeeds, the adapters are configured correctly.
Downloading the ATTO Driver and Configuration Tool
You need to download the ATTO drivers and the ATTO Configuration Tool from the ATTO web site
and install it on the server. You must register to download tools and drivers.
To download and install the ATTO Configuration Tool for the FC-81EN card:
1. Go to the 8Gb Celerity HBAs Downloads page and download the ATTO Configuration Tool:
https://www.attotech.com/downloads/70/
Scroll down several pages to find the Windows ConfigTool (currently version 4.22).
2. Double-click the downloaded file win_app_configtool_422.exe, then click Run.
3. Extract the files.
4. Locate the folder to which you extracted the files and double-click ConfigTool_422.exe.
5. Follow the system prompts for a Full Installation.
Then locate, download, and install the appropriate driver. The current version for the Celerity FC-81EN is version 1.85.
Changing Default Settings for the ATTO Card on Each Node
You need to use the ATTO Configuration Tool to change some default settings on each node in the
cluster.
To change the default settings for the ATTO card:
1. On the first node, click Start, and select Programs > ATTO ConfigTool > ATTO ConfigTool.
The ATTO Configuration Tool dialog box opens.
2. In the Device Listing tree (left pane), click the expand box for “localhost.”
A login screen is displayed.
3. Type the user name and password for a local administrator account and click Login.
4. In the Device Listing tree, navigate to the appropriate channel on your host adapter.
5. Click the NVRAM tab.
6. Change the following settings if necessary:
-
Boot driver: Disabled
-
Execution Throttle: 128
-
Device Discovery: Port WWN
-
Data Rate:
-
For connection to Infortrend, select 4 Gb/sec.
-
For connection to HP MSA, select 8 Gb/sec.
-
Interrupt Coalesce: Low
-
Spinup Delay: 0
You can keep the default values for the other settings.
7. Click Commit.
8. Reboot the system.
9. Open the Configuration tool again and verify the new settings.
10. On the other node, repeat steps 1 through 9.
Changing Windows Server Settings on Each Node
On each node, set the processor scheduling for best performance of programs.
n
No other Windows server settings need to be changed. Later, you need to add features for clustering.
See “Installing the Failover Clustering Features” on page 51.
To change the processor scheduling:
1. Select Control Panel > System and Security > System.
2. In the list on the left side of the System dialog box, click “Advanced system settings.”
3. In the Advanced tab, in the Performance section, click the Settings button.
4. In the Performance Options dialog box, click the Advanced tab.
5. In the Processor scheduling section, for “Adjust for best performance of,” select Programs.
6. Click OK.
7. In the System Properties dialog box, click OK.
Configuring Local Software Firewalls
Make sure any local software firewalls used in a failover cluster, such as Symantec End Point (SEP), are configured to allow IPv6 communication and IPv6 over IPv4 communication.
n  The Windows Firewall service must be enabled for proper operation of a failover cluster. Note that enabling the service is different from enabling or disabling the firewall itself and firewall rules.
Currently the SEP Firewall does not support IPv6. Allow this communication by editing the corresponding firewall rules in the SEP Manager.
Renaming the Local Area Network Interface on Each Node
You need to rename the LAN interface on each node to appropriately identify each network.
c
Avid recommends that both nodes use identical network interface names. Although you can use
any name for the network connections, Avid suggests that you use the naming conventions
provided in the table in the following procedure.
To rename the local area network connections:
1. On node 1, click Start > Control Panel > Network and Sharing Center.
The Network and Sharing Center window opens.
2. Click “Change adapter settings” on the left side of the window.
The Network Connections window opens. On a Dell PowerEdge, the Name shows the number of
the hardware (physical) port as it is labeled on the computer. The Device Name shows the name
of the network interface card. Note that the number in the Device Name does not necessarily
match the number of the hardware port.
n
One way to find out which hardware port matches which Windows device name is to plug one network cable into each physical port in turn and check in the Network Connections dialog which device becomes connected.
3. Right-click a network connection and select Rename.
4. Depending on your Avid network and the device you selected, type a new name for the network
connection and press Enter.
Use the following illustration and table for reference. The illustration uses connections on a Dell
PowerEdge computer in both redundant and dual-connected configurations as an example.
[Illustrations: Dell PowerEdge R630 back panel]
Redundant-switch configuration: Connector 2 to Avid network switch 1 (public network); Connector 3 to node 2 (private network); Fibre Channel to RAID array.
Dual-connected configuration: Connector 2 to ISIS left subnet (public network); Connector 4 to ISIS right subnet (public network); Connector 3 to node 2 (private network); Fibre Channel to RAID array.
Naming Network Connections (Using Dell PowerEdge)

Connector 1 (as labeled)
   Redundant-switch configuration: Not used
   Dual-connected configuration: Not used
   Device Name: Broadcom NetXtreme Gigabit Ethernet #4

Connector 2 (as labeled)
   Redundant-switch configuration: Public. This is a public network connected to a network switch.
   Dual-connected configuration: Right. This is a public network connected to a network switch. You can include the subnet number of the interface, for example, Right-10.
   Device Name: Broadcom NetXtreme Gigabit Ethernet

Connector 3 (as labeled)
   Redundant-switch configuration: Private. This is a private network used for the heartbeat between the two nodes in the cluster.
   Dual-connected configuration: Private. This is a private network used for the heartbeat between the two nodes in the cluster.
   Device Name: Broadcom NetXtreme Gigabit Ethernet #2

Connector 4 (as labeled)
   Redundant-switch configuration: Not used
   Dual-connected configuration: Left. This is a public network connected to a network switch. You can include the subnet number of the interface, for example, Left-20.
   Device Name: Broadcom NetXtreme Gigabit Ethernet #3
5. Repeat steps 3 and 4 for each network connection.
The following Network Connections window shows the new names used in a redundant-switch
environment.
6. Close the Network Connections window.
7. Repeat this procedure on node 2, using the same names that you used for node 1.
Configuring the Private Network Adapter on Each Node
Repeat this procedure on each node.
To configure the private network adapter for the heartbeat connection:
1. On node 1, click Start > Control Panel > Network and Sharing Center.
The Network and Sharing Center window opens.
2. Click “Change adapter settings” on the left side of the window.
The Network Connections window opens.
3. Right-click the Private network connection (Heartbeat) and select Properties.
The Private Properties dialog box opens.
4. On the Networking tab, click the following check box:
-
Internet Protocol Version 4 (TCP/IPv4)
Uncheck all other components.
5. Select Internet Protocol Version 4 (TCP/IPv4) and click Properties.
The Internet Protocol Version 4 (TCP/IPv4) Properties dialog box opens.
6. On the General tab of the Internet Protocol (TCP/IP) Properties dialog box:
a.
Select “Use the following IP address.”
b. IP address: type the IP address for the Private network connection for the node you are
configuring. See “List of IP Addresses and Network Names” on page 30.
n
When performing this procedure on the second node in the cluster, make sure you assign a static
private IP address unique to that node. In this example, node 1 uses 192.168.100.1 and node 2 uses
192.168.100.2.
c. Subnet mask: type the subnet mask address.
n  Make sure you use a completely different IP address scheme from the one used for the public network.
d. Make sure the “Default gateway” and “Use the Following DNS server addresses” text boxes
are empty.
7. Click Advanced.
The Advanced TCP/IP Settings dialog box opens.
8. On the DNS tab, make sure no values are defined and that the “Register this connection’s
addresses in DNS” and “Use this connection’s DNS suffix in DNS registration” are not selected.
9. On the WINS tab, do the following:
t
Make sure no values are defined in the WINS addresses area.
t
Make sure “Enable LMHOSTS lookup” is selected.
t
Select “Disable NetBIOS over TCP/IP.”
10. Click OK.
A message might be displayed stating “This connection has an empty primary WINS address.
Do you want to continue?” Click Yes.
11. Repeat this procedure on node 2, using the static private IP address for that node.
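If you prefer to script these settings, the following PowerShell sketch applies the equivalent static addressing on node 1; it assumes the interface has already been renamed Private, that the heartbeat subnet is 192.168.100.0/24 (node 1 uses 192.168.100.1 and node 2 uses 192.168.100.2), and that the DNS, WINS, and NetBIOS settings are still made in the dialog boxes as described above:
   # Assign the static heartbeat address; no default gateway and no DNS servers
   New-NetIPAddress -InterfaceAlias "Private" -IPAddress 192.168.100.1 -PrefixLength 24
   # Do not register the heartbeat connection in DNS
   Set-DnsClient -InterfaceAlias "Private" -RegisterThisConnectionsAddress $false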
Configuring the Binding Order Networks on Each Node
Repeat this procedure on each node and make sure the configuration matches on both nodes.
To configure the binding order networks:
1. On node 1, click Start > Control Panel > Network and Sharing Center.
The Network and Sharing Center window opens.
2. Click “Change adapter settings” on the left side of the window.
The Network Connections window opens.
3. Press the Alt key to display the menu bar.
4. Select the Advanced menu, then select Advanced Settings.
The Advanced Settings dialog box opens.
5. In the Connections area, use the arrow controls to position the network connections in the
following order:
-  For a redundant-switch configuration, use the following order:
   -  Public
   -  Private
-  For a dual-connected configuration, use the following order, as shown in the illustration:
   -  Left
   -  Right
   -  Private
6. Click OK.
7. Repeat this procedure on node 2 and make sure the configuration matches on both nodes.
Configuring the Public Network Adapter on Each Node
Make sure you configure the IP address network interfaces for the public network adapters as you
normally would. For examples of public network settings, see “List of IP Addresses and Network
Names” on page 30.
Avid recommends that you disable IPv6 for the public network adapters, as shown in the following
illustration:
n
Disabling IPv6 completely is not recommended.
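A scripted equivalent, assuming the public adapter is named Public (use Left and Right in a dual-connected configuration), is to unbind only the IPv6 component from that adapter rather than disabling IPv6 system-wide:
   # Unbind IPv6 from the public adapter only; IPv6 remains available to the rest of the system
   Disable-NetAdapterBinding -Name "Public" -ComponentID ms_tcpip6
   # Verify the binding state
   Get-NetAdapterBinding -Name "Public" -ComponentID ms_tcpip6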
Configuring the Cluster Shared-Storage RAID Disks on Each Node
Both nodes must have the same configuration for the cluster shared-storage RAID disks. When you
configure the disks on the second node, make sure the disks match the disk configuration you set up
on the first node.
n
Make sure the disks are Basic and not Dynamic.
The first procedure describes how to configure disks for the Infortrend array, which contains three
disks. The second procedure describes how to configure disks for the HP MSA array, which contains
two disks.
To configure the Infortrend RAID disks on each node:
1. Shut down the server node you are not configuring at this time.
2. Open the Disk Management tool in one of the following ways:
t
Right-click This PC and select Manage. From the Tools menu, select Computer
Management. In the Computer Management list, select Storage > Disk Management.
t
Right-click Start, click search, type Disk, and select “Create and format hard disk
partitions.”
The Disk Management window opens. The following illustration shows the shared storage drives
labeled Disk 1, Disk 2, and Disk 3. In this example they are offline, not initialized, and
unformatted.
3. If the disks are offline, right-click Disk 1 (in the left column) and select Online. Repeat this
action for Disk 3. Do not bring Disk 2 online.
4. If the disks are not already initialized, right-click Disk 1 (in the left column) and select Initialize
Disk.
The Initialize Disk dialog box opens.
Select Disk 1 and Disk 3 and make sure that MBR is selected. Click OK.
5. Use the New Simple Volume wizard to configure the disks as partitions. Right-click each disk,
select New Simple Volume, and follow the instructions in the wizard.
Use the following names and drive letters, depending on your storage array:
Disk      Name and Drive Letter     Infortrend S12F-R1440
Disk 1    Quorum (Q:)               10 GB
Disk 3    Database (S:)             814 GB or larger
n  Do not assign a name or drive letter to Disk 2.
n  If you need to change the drive letter after running the wizard, right-click the drive letter in the right column and select Change Drive Letter or Path. If you receive a warning telling you that some programs that rely on drive letters might not run correctly and asking if you want to continue, click Yes.
The following illustration shows Disk 1 and Disk 3 with the required names and drive letters for
the Infortrend S12F-R1440:
6. Verify you can access the disk and that it is working by creating a file and deleting it.
7. Shut down the first node and start the second node.
8. On the second node, bring the disks online and assign drive letters. You do not need to initialize
or format the disks.
a.
Open the Disk Management tool, as described in step 2.
b. Bring Disk 1 and Disk 3 online, as described in step 3.
c.
Right-click a partition, select Change Drive Letter, and enter the appropriate letter.
d. Repeat these actions for the other partitions.
9. Boot the first node.
10. Open the Disk Management tool to make sure that the disks are still online and have the correct
drive letters assigned.
At this point, both nodes should be running.
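If you script the disk preparation instead of using Disk Management, a rough PowerShell equivalent for the first node (Infortrend example) looks like the following. The disk numbers are examples only and must match what Get-Disk reports for your array; verify them before running anything, and do not touch Disk 2 on the Infortrend:
   Get-Disk                                    # identify the shared disks first
   Set-Disk -Number 1 -IsOffline $false        # bring the Quorum disk online
   Set-Disk -Number 3 -IsOffline $false        # bring the Database disk online
   Initialize-Disk -Number 1 -PartitionStyle MBR
   Initialize-Disk -Number 3 -PartitionStyle MBR
   New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter Q | Format-Volume -FileSystem NTFS -NewFileSystemLabel "Quorum"
   New-Partition -DiskNumber 3 -UseMaximumSize -DriveLetter S | Format-Volume -FileSystem NTFS -NewFileSystemLabel "Database"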
To configure the HP MSA RAID disks on each node:
1. Shut down the server node you are not configuring at this time.
2. Open the Disk Management tool in one of the following ways:
t
Right-click This PC and select Manage. From the Tools menu, select Computer
Management. In the Computer Management list, select Storage > Disk Management.
t
Right-click Start, click search, type Disk, and select “Create and format hard disk
partitions.”
The Disk Management window opens. The following illustration shows the shared storage drives
labeled Disk 1 and Disk 2. In this example they are initialized and formatted, but offline.
3. If the disks are offline, right-click Disk 1 (in the left column) and select Online. Repeat this
action for Disk 2.
4. If the disks are not already initialized, right-click Disk 1 (in the left column) and select Initialize
Disk.
The Initialize Disk dialog box opens.
Select Disk 1 and Disk 2 and make sure that MBR is selected. Click OK.
5. Use the New Simple Volume wizard to configure the disks as partitions. Right-click each disk,
select New Simple Volume, and follow the instructions in the wizard.
Use the following names and drive letters.
Disk      Name and Drive Letter     HP MSA 2040
Disk 1    Quorum (Q:)               10 GB
Disk 2    Database (S:)             870 GB or larger
n  If you need to change the drive letter after running the wizard, right-click the drive letter in the right column and select Change Drive Letter or Path. If you receive a warning telling you that some programs that rely on drive letters might not run correctly and asking if you want to continue, click Yes.
The following illustration shows Disk 1 and Disk 2 with the required names and drive letters.
6. Verify you can access the disk and that it is working by creating a file and deleting it.
7. Shut down the first node and start the second node.
8. On the second node, bring the disks online and assign drive letters. You do not need to initialize
or format the disks.
a.
Open the Disk Management tool, as described in step 2.
b. Bring Disk 1 and Disk 2 online, as described in step 3.
c.
Right-click a partition, select Change Drive Letter, and enter the appropriate letter.
d. Repeat these actions for the other partitions.
9. Boot the first node.
10. Open the Disk Management tool to make sure that the disks are still online and have the correct
drive letters assigned.
At this point, both nodes should be running.
Configuring the Failover Cluster
Take the following steps to configure the failover cluster:
1. Add the servers to the domain. See “Joining Both Servers to the Active Directory Domain” on
page 51.
2. Install the Failover Clustering feature. See “Installing the Failover Clustering Features” on
page 51.
3. Start the Create Cluster Wizard on the first node. See “Creating the Failover Cluster” on page 57.
This procedure creates the failover cluster for both nodes.
4. Rename the cluster networks. See “Renaming the Cluster Networks in the Failover Cluster
Manager” on page 62.
5. Rename the Quorum disk. See “Renaming the Quorum Disk” on page 65.
6. Remove other disks from the cluster. See “Removing Disks Other Than the Quorum Disk” on
page 66
7. For a dual-connected configuration, add a second IP address. See “Adding a Second IP Address
to the Cluster” on page 67.
8. Test the failover. See “Testing the Cluster Installation” on page 72.
c
Creating the failover cluster requires an account with particular administrative privileges. For
more information, see “Requirements for Domain User Accounts” on page 28.
Joining Both Servers to the Active Directory Domain
After configuring the network information described in the previous topics, join the two servers to
the Active Directory domain. Each server requires a reboot to complete this process. At the login
window, use the domain administrator account (see “Requirements for Domain User Accounts” on
page 28).
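The join can also be scripted from an elevated PowerShell prompt on each node; the domain name below is a placeholder for your site's domain:
   # Join the node to the Active Directory domain and reboot to complete the process
   Add-Computer -DomainName "yourdomain.example.com" -Credential (Get-Credential) -Restart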
Installing the Failover Clustering Features
Windows Server 2012 requires you to add the following features:
•
Failover Clustering (with Failover Cluster Management Tools and Failover Cluster Module for
Windows PowerShell)
•
Failover Cluster Command Interface
You need to install these on both servers.
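As an alternative to the Add Roles and Features Wizard procedure that follows, you can add the same features from an elevated PowerShell prompt on each server (a minimal sketch):
   # Failover Clustering plus the management tools and the PowerShell module
   Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
   # Failover Cluster Command Interface (cluster.exe)
   Install-WindowsFeature -Name RSAT-Clustering-CmdInterface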
To install the Failover Clustering features:
1. Open the Server Manager window (for example, right-click This PC and select Manage).
2. In the Server Manager window, select Local Server.
3. From the menu bar, select Manage > Add Roles and Features.
The Add Roles and Features Wizard opens.
4. Click Next.
The Installation Type screen is displayed.
5. Select “Role-based or feature-based installation” and click Next.
The Server Selection screen is displayed.
6. Make sure “Select a server from the server pool” is selected. Then select the server on which you
are working and click Next.
The Server Roles screen is displayed. Two File and Storage Services are installed. No additional
server roles are needed. Make sure that “Application Server” is not selected.
7. Click Next.
The Features screen is displayed.
8. Select Failover Clustering.
The Failover Clustering dialog box is displayed.
9. Make sure “Include management tools (if applicable)” is selected, then click Add Features.
The Features screen is displayed again.
10. Scroll down the list of Features, select Remote Server Administration Tools > Feature
Administration Tools > Failover Clustering Tools, and select the following features:
-
Failover Cluster Management Tools
-
Failover Cluster Module for Windows PowerShell
-
Failover Cluster Command Interface
11. Click Next.
The Confirmation screen is displayed.
12. Click Install.
The installation program starts. At the end of the installation, a message states that the
installation succeeded.
13. Click Close.
14. Repeat this procedure on the other server.
Creating the Failover Cluster
To create the failover cluster:
1. Make sure all storage devices are turned on.
2. Log in to the operating system using the cluster installation account (see “Requirements for
Domain User Accounts” on page 28).
3. On the first node, open Failover Cluster Manager. There are several ways to open this window.
For example,
a.
On the desktop, right-click This PC and select Manage.
The Server Manager window opens.
b. In the Server Manager list, click Tools and select Failover Cluster Manager.
The Failover Cluster Manager window opens.
4. In the Management section, click Create Cluster.
The Create Cluster Wizard opens with the Before You Begin window.
5. Review the information and click Next (you will validate the cluster in a later step).
6. In the Select Servers window, type the simple computer name of node 1 and click Add. Then
type the computer name of node 2 and click Add. The Cluster Wizard checks the entries and, if
the entries are valid, lists the fully qualified domain names in the list of servers, as shown in the
following illustration:
c
If you cannot add the remote node to the cluster, and receive an error message “Failed to
connect to the service manager on <computer-name>,” check the following:
- Make sure that the time settings for both nodes are in sync.
- Make sure that the login account is a domain account with the required privileges.
- Make sure the Remote Registry service is enabled.
For more information, see “Before You Begin the Server Failover Installation” on page 27.
7. Click Next.
The Validation Warning window opens.
8. Select Yes and click Next several times. When you can select a testing option, select Run All
Tests.
The automatic cluster validation tests begin. The tests take approximately five minutes. After
running these validation tests and receiving notification that the cluster is valid, you are eligible
for technical support from Microsoft.
The following tests display warnings, which you can ignore:
-
List Software Updates (Windows Update Service is not running)
-
Validate Storage Spaces Persistent Reservation
-
Validate All Drivers Signed
-
Validate Software Update Levels (Windows Update Service is not running)
9. In the Access Point for Administering the Cluster window, type a name for the cluster, then click
in the Address text box and enter an IP address. This is the name you created in the Active
Directory (see “Requirements for Domain User Accounts” on page 28).
If you are configuring a dual-connected cluster, you need to add a second IP address after
renaming and deleting cluster disks. This procedure is described in “Adding a Second IP Address
to the Cluster” on page 67.
10. Click Next.
A message informs you that the system is validating settings. At the end of the process, the
Confirmation window opens.
11. Review the information. Make sure “Add all eligible storage to the cluster” is selected. If all
information is correct, click Next.
The Create Cluster Wizard creates the cluster. At the end of the process, a Summary window
opens and displays information about the cluster.
You can click View Report to see a log of the entire cluster creation.
12. Click Finish.
Now when you open the Failover Cluster Manager, the cluster you created and information about
its components are displayed, including the networks available to the cluster (cluster networks).
To view the networks, select Networks in the list on the left side of the window.
The following illustration shows components of a cluster in a redundant-switch environment.
Cluster Network 1 is a public network (Cluster and Client) connecting to one of the redundant
switches, and Cluster Network 2 is a private, internal network for the heartbeat (Cluster only).
If you are configuring a dual-connected cluster, three networks are listed. Cluster Network 1 and
Cluster Network 2 are external networks connected to VLAN 10 and VLAN 20 on Avid ISIS,
and Cluster Network 3 is a private, internal network for the heartbeat.
n
This configuration refers to virtual networks (VLANs) that are used with ISIS 7000/7500 shared-storage systems. ISIS 5000/5500 and Avid NEXIS systems typically do not use multiple VLANs. You can adapt this configuration for use in ISIS 5000/5500 or Avid NEXIS environments.
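The validation and creation steps can also be run with the Failover Cluster PowerShell module; the node and cluster names are the examples used in this guide, and the static address is a placeholder for the IP address you reserved for the cluster:
   # Validate both nodes (equivalent to Run All Tests in the wizard)
   Test-Cluster -Node SECLUSTER1, SECLUSTER2
   # Create the failover cluster with its virtual name and static IP address
   New-Cluster -Name SECLUSTER -Node SECLUSTER1, SECLUSTER2 -StaticAddress 192.168.10.50
   # Review the cluster networks that were detected
   Get-ClusterNetwork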
Renaming the Cluster Networks in the Failover Cluster Manager
You can more easily manage the cluster by renaming the networks that are listed under the Failover
Cluster Manager.
To rename the networks:
1. Right-click This PC and select Manage.
The Server Manager window opens.
2. In the Failover Cluster Manager, select cluster_name > Networks.
3. In the Networks window, right-click Cluster Network 1 and select Properties.
The Properties dialog box opens.
4. Click in the Name text box, and type a meaningful name, for example, a name that matches the
name you used in the TCP/IP properties. For a redundant-switch configuration, use Public, as
shown in the following illustration. For a dual-connected configuration, use Left. For this
network, keep the option “Allow clients to connect through this network.”
5. Click OK.
6. If you are configuring a dual-connected cluster configuration, rename Cluster Network 2, using
Right. For this network, keep the option “Allow clients to connect through this network.” Click
OK.
7. Rename the other network Private. This network is used for the heartbeat. For this private
network, leave the option “Allow clients to connect through this network” unchecked. Click OK.
The following illustration shows networks for a redundant-switch configuration.
The following illustration shows networks for a dual-connected configuration.
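The same renaming can be done from PowerShell; this sketch assumes the redundant-switch configuration, with Cluster Network 1 as the public network and Cluster Network 2 as the heartbeat network:
   (Get-ClusterNetwork -Name "Cluster Network 1").Name = "Public"
   (Get-ClusterNetwork -Name "Cluster Network 2").Name = "Private"
   # Role 3 = cluster and client traffic; Role 1 = cluster (heartbeat) traffic only
   (Get-ClusterNetwork -Name "Public").Role = 3
   (Get-ClusterNetwork -Name "Private").Role = 1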
Renaming the Quorum Disk
You can more easily manage the cluster by renaming the disk that is used as the Quorum disk.
To rename the Quorum disk:
1. In the Failover Cluster Manager, select cluster_name > Storage > Disks.
The Disks window opens. Check to make sure the smaller disk is labeled “Disk Witness in
Quorum.” This disk most likely has the number 1 in the Disk Number column.
2. Right-click the disk assigned to “Disk Witness in Quorum” and select Properties
The Properties dialog box opens.
3. In the Name dialog box, type a name for the cluster disk. In this case, Cluster Disk 2 is the
Quorum disk, so type Quorum as the name.
4. Click OK.
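From PowerShell you can list the disk resources and rename the witness; the resource name Cluster Disk 2 is an example and may differ on your system:
   Get-ClusterResource                                            # note which physical disk resource is the witness
   (Get-ClusterResource -Name "Cluster Disk 2").Name = "Quorum"   # rename the witness resource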
Removing Disks Other Than the Quorum Disk
You must delete any disks other than the Quorum disk. There is most likely only one other disk,
which will later be added by the Interplay Engine installer. In this operation, deleting the disk
means removing it from cluster control. After the operation, the disk is labeled offline in the Disk
Management tool. This operation does not delete any data on the disks.
To remove all disks other than the Quorum disk:
1. In the Failover Cluster Manager, select cluster_name > Storage and right-click any disks not
used as the Quorum disk (most likely only Cluster Disk 1).
2. In the Actions panel on the right, select Remove.
A confirmation box asks if you want to remove the selected disks.
3. Click Yes.
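The PowerShell equivalent, assuming the remaining data disk resource is named Cluster Disk 1 on your system:
   # Remove the data disk from cluster control; this does not delete any data on the disk
   Remove-ClusterResource -Name "Cluster Disk 1"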
Adding a Second IP Address to the Cluster
If you are configuring a dual-connected cluster, you need to add a second IP address for the failover
cluster.
To add a second IP address to the cluster:
1. In the Failover Cluster Manager, select cluster_name > Networks.
Make sure that Cluster Use is enabled as “Cluster and Client” for both networks.
If a network is not enabled, right-click the network, select Properties, and select “Allow clients to
connect through this network.”
2. In the Failover Cluster Manager, select the failover cluster by clicking on the Cluster name in the
left column.
3. In the Actions panel (right column), select Properties in the Name section.
The Properties dialog box opens.
4. In the General tab, do the following:
a.
Click Add.
b. Type the IP address for the other network.
c.
Click OK.
The General tab shows the IP addresses for both networks.
5. Click Apply.
A confirmation box asks you to confirm that all cluster nodes need to be restarted. You will
restart the nodes later in this procedure, so select Yes.
6. Click the Dependencies tab and check if the new IP address was added with an OR conjunction.
If the second IP address is not there, click “Click here to add a dependency.” Select “OR” from
the list in the AND/OR column and select the new IP address from the list in the Resource
column.
Testing the Cluster Installation
At this point, test the cluster installation to make sure the failover process is working.
To test the failover:
1. Make sure both nodes are running.
2. Determine which node is the active node (the node that owns the quorum disk). Open the
Failover Cluster Manager and select cluster_name > Storage > Disks. The server that owns the
Quorum disk is the active node.
In the following figure, the Owner Node is muc-vtldell2.
3. Open a Command Prompt and enter the following command:
cluster group “Cluster Group” /move:node_hostname
This command moves the cluster group, including the Quorum disk, to the node you specify. To
test the failover, use the hostname of the non-active node. The following illustration shows the
command and result if the non-active node (node 2) is named warrm-ipe4. The status “Partially
Online” is normal.
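If the cluster.exe command-line tool is not available, the PowerShell equivalent is Move-ClusterGroup; the node name below matches the example above:
   Move-ClusterGroup -Name "Cluster Group" -Node warrm-ipe4
   Get-ClusterGroup -Name "Cluster Group"    # confirm the new owner node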
4. Open the Failover Cluster Manager and select cluster_name > Storage > Disks. Make sure that
the Quorum disk is online and that current owner is node 2, as shown in the following
illustration.
5. In the Failover Cluster Manager, select cluster_name > Networks. The status of all networks
should be “Up.”
The following illustration shows networks for a redundant-switch configuration.
The following illustration shows networks for a dual-connected configuration.
6. Repeat the test by using the Command Prompt to move the cluster back to node 1.
Configuration of the failover cluster on all nodes is complete and the cluster is fully operational. You
can now install the Interplay Engine.
3 Installing the Interplay | Engine for a Failover Cluster
After you set up and configure the cluster, you need to install the Interplay Engine software on both
nodes. The following topics describe installing the Interplay Engine and other final tasks:
•
Disabling Any Web Servers
•
Installing the Interplay | Engine on the First Node
•
Installing the Interplay | Engine on the Second Node
•
Bringing the Interplay | Engine Online
•
After Installing the Interplay | Engine
•
Creating an Interplay | Production Database
•
Testing the Complete Installation
•
Installing a Permanent License
•
Updating a Clustered Installation (Rolling Upgrade)
•
Uninstalling the Interplay | Engine on a Clustered System
The tasks in this chapter do not require the domain administrator privileges that are required when
creating the Microsoft cluster (see “Requirements for Domain User Accounts” on page 28).
Disabling Any Web Servers
The Interplay Engine uses an Apache web server that can only be registered as a service if no other web server (for example, IIS) is serving port 80 (or 443). Stop and disable or uninstall any other HTTP services before you start the installation of the server. You must perform this procedure on both nodes.
n
No action should be required, because IIS should be disabled in Windows Server 2012.
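To confirm that nothing is listening on the HTTP ports, and to stop IIS if it happens to be installed, you can run something like the following (the W3SVC service exists only when IIS is installed):
   netstat -ano | findstr ":80 :443"                                              # show any process listening on ports 80 or 443
   Stop-Service -Name W3SVC -ErrorAction SilentlyContinue                         # stop the IIS web service if present
   Set-Service -Name W3SVC -StartupType Disabled -ErrorAction SilentlyContinue    # keep it from starting again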
Installing the Interplay | Engine on the First Node
The following sections provide procedures for installing the Interplay Engine on the first node. For a
list of example entries, see “List of IP Addresses and Network Names” on page 30.
•
“Preparation for Installing on the First Node” on page 76
•
“Starting the Installation and Accepting the License Agreement” on page 78
•
“Installing the Interplay | Engine Using Custom Mode” on page 79
•
“Checking the Status of the Cluster Role” on page 91
•
“Creating the Database Share Manually” on page 92
•
“Adding a Second IP Address (Dual-Connected Configuration)” on page 93
c
Shut down the second node while installing Interplay Engine for the first time.
Preparation for Installing on the First Node
You are ready to start installing the Interplay Engine on the first node. During setup you must enter
the following cluster-related information:
•
Virtual IP Address: the Interplay Engine service IP address of the cluster role. For a list of
example names, see “List of IP Addresses and Network Names” on page 30.
•
Subnet Mask: the subnet mask on the local network.
•
Public Network: the name of the public network connection.
-
For a redundant-switch configuration, type Public, or whatever name you assigned in
“Renaming the Local Area Network Interface on Each Node” on page 37.
-
For a dual-connection configuration, type Left-subnet or whatever name you assigned in
“Renaming the Cluster Networks in the Failover Cluster Manager” on page 62. For a dual-connection configuration, you set the other public network connection after the installation.
See “Checking the Status of the Cluster Role” on page 91.
To check the public network connection on the first node, open the Networks view in the
Failover Cluster Manager and look up the name there.
•  Shared Drive: the letter for the shared drive that holds the database. Use S: for the shared drive letter. You need to make sure this drive is online. See “Bringing the Shared Database Drive Online” on page 76.
•  Cluster Account User and Password (Server Execution User): the domain account that is used to run the cluster. See “Before You Begin the Server Failover Installation” on page 27.
c  Shut down the second node when installing Interplay Engine for the first time.
n  When installing the Interplay Engine for the first time on a machine with a failover cluster, you are asked to choose between clustered and regular installation. The installation on the second node (or later updates) reuses the configuration from the first installation without allowing you to change the cluster-specific settings. In other words, it is not possible to change the configuration settings without uninstalling the Interplay Engine.
Bringing the Shared Database Drive Online
You need to make sure that the shared database drive (S:) is online.
To bring the shared database drive online:
1. Shut down the second node.
2. On the first node, open Disk Management by doing one of the following:
t
Right-click This PC and select Manage. From the Tools menu, select Computer
Management. In the Computer Management list, select Storage > Disk Management.
t
Right-click Start, click search, type Disk, and select “Create and format hard disk
partitions.”
The Disk Management window opens. The following illustration shows the shared storage drives
labeled Disk 1 and Disk 2. Disk 1 is online, and Disk 2 is offline.
3. Right-click Disk 2 and select Online.
4. Make sure the drive letter is correct (S:) and the drive is named Database. If not, you can change
it here. Right-click the disk name and letter (right-column) and select Change Drive Letter or
Path.
If you attempt to change the drive letter, you receive a warning telling you that some programs that rely on drive letters might not run correctly and asking if you want to continue. Click Yes.
Starting the Installation and Accepting the License Agreement
To start the installation:
1. Make sure the second node is shut down.
2. On the first node, start the Avid Interplay Servers installer.
A start screen opens.
3. Select the following from the Interplay Server Installer Main Menu:
Servers > Avid Interplay Engine > Avid Interplay Engine
The Welcome dialog box opens.
4. Close all Windows programs before proceeding with the installation.
5. Information about the installation of Apache is provided in the Welcome dialog box. Read the
text and then click Next.
The License Agreement dialog box opens.
6. Read the license agreement information and then accept the license agreement by selecting “I
accept the agreement.” Click Next.
The Specify Installation Type dialog box opens.
7. Continue the installation as described in the next topic.
c
If you receive a message that the Avid Workgroup Name resource was not found, you need to
check the registry. See “Changing the Resource Name of the Avid Workgroup Server” on
page 97.
Installing the Interplay | Engine Using Custom Mode
The first time you install the Interplay Engine on a cluster system, use the Custom installation mode. Custom mode lets you specify all the available options for the installation and is the recommended option.
The following procedures are used to perform a Custom installation of the Interplay Engine:
•
“Specifying Cluster Mode During a Custom Installation” on page 79
•
“Specifying the Interplay Engine Details” on page 80
•
“Specifying the Interplay Engine Service Name” on page 81
•
“Specifying the Destination Location” on page 82
•
“Specifying the Default Database Folder” on page 83
•
“Specifying the Share Name” on page 84
•
“Specifying the Configuration Server” on page 85
•
“Specifying the Server User” on page 86
•
“Specifying the Preview Server Cache” on page 87
•
“Enabling Email Notifications” on page 87
•
“Installing the Interplay Engine for a Custom Installation on the First Node” on page 89
For information about updating the installation, see “Updating a Clustered Installation (Rolling
Upgrade)” on page 104.
n
If the Interplay Engine installer fails and you receive a message about a required Windows Hotfix or
Update, but it is already installed, reboot the server to make sure that the installation is completed.
Specifying Cluster Mode During a Custom Installation
To specify cluster mode:
1. In the Specify Installation Type dialog box, select Custom.
2. Click Next.
The Specify Cluster Mode dialog box opens.
3. Select Cluster and click Next to continue the installation in cluster mode.
The Specify Interplay Engine Details dialog box opens.
Specifying the Interplay Engine Details
In this dialog box, provide details about the Interplay Engine.
To specify the Interplay Engine details:
1. Type the following values:
-
Virtual IP address: This is the Interplay Engine service IP Address, not the failover cluster IP
address. For a list of examples, see “List of IP Addresses and Network Names” on page 30.
For a dual-connected configuration, you set the other public network connection after the
installation. See “Adding a Second IP Address (Dual-Connected Configuration)” on
page 93.
-
Subnet Mask: The subnet mask on the local network.
-
Public Network: For a redundant-switch configuration, type Public, or whatever name you
assigned in “Renaming the Local Area Network Interface on Each Node” on page 37. For a
dual-connected configuration, type the name of the public network on the first node, for
example, Left, or whatever name you assigned in “Renaming the Cluster Networks in the
Failover Cluster Manager” on page 62. This must be the cluster resource name.
To check the name of the public network on the first node, open the Networks view in the
Failover Cluster Manager and look up the name there.
-  Shared Drive: The letter of the shared drive that is used to store the database. Use S: for the shared drive letter.
c  Make sure you type the correct information here, as this data cannot be changed afterwards. Should you require any changes to the above values later, you will need to uninstall the server on both nodes.
2. Click Next.
The Specify Interplay Engine Name dialog box opens.
Specifying the Interplay Engine Service Name
In this dialog box, type the name of the Interplay Engine service.
To specify the Interplay Engine name:
1. Specify the public names for the Avid Interplay Engine service by typing the following values:
-
The Network Name will be associated with the virtual IP Address that you entered in the
previous Interplay Engine Details dialog box. This is the Interplay Engine service name (see
“List of IP Addresses and Network Names” on page 30). It must be a new, unused name, and
must be registered in the DNS so that clients can find the server without having to specify its
address.
-
The Server Name is used by clients to identify the server. If you only use Avid Interplay
Clients on Windows computers, you can use the Network Name as the server name. If you
use several platforms as client systems, such as Macintosh® and Linux®, you need to specify
the static IP address that you entered for the cluster role in the previous dialog box.
Macintosh systems are not always able to map server names to IP addresses. If you type a
static IP address, make sure this IP address is not provided by a DHCP server.
2. Click Next.
The Specify Destination Location dialog box opens.
Specifying the Destination Location
In this dialog box specify the folder in which you want to install the Interplay Engine program files.
To specify the destination location:
1. Avid recommends that you keep the default path C:\Program Files\Avid\Avid Interplay Engine.
c
Under no circumstances attempt to install to a shared disk; independent installations are
required on both nodes. This is because local changes are also necessary on both machines.
Also, with independent installations you can use a rolling upgrade approach later, upgrading
each node individually without affecting the operation of the cluster.
2. Click Next.
The Specify Default Database Folder dialog box opens.
Specifying the Default Database Folder
In this dialog box specify the folder where the database data is stored.
To specify the default database folder:
1. Type S:\Workgroup_Databases. Make sure the path specifies the shared drive (S:).
This folder must reside on the shared drive that is owned by the cluster role of the server. You
must use this shared drive resource so that it can be monitored and managed by the Cluster
service. The drive must be assigned to the physical drive resource that is mounted under the same
drive letter on the other machine.
2. Click Next.
The Specify Share Name dialog box opens.
Specifying the Share Name
In this dialog box specify a share name to be used for the database folder.
To specify the share name:
1. Accept the default share name.
Avid recommends you use the default share name WG_Database$. This name is visible on all
client platforms, such as Windows NT, Windows 2000, and Windows XP. The “$” at the end
makes the share invisible if you browse through the network with the Windows Explorer. For
security reasons, Avid recommends using a “$” at the end of the share name. If you use the
default settings, the directory S:\Workgroup_Databases is accessible as
\\InterplayEngine\WG_Database$.
2. Click Next.
This step takes a few minutes. When finished the Specify Configuration Server dialog box opens.
Specifying the Configuration Server
In this dialog box, indicate whether this server is to act as a Central Configuration Server.
A Central Configuration Server (CCS) is an Avid Interplay Engine with a special module that is used
to store server and database-spanning information. For more information, see the Interplay | Engine
and Interplay | Archive Engine Administration Guide.
To specify the server to act as the CCS server:
1. Select either the server you are installing or a previously installed server to act as the Central
Configuration Server.
Typically you are working with only one server, so the appropriate choice is “This Avid Interplay
Engine,” which is the default.
If you need to specify a different server as the CCS (for example, if an Interplay Archive Engine
is being used as the CCS), select “Another Avid Interplay Engine.” You need to type the name of
the other server to be used as the CCS in the next dialog box.
c
Only use a CCS that is at least as highly available as this cluster installation, typically another clustered installation.
If you specify the wrong CCS, you can change the setting later on the server machine in the
Windows Registry. See “Automatic Server Failover Tips and Rules” on page 107.
2. Click Next.
The Specify Server User dialog box opens.
Specifying the Server User
In this dialog box, define the Cluster account (Server Execution User) used to run the Avid Interplay
Engine.
The Server Execution User is the Windows domain user that runs the Interplay Engine. This account
is automatically added to the Local Administrators group on the server. See “Before You Begin the
Server Failover Installation” on page 27.
To specify the Server Execution User:
1. Type the Cluster account user login information.
c  The installer cannot check the username or password you type in this dialog. Make sure that the password is set correctly, or else you will need to uninstall the server and repeat the entire installation procedure. Avid does not recommend changing the Server Execution User in cluster mode afterwards, so choose carefully.
c  When typing the domain name do not use the full DNS name such as mydomain.company.com, because the DCOM part of the server will be unable to start. You should use the NetBIOS name, for example, mydomain.
2. Click Next.
The Specify Preview Server Cache dialog box opens.
If necessary, you can change the name of the Server Execution User after the installation. For more
information, see “Troubleshooting the Server Execution User Account” and “Re-creating the Server
Execution User” in the Interplay | Engine and Interplay | Archive Engine Administration Guide and
the Interplay ReadMe.
Specifying the Preview Server Cache
In this dialog box, specify the path for the cache folder.
n
For more information on the Preview Server cache and Preview Server configuration, see “Avid
Workgroup Preview Server Service” in the Interplay | Engine and Interplay | Archive Engine
Administration Guide.
To specify the preview server cache folder:
1. Type or browse to the path of the server cache folder. Typically, the default path is used.
2. Click Next.
The Enable Email Notification dialog box opens if you are installing the Avid Interplay Engine
for the first time.
Enabling Email Notifications
The first time you install the Avid Interplay Engine, the Enable Email Notification dialog box opens.
The email notification feature sends emails to your administrator when special events, such as
“Cluster Failure,” “Disk Full,” and “Out Of Memory,” occur. Activate email notification if you want
to receive emails about special events such as server or cluster failures.
To enable email notification:
1. (Option) Select Enable email notification on server events.
The Email Notification Details dialog box opens.
2. Type the administrator's email address and the email address of the server, which is the sender.
If an event, such as “Resource Failure” or “Disk Full” occurs on the server machine, the
administrator receives an email from the sender's email account explaining the problem, so that
the administrator can react to the problem. You also need to type the static IP address of your
SMTP server. The notification feature needs the SMTP server in order to send emails. If you do
not know this IP, ask your administrator.
3. Click Next.
The installer modifies the file Config.xml in the Workgroup_Data\Server\Config\Config
directory with your settings. If you need to change these settings, edit Config.xml.
The Ready to Install dialog box opens.
Installing the Interplay Engine for a Custom Installation on the First Node
In this dialog box, begin the installation of the engine software.
To install the Interplay Engine software:
1. Click Next.
Use the Back button to review or change the data you have entered. You can also terminate the
installer using the Cancel button, because no changes have been made to the system yet.
The first time you install the software, a dialog box opens and asks if you want to install the
Sentinel driver. This driver is used by the licensing system.
2. Click Continue.
The Installation Completed dialog box opens after the installation is completed.
The Windows Firewall could be on or off, depending on the customer’s policies. If the Firewall is
turned on, you get messages that the Windows Firewall has blocked nxnserver.exe (the Interplay
Engine) and the Apache server from public networks.
If your customer wants to allow communication on public networks, select the check box for
“Public networks, such as those in airports and coffee shops,” then click “Allow access.”
n
The Windows Firewall service must be enabled for proper operation of a failover cluster. Note that
enabling the service is different from enabling or disabling the firewall itself and firewall rules.
3. Do one of the following:
t
Click Finish.
t
Analyze and resolve any issues or failures reported.
4. Click OK if prompted to restart the system.
The installation procedure requires the machine to restart (up to twice). For this reason it is very
important that the other node is shut down; otherwise, the current node loses ownership of the
Avid Workgroup Server cluster role. This applies to the installation on the first node only.
n
Subsequent installations should be run as described in “Updating a Clustered Installation (Rolling
Upgrade)” on page 104 or in the Interplay | Production ReadMe.
Checking the Status of the Cluster Role
After installing the Interplay Engine, check the status of the resources in the Avid Workgroup Server
cluster role.
To check the status of the cluster role:
1. After the installation is complete, right-click My Computer and select Manage.
The Server Manager window opens.
2. In the Server Manager list, open Features > Failover Cluster Manager > cluster_name.
3. Click Roles.
The Avid Workgroup Server role is displayed.
4. Click the Resources tab.
The list of resources should look similar to those in the following illustration.
The Avid Workgroup Disk resources, Server Name, and File Server should be online and all
other resources offline. S$ and WG_Database$ should be listed in the Shared Folders tab.
Take one of the following steps:
-
If the File Server resource or the shared folder WG_Database$ is missing, you must create it
manually, as described in “Creating the Database Share Manually” on page 92.
-
If you are setting up a redundant-switch configuration, leave this node running so that it
maintains ownership of the cluster role and proceed to “Installing the Interplay | Engine on
the Second Node” on page 99.
-
If you are setting up a dual-connected configuration, proceed to “Adding a Second IP
Address (Dual-Connected Configuration)” on page 93.
Avid does not recommend starting the server at this stage, because it is not installed on the other
node and a failover would be impossible.
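If the cluster.exe command-line tool is installed (it is used elsewhere in this guide), you can also check the resources and the database share from an elevated command prompt. The following is a minimal sketch; the resource and share names are the defaults used in this guide.
rem List all cluster resources with their owner group, node, and status
cluster res
rem Confirm that the database share exists and points to S:\Workgroup_Databases
net share WG_Database$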
Creating the Database Share Manually
If the File Server resource or the database share (WG_Database$) is not created (see “Checking the
Status of the Cluster Role” on page 91), you can create it manually by using the following procedure.
c
If you copy the commands and paste them into a Command Prompt window, you must replace
any line breaks with a blank space.
To create the database share and File Server resource manually:
1. In the Failover Cluster Manager, make sure that the “Avid Workgroup Disk” resource (the S:
drive) is online.
2. Open a command prompt with full administrator permissions (also referred to as an elevated
command prompt).
3. To create the database share, enter the following command:
net share WG_Database$=S:\Workgroup_Databases /UNLIMITED /GRANT:users,FULL
/GRANT:Everyone,FULL /REMARK:"Avid Interplay database directory" /Y
If the command is successful the following message is displayed:
WG_Database$ was shared successfully.
4. Enter the following command. Substitute the virtual host name of the Interplay Engine service
for ENGINESERVER.
cluster res "File Server (\\ENGINESERVER)" /priv MyShare="WG_Database$":str
No message is displayed for a successful command.
5. Enter the following command. Again, substitute the virtual host name of the Interplay Engine
service for ENGINESERVER.
cluster res "Avid Workgroup Engine Monitor" /adddep:"File Server
(\\ENGINESERVER)"
If the command is successful the following message is displayed:
Making resource 'Avid Workgroup Engine Monitor' depend on resource 'File
Server (\\ENGINESERVER)'...
6. Make sure the File Server resource and the database share (WG_Database$) are listed in the
Failover Cluster Manager (see “Checking the Status of the Cluster Role” on page 91).
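To double-check the result from the same elevated command prompt, the following sketch lists the share and the private properties of the File Server resource; as above, substitute the virtual host name of the Interplay Engine service for ENGINESERVER.
rem Display the WG_Database$ share and its path
net share WG_Database$
rem Display the private properties of the File Server resource (MyShare should list WG_Database$)
cluster res "File Server (\\ENGINESERVER)" /priv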
Adding a Second IP Address (Dual-Connected Configuration)
If you are setting up a dual-connected configuration, you need to use the Failover Cluster Manager to
add a second IP address.
To add a second IP address:
1. Right-click My Computer and select Manage.
The Server Manager window opens.
2. In the Server Manager list, open Features > Failover Cluster Manager > cluster_name.
3. Select Avid Workgroup Server and click the Resources tab.
4. Bring the Name, IP Address, and File Server resources offline by doing one of the following:
-
Right-click the resource and select “Take Offline.”
-
Select all resources and select “Take Offline” in the Actions panel of the Server Manager
window.
The following illustration shows the resources offline.
5. Right-click the Name resource and select Properties.
The Properties dialog box opens.
c
Note that the Resource Name is listed as “Avid Workgroup Name.” Make sure to check the
Resource Name after adding the second IP address and bringing the resources online later in this procedure.
If the Kerberos Status is offline, you can continue with the procedure. After bringing the server
online, the Kerberos Status should be OK.
6. Click the Add button below the IP Addresses list.
The IP Address dialog box opens.
The second sub-network and a static IP Address are already displayed.
7. Type the second Interplay Engine service IP address. See “List of IP Addresses and Network
Names” on page 30. Click OK.
The Properties dialog box is displayed with two networks and two IP addresses.
8. Check that you entered the IP address correctly, then click Apply.
9. Click the Dependencies tab and check that the second IP address was added, with an OR in the
AND/OR column.
10. Click OK.
The Resources screen should look similar to the following illustration.
11. Bring the Name, both IP addresses, and the File Server resource online by doing one of the
following:
-
Right-click the resource and select “Bring Online.”
-
Select the resources and select “Bring Online” in the Actions panel.
The following illustration shows the resources online.
12. Right-click the Name resource and select Properties.
The Resource Name must be listed as “Avid Workgroup Name.” If it is not, see “Changing the
Resource Name of the Avid Workgroup Server” on page 97.
13. Leave this node running so that it maintains ownership of the cluster role and proceed to
“Installing the Interplay | Engine on the Second Node” on page 99.
Changing the Resource Name of the Avid Workgroup Server
If you find that the resource name of the Avid Workgroup Server application is not “Avid Workgroup
Name” (as displayed in the properties for the Server Name), you need to change the name in the
Windows registry.
To change the resource name of the Avid Workgroup Server:
1. On the node hosting the Avid Workgroup Server (the active node), open the registry editor and
navigate to the key HKEY_LOCAL_MACHINE\Cluster\Resources.
c
If you are installing a dual-connected cluster, make sure to edit the “Cluster” key. Do not edit
other keys that include the word “Cluster,” such as the “0.Cluster” key.
2. Browse through the GUID-named subkeys, looking for the one subkey where the value “Type” is
set to “Network Name” and the value “Name” is set to <incorrect_name>. (A command-line
sketch for locating this subkey follows this procedure.)
3. Change the value “Name” to “Avid Workgroup Name.”
4. Do the following to shut down the cluster:
c
Make sure you have edited the registry entry before you shut down the cluster.
a.
In the Failover Cluster Manager tree (left panel) select the cluster. In the following example,
the cluster name is muc-vtlasclu1.VTL.local.
b. In the context menu or the Actions panel on the right side, select “More Actions > Shutdown
Cluster.”
5. Do the following to bring the cluster on line:
a.
In the Failover Cluster Manager tree (left panel) select the cluster.
b. In the context menu or the Actions panel on the right side, select “Start Cluster.”
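The GUID-named subkey in step 2 can also be located from an elevated command prompt. The following is a sketch only; it assumes the default key path shown in step 1, and <GUID> is a placeholder for the subkey name that the query reports.
rem Search the cluster hive for resources whose data contains "Network Name"
reg query HKEY_LOCAL_MACHINE\Cluster\Resources /s /f "Network Name" /d
rem Example only: set the Name value of the reported subkey (replace <GUID>)
reg add HKEY_LOCAL_MACHINE\Cluster\Resources\<GUID> /v Name /t REG_SZ /d "Avid Workgroup Name" /f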
Installing the Interplay | Engine on the Second Node
To install the Interplay Engine on the second node:
1. Leave the first machine running so that it maintains ownership of the cluster role and start the
second node.
c
Do not attempt to move the cluster role to the second node, and do not shut down the first node
while the second node is up, before the installation is completed on the second node.
c
Do not attempt to initiate a failover before the installation is completed on the second node and
you have created an Interplay database. See “Testing the Complete Installation” on page 102.
2. Perform the installation procedure for the second node as described in “Installing the Interplay |
Engine on the First Node” on page 75. In contrast to the installation on the first node, the
installer automatically detects all settings previously entered on the first node.
The Attention dialog box opens.
3. Click OK.
4. The same installation dialog boxes that you saw before will open, except for the cluster-related
settings, which only need to be entered once. Enter the requested information and allow the
installation to proceed.
c
Make sure you use the installation mode that you used for the first node and enter the same
information throughout the installer. Using different values results in a corrupted installation.
5. The installation procedure requires the machine to restart (up to twice). Allow the restart as
requested.
c
If you receive a message that the Avid Workgroup Name resource was not found, you need to
check the registry. See “Changing the Resource Name of the Avid Workgroup Server” on
page 97.
Bringing the Interplay | Engine Online
To bring the Interplay Engine online:
1. Open the Failover Cluster Manager and select cluster_name > Roles.
The Avid Workgroup Server role is displayed.
2. Select Avid Workgroup Server, and in the Actions list, select Start Role.
All resources are now online, as shown in the following illustration. To view the resources, click
the Resources tab.
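If you prefer the command line, the role can also be started with cluster.exe, assuming the tool is installed and the role uses the default name. A minimal sketch:
rem Bring the Avid Workgroup Server role online
cluster group "Avid Workgroup Server" /online
rem Verify that all of its resources report Online
cluster res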
After Installing the Interplay | Engine
After you install the Interplay Engine, install the following applications on both nodes:
•
Interplay Access: From the Interplay Server Installer Main Menu, select Servers > Avid Interplay
Engine > Avid Interplay Access.
•
Avid shared-storage client (if not already installed).
n
If you cannot log in or connect to the Interplay Engine, make sure the database share
WG_Database$ exists. You might get the following error message when you try to log in: “The
network name cannot be found (0x80070043).” For more information, see “Creating the Database
Share Manually” on page 92.
Then create an Interplay database, as described in “Creating an Interplay | Production Database” on
page 101.
Creating an Interplay | Production Database
Before testing the failover cluster, you need to create a database. The following procedure describes
basic information about creating a database. For complete information, see the Interplay | Engine
and Interplay | Archive Engine Administration Guide.
To create an Interplay database:
1. Start the Interplay Administrator and log in.
2. In the Database section of the Interplay Administrator window, click the Create Database icon.
The Create Database view opens.
3. In the New Database Information area, leave the default “AvidWG” in the Database Name text
box. For an archive database, leave the default “AvidAM.” These are the only two supported
database names.
4. Type a description for the database in the Description text box, such as “Main Production
Server.”
5. Select “Create default Avid Interplay structure.”
After the database is created, a set of default folders within the database are visible in Interplay
Access and other Interplay clients. For more information about these folders, see the
Interplay | Access User’s Guide.
6. Keep the root folder for the New Database Location (Meta Data).
The metadata database must reside on the Interplay Engine server.
7. Keep the root folder for the New Data Location (Assets).
8. Click Create to create directories and files for the database.
The Interplay database is created.
Testing the Complete Installation
After you complete all the previously described steps, you are ready to test the installation.
Familiarize yourself with the Failover Cluster Manager and review the different failover-related
settings.
n
If you want to test the Microsoft cluster failover process again, see “Testing the Cluster Installation”
on page 72.
To test the complete installation:
1. Bring the Interplay Engine online, as described in “Bringing the Interplay | Engine Online” on
page 100.
2. Make sure you created a database (see “Creating an Interplay | Production Database” on
page 101).
You can use the default license for testing. Then install the permanent licenses, as described in
“Installing a Permanent License” on page 102.
3. Start Interplay Access and add some files to the database.
4. Start the second node, if it is not already running.
5. In the Failover Cluster Manager, initiate a failover by selecting Avid Workgroup Server and then
selecting Move > Best Possible Node from the Actions menu. Select another node.
After the move is complete, all resources should remain online and the target node should be the
current owner.
You can also simulate a failure by right-clicking a resource and selecting More Actions >
Simulate Failure.
n
A failure of a resource does not necessarily initiate failover of the complete Avid Workgroup Server
role.
6. You might also want to experiment by terminating the Interplay Engine manually using the
Windows Task Manager (NxNServer.exe). This is also a good way to get familiar with the
failover settings which can be found in the Properties dialog box of the Avid Workgroup Server
and on the Policies tab in the Properties dialog box of the individual resources.
7. Look at the related settings of the Avid Workgroup Server. If you need to change any
configuration files, make sure that the Avid Workgroup Disk resource is online; the configuration
files can be found on the resource drive in the Workgroup_Data folder.
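A failover can also be initiated from an elevated command prompt with cluster.exe. The following sketch assumes the default role name; NODE2 is a placeholder for the name of the target node.
rem Move the Avid Workgroup Server role to the other node
cluster group "Avid Workgroup Server" /moveto:NODE2
rem Confirm the new owner node and that the role is online
cluster group "Avid Workgroup Server"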
Installing a Permanent License
During Interplay Engine installation a temporary license for one user is activated automatically so
that you can administer and install the system. There is no time limit for this license.
Starting with Interplay Production v3.3, new licenses for Interplay components are managed through
software activation IDs. In previous versions, licenses were managed through hardware application
keys (dongles). Dongles continue to be supported for existing licenses, but new licenses require
software licensing.
A set of permanent licenses is provided by Avid in one of two ways:
•
As a software license
For a clustered engine, Avid supplies a single license that should be used for both nodes in the
cluster. If you are licensing a clustered engine, follow the published procedure to activate the
license on each node of the cluster, using the same System ID and Activation ID for each node.
There are no special requirements to activate or deactivate a node before licensing. Log in
directly to each node and use the local version of the Avid License Control application or Avid
Application Manager (for Interplay Engine v3.8 and later) to install the license.
•
As a file with the extension .nxn on a USB flash drive or another delivery mechanism
For hardware licensing (dongle), these permanent licenses must match the Hardware ID of the
dongle. After installation, the license information is stored in a Windows registry key. Licenses
for an Interplay Engine failover cluster are associated with two Hardware IDs.
To install a permanent license through software licensing:
t
Use the Avid License Control application or Avid Application Manager (for Interplay Engine
v3.8 and later).
See “Software Licensing for Interplay Production” in the Interplay | Production Software
Installation and Configuration Guide.
To install a permanent license by using a dongle:
1. Make sure a dongle is connected to a USB port on each server.
2. Make a folder for the license file on the root directory (C:\) of an Interplay Engine server or
another server. For example:
C:\Interplay_Licenses
3. Connect the USB drive containing the license file and access the drive:
a.
Double-click the computer icon on the desktop.
b. Double-click the USB flash drive icon.
4. Copy the license file (*.nxn) into the new folder you created.
n
You can also use the license file directly from the USB flash drive. The advantage of copying the license file to a
server is that you have easy access to it if you should ever need it in the future.
5. Start and log in to the Interplay Administrator.
6. In the Server section of the Interplay Administrator window, click the Licenses icon.
7. Click the Import license button.
8. Browse for the *.nxn file.
9. Select the file and click Open.
You see information about the permanent license in the License Types area.
For more information on managing licenses, see the Interplay | Engine and Interplay | Archive
Engine Administration Guide.
Updating a Clustered Installation (Rolling Upgrade)
A major benefit of a clustered installation is that you can perform “rolling upgrades.” You can keep
one node in production while you update the installation on the other node, then move the cluster
role over and update the remaining node as well.
n
For information about updating specific versions of the Interplay Engine and a cluster, see the Avid
Interplay ReadMe. The ReadMe describes an alternative method of updating a cluster, in which you
lock and deactivate the database before you begin the update.
When updating a clustered installation, the settings that were entered to set up the cluster resources
cannot be changed. Additionally, all other values must be reused, so Avid strongly recommends
choosing the Typical installation mode. Changes to the fundamental attributes can only be achieved
by uninstalling both nodes first and installing again with the new settings.
Make sure you follow the procedure in this order; otherwise, you might end up with a corrupted
installation.
To update a cluster:
1. On either node, determine which node is active:
a.
Right-click My Computer and select Manage. The Server Manager window opens.
b. In the Server Manager list, open Features and click Failover Cluster Manager.
c.
Click Roles.
d. On the Summary tab, check the name of the Owner Node.
Consider this the active node or the first node.
2. Run the Interplay Engine installer to update the installation on the non-active node (second
node). Select Typical mode to reuse values set during the previous installation on that node.
Restart as requested and continue with the installation.
c
Do not move the Avid Workgroup Server to the second node yet.
3. Make sure that the first node is active. Run the Interplay Engine installer to update the installation on
the first node. Select Typical mode so that all values are reused.
4. The installer displays a dialog box that asks you to move the Avid Workgroup Server to the
second node. Move the application, then click OK in the installation dialog box to continue.
Restart as requested and continue with the installation. The installer will ask you to restart again.
After completing the above steps, your entire clustered installation is updated to the new version.
Should you encounter any complications or face a specialized situation, contact Avid Support as
instructed in “If You Need Help” on page 8.
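As an alternative to step 1, you can determine the owner node from an elevated command prompt, assuming the cluster.exe command-line tool is installed:
rem Show the current owner node and state of the Avid Workgroup Server role
cluster group "Avid Workgroup Server"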
Uninstalling the Interplay | Engine on a Clustered
System
To uninstall the Avid Interplay Engine, use the Avid Interplay Engine uninstaller, first on the inactive
node, then on the active node.
c
The uninstall mechanism of the cluster resources only functions properly if the names of the
resources or the cluster roles are not changed. Never change these names.
To uninstall the Interplay Engine:
1. If you plan to reinstall the Interplay Engine and reuse the existing database, create a complete
backup of the AvidWG database and the _InternalData database in S:\Workgroup_Databases.
For information about creating a backup, see “Creating and Restoring Database Backups” in the
Interplay | Engine and Interplay | Archive Engine Administration Guide.
2. (Dual-connected configuration only) Remove the second network address within the Avid
Workgroup Server group.
a.
In the Cluster Administrator, right-click Avid Workgroup Server.
b. Right-click Avid Workgroup Address 2 and select Remove.
3. Make sure that both nodes are running before you start the uninstaller.
4. On the inactive node (the node that does not own the Avid Workgroup Server cluster role), start
the uninstaller by selecting Programs > Avid > Avid Interplay Engine > Uninstall Avid Interplay
Engine.
5. When you are asked if you want to delete the cluster resources, click No.
6. When you are asked if you want to restart the system, click Yes.
7. At the end of the uninstallation process, if you are asked to restart the system, click Yes.
8. After the uninstallation on the inactive node is complete, wait until the last restart is done. Then
open the Failover Cluster Manager on the active node and make sure the inactive node is shown
as online.
9. Start the uninstallation on the active node (the node that owns the Avid Workgroup Server cluster
role).
10. When you are asked if you want to delete the cluster resources, click Yes.
A confirmation dialog box opens.
11. Click Yes.
12. When you are asked if you want to restart the system, click Yes.
13. At the end of the uninstallation process, if you are asked to restart the system, click Yes.
14. After the uninstallation is complete, but before you reinstall the Interplay Engine, rename the
folder S:\Workgroup_Data (for example, S:\Workgroup_Data_Old) so that it will be preserved
during the reinstallation process. In case of a problem with the new installation, you can check
the old configuration information in that folder.
c
If you do not rename the Workgroup_Data folder, the reinstallation might fail because of old
configuration files within it. Make sure to rename the folder before you reinstall the
Interplay Engine.
4 Automatic Server Failover Tips and Rules
This chapter provides some important tips and rules to use when configuring the automatic server
failover.
Don't Access the Interplay Engine Through Individual Nodes
Don't access the Interplay Engine directly through the individual machines (nodes) of the cluster. Use
the virtual network name or IP address that has been assigned to the Interplay Engine resource group
(see “List of IP Addresses and Network Names” on page 30).
Make Sure to Connect to the Interplay Engine Resource Group
The network names and the virtual IP addresses resolve to the physical machine they are being
hosted on. For example, it is possible to mistakenly connect to the Interplay Engine using the
network name or IP address of the cluster group (see “List of IP Addresses and Network Names” on
page 30). The server can also be reached through the alternative address, but only while it is online on the
same node. Therefore, under no circumstances should you connect clients to a network name other than
the one that was used to set up the Interplay Engine resource group.
Do Not Rename Resources
Do not rename resources. The resource plugin, the installer, and the uninstaller all depend on the
names of the cluster resources. These are assigned by the installer and even though it is possible to
modify them using the cluster administrator, doing so corrupts the installation and is most likely to
result in the server not functioning properly.
Do Not Install the Interplay Engine Server on a Shared Disk
The Interplay Engine must be installed on the local disk of the cluster nodes and not on a shared
resource. This is because local changes are also necessary on both machines. Also, with independent
installations you can later use a rolling upgrade approach, upgrading each node individually without
affecting the operation of the cluster. The Microsoft documentation also strongly advises against
installing applications on shared disks.
Do Not Change the Interplay Engine Server Execution User
The domain account that was entered when setting up the cluster (the Cluster Account; see “Before
You Begin the Server Failover Installation” on page 27) also has to be the Server Execution User of
the Interplay Engine. Given that you cannot easily change the cluster user, the Interplay Engine
execution user has to stay fixed as well. For more information, see “Troubleshooting the Server
Execution User Account” in the Interplay | Engine and Interplay | Archive Engine Administration
Guide.
Do Not Edit the Registry While the Server is Offline
If you edit the registry while the server is offline, you will lose your changes. This mistake is easy to
make, because it is easy to forget the implications of registry replication. Remember that the registry
is restored by the resource monitor before the process is put
online, thereby wiping out any changes that you made while the resource (the server) was offline.
Only changes that take place while the resource is online are accepted.
Do Not Remove the Dependencies of the Affiliated Services
The TCP-COM Bridge, the Preview Server, and the Server Browser services must be in the same
resource group and assigned to depend on the server. Removing these dependencies might speed up
some operations but can prevent automatic failure recovery in some scenarios.
Consider Disabling Failover When Experimenting
If you are performing changes that could make the Avid Interplay Engine fail, consider disabling
failover. The default behavior is to restart the server twice (threshold = 3) and then initiate the
failover, with the entire procedure repeating several times before final failure. This can take quite a
while.
Changing the CCS
If you specify the wrong Central Configuration Server (CCS), you can change the setting later on the
server machine in the Windows Registry under:
(32-bit OS) HKEY_LOCAL_MACHINE\Software\Avid Technology\Workgroup\DatabaseServer
(64-bit OS) HKEY_LOCAL_MACHINE\Software\Wow6432Node\Avid Technology\Workgroup\DatabaseServer
The string value CMS specifies the server. Make sure to set CMS to a valid entry while the
Interplay Engine is online; otherwise, your changes to the registry will not take effect. After the
registry is updated, stop and restart the server using the Cluster Administrator (in the Administration
Tools folder in Windows).
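For example, on a 64-bit system the value can be checked and changed from an elevated command prompt. This is a sketch only; CCSENGINE is a placeholder for the host name of the CCS, and, as noted above, the change must be made while the Interplay Engine is online.
rem Display the current CCS setting
reg query "HKEY_LOCAL_MACHINE\Software\Wow6432Node\Avid Technology\Workgroup\DatabaseServer" /v CMS
rem Point the CMS value at the new CCS (CCSENGINE is a placeholder)
reg add "HKEY_LOCAL_MACHINE\Software\Wow6432Node\Avid Technology\Workgroup\DatabaseServer" /v CMS /t REG_SZ /d CCSENGINE /f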
Specifying an incorrect CCS can prevent login. See “Troubleshooting Login Problems” in the
Interplay | Engine and Interplay | Archive Engine Administration Guide.
For more information, see “Understanding the Central Configuration Server” in the
Interplay | Engine and Interplay | Archive Engine Administration Guide.
5 Expanding the Database Volume for an
Interplay Engine Cluster
This chapter describes how to add drives to the HP MSA 2040 storage array to expand the drive
space available for the Interplay Production database. The procedure is described in the following
topics:
•
Before You Begin
•
Task 1: Add Drives to the MSA Storage Array
•
Task 2: Expand the Databases Volume Using the HP SMU (Version 2)
•
Task 2: Expand the Databases Volume Using the HP SMU (Version 3)
•
Task 3: Extend the Databases Volume in Windows Disk Management
Before You Begin
•
Obtain additional drives. The MSA array expansion was qualified with five of the following
drives:
-
HP 300GB 3.5in Internal Hard Drive - SAS - 15K RPM
Certified HP Vendor Item ID:J9V68A
Four will be configured to expand the Database volume and one will be configured as a spare.
•
Schedule a convenient time to perform the expansion. You do not need to take the Interplay
Engine offline. However, performance might be affected, so consider performing the expansion
during a maintenance window. Adding the drives to the existing RAID 10 Vdisk takes several
hours. Allow approximately 3 to 4 hours for the entire expansion.
•
Make sure you have a complete, recent backup of the Interplay database, created through the
Interplay Administrator.
•
Make sure you can access the HP Storage Management Utility (SMU).
The HP SMU is a web-based application. The following are default settings for accessing the
application:
-
IP address: http://10.0.0.2
-
User name: manage
-
Password: !manage
Check if these settings have been changed by an administrator.
The HP MSA firmware includes two different versions of the SMU (version 2 and version 3).
This chapter includes instructions for using either version:
-
“Task 2: Expand the Databases Volume Using the HP SMU (Version 2)” on page 110
-
“Task 2: Expand the Databases Volume Using the HP SMU (Version 3)” on page 115
Task 1: Add Drives to the MSA Storage Array
You do not need to shut down the storage array to add new drives.
To add the new drives:
1. Remove the blank insert from an available slot.
2. Insert the hard drive and tray.
3. Repeat this for each drive.
Task 2: Expand the Databases Volume Using the HP
SMU (Version 2)
This task requires you to use the HP Storage Management Utility (SMU) Version 2 to add the new
drives to the RAID 10 Vdisk and to expand the Databases volume. See “Before You Begin” on
page 109 for login information.
To expand the HP MSA Vdisk:
1. Open the HP SMU by typing the IP address in a browser.
The splash screen for SMU V3 opens.
2. Click “Click to launch previous version.”
The splash screen for SMU V2 opens.
3. Supply the user name and password and click Sign In.
4. In the Configuration View, select Physical > Enclosure 1.
The following illustration shows the five additional drives, labeled AVAIL.
5. In the Configuration View, right-click the Vdisk (named dg01 in the illustration) and select
Tools > Expand Vdisk.
The Expand Vdisk page is displayed.
6. In the “Additional number of sub-vdisks” field, select 2.
The SMU automatically creates two new mirrored sub-vdisks, named RAID1-4 and RAID1-5.
7. In the table, assign the available disks:
t
Select Disk-1.8 and Disk-1.9 for RAID1-4.
t
Select Disk-1.10 and Disk-1.11 for RAID1-5.
Leave Disk-1.12 as a spare.
The following illustration shows these assignments.
8. Click Expand Vdisk.
A message box asks you to confirm the operation. Click Yes.
Another message box tells you that expansion of the Vdisk was started. Click OK.
The process of adding the new paired drives to the RAID 10 Vdisk begins. This process can take
approximately 2.5 hours. When the process is complete, the SMU displays the additional space
as unallocated (green in the following illustration).
9. In the Configuration View, right-click Volume Databases and select Tools > Expand Volume.
10. On the Expand Volume page, select the entire amount of available space, then click Expand
Volume.
At the end of the process, the expanded Vdisk and Databases Volume are displayed.
11. Close the SMU.
Task 2: Expand the Databases Volume Using the HP
SMU (Version 3)
This task requires you to use the HP Storage Management Utility (SMU) Version 3 to add the new
drives to the RAID 10 disk group and to expand the Databases volume. See “Before You Begin” on
page 109 for login information.
To expand the HP MSA disk group:
1. Open the HP SMU by typing the IP address in a browser.
The splash screen for SMU V3 opens.
2. Sign in using the user name and password.
3. In the navigation bar on the left side of the screen, click System.
The following illustration shows the five additional drives, labeled SAS but without the gray
highlight.
4. In the navigation bar, click Pools.
5. From the Action menu, select Modify Disk Group.
6. In the Modify Disk Group dialog box, select Expand.
The dialog box enlarges to show the disk group and the available disks.
7. From the Additional sub-groups menu, select 2.
The SMU automatically creates two new mirrored sub-groups, named RAID1-4 and RAID1-5.
8. For each new RAID group, assign two of the available disks:
t
For RAID1-4, click the first two side-by-side disks.
t
For RAID1-5, click the next two side-by-side disks.
Leave one disk as a spare.
The following illustration shows these assignments.
9. Click Modify.
A message box describes how the expansion can take a significant amount of time and asks you
to confirm the operation. Click Yes.
Another message box tells you that the disk group was successfully modified. Click OK.
The process of adding the new paired drives to the RAID 10 Vdisk begins. This process can take
approximately 2.5 hours. You can track the progress on the Pools page, in the Related Disk
Groups section, under Current Job.
When the process is complete, the SMU displays the additional space as available. Note the
amount of available space, which you will need to enter in the Modify Volume dialog box.
10. In the navigation bar, click Volumes.
11. From the Action menu, click Modify Volume.
12. In the Modify Volume dialog box, type the available space exactly as displayed on the Pools
page (in this example, 599.4GB) and click OK.
At the end of the process, the new size of the expanded Databases Volume is displayed on the
Volumes page.
The new size of the disk group is also displayed on the Pools page.
13. Close the SMU.
Task 3: Extend the Databases Volume in Windows Disk
Management
This task requires you to open the Windows Disk Management page and extend the Databases
volume.
To extend the Databases volume:
1. On the online node of the cluster, open Computer Management > Disk Management.
2. Right-click Disk2, Database (S:), and select Extend Volume.
The Extend Volume Wizard opens.
3. On the Welcome page, click Next.
The Select Disks page is displayed, with Disk 2 selected.
4. Click Next.
The Completing page is displayed.
5. Click Finish.
The Database volume is extended.
6. Close the Disk Management window and the Computer Management window.
7. Perform a cluster failover.
The expansion is complete and the Interplay Database has the new space available. You can
check the size of the Database disk (Avid Workgroup Disk) in the Failover Cluster Manager.
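If you prefer a command-line approach, the extension in steps 1 through 5 can also be performed with a diskpart script run from an elevated command prompt on the online node (for example, diskpart /s extend_databases.txt). This is a sketch only; it assumes the Databases volume still has the drive letter S.
rem diskpart script: extend the Database (S:) volume into the new unallocated space
rem Run on the online node of the cluster
list volume
select volume S
extend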
6 Adding Storage for File Assets for an
Interplay Engine Cluster
This chapter describes how to add drives to the HP MSA 2040 storage array to expand the drive
space available for the Interplay Production database’s file assets. The procedure is described in the
following topics:
•
Before You Begin
•
Task 1: Add Drives to the MSA Storage Array
•
Task 2: Create a Disk and Volume Using the HP SMU V3
•
Task 3: Initialize the Volume in Windows Disk Management
•
Task 4: Add the Disk to the Failover Cluster Manager
•
Task 5: Copy the File Assets to the New Drive
•
Task 6: Mount the FileAssets Partition in the _Master Folder
•
Task 7: Create Cluster Dependencies for the New Disk
Before You Begin
•
Obtain additional drives. The MSA array expansion was qualified with three of the following
drives:
-
HP MSA 4TB 12G SAS 7.2K LFF (3.5in) 512e Midline 1yr Warranty Hard Drive
Certified HP Vendor Item ID:K2Q82A (Seagate ST4000NM0034 disk)
Other configurations are possible depending on customer requirements and the number of slots
available in the HP MSA. Creating at least one separate volume for file assets is required.
•
Determine the RAID Level for configuring the new volume. The MSA array expansion was
qualified with three drives configured as RAID Level 5. The RAID level you select depends on
the number of drives you are adding and the customer’s requirements. Use the following table for
guidance. If necessary, consult technical information about RAID levels.
Number of drives   Recommended RAID Level
2                  RAID Level 1
3                  RAID Level 1 plus spare, or RAID Level 5
4                  RAID Level 1 (two pairs), RAID Level 5, or RAID Level 6
•
Schedule a convenient time to perform the expansion. You need to bring the Interplay Engine
offline during the process, so this procedure is best performed during a maintenance window.
The configuration itself will take approximately one hour, with the engine offline for
approximately 5 to 15 minutes. In addition, allow time for the copying of file assets, which
depends on the number of file assets in the database.
•
Decide if you want to allocate the entire drive space to file assets, or reserve space for snapshots
or future expansion. See “Task 2: Create a Disk and Volume Using the HP SMU V3” on
page 126.
•
Make sure you have the following complete, recent backups:
-
Interplay database, created through the Interplay Administrator
-
_Master folder (file assets), created through a backup utility.
The Interplay Administrator does not have a backup mechanism for the _Master folder.
•
Make sure you can access the HP Storage Management Utility (SMU).
The HP SMU is a web-based application. To access the SMU, it needs to be connected to a LAN
through at least one of its Ethernet ports. The following are default settings for accessing the
application:
-
IP address: http://10.0.0.2
-
User name: manage
-
Password: !manage
Check if these settings have been changed by an administrator.
The HP MSA firmware includes two different versions of the SMU (version 2 and version 3).
This chapter includes instructions for using SMU version 3.
Task 1: Add Drives to the MSA Storage Array
You do not need to shut down the storage array to add new drives.
To add the new drives:
1. Remove the blank insert from an available slot.
2. Insert the hard drive and tray.
3. Repeat this for each drive.
Task 2: Create a Disk and Volume Using the HP SMU
V3
This topic provides instructions for creating a disk group for the added disks, and then creating a
volume in the new disk group, using the HP Storage Management Utility (SMU) Version 3.
To create a disk group:
1. Open the HP SMU by typing the IP address in a browser.
The splash screen for SMU V3 opens.
2. Sign in using the user name and password.
The Home screen opens.
3. In the navigation bar on the left side of the screen, click System and select View System.
The following illustration shows the three additional drives, labeled MDL, which is an HP name
for a “midline” drive. Click the drive to display disk information.
4. In the navigation bar, click Pools.
5. From the Action menu, select Add Disk Group.
The Add Disk Group dialog box is displayed.
6. For Type, select Linear.
The dialog box changes to the Linear options.
7. Specify the following information:
a.
Enter a name, for example, one that increments the name of the existing disk group.
b. Select the RAID Level. In this example, the three drives have been qualified with RAID
Level 5. For more information about RAID levels, see “Before You Begin” on page 124.
c.
Select the check boxes for the new drives.
The following illustration shows this information.
d. Click Add.
A progress bar is displayed. At the end of the process, a success message is displayed. Click
OK. The new disk group is displayed. If you select the name, information is displayed in the
Related Disk Groups section.
To create a new volume:
1. In the navigation bar, click Volumes.
2. In the Action menu, click Create Linear Volumes.
The Create Linear Volumes dialog box opens.
3. Do the following:
a.
For Pool, click the down arrow and select the new disk group, in this case, vd0002.
b. For Volume Name, enter a meaningful name, such as FileAssets.
c.
For Volume Size, you can specify the entire volume (the default) or reserve some of the
volume for future use. For example, you could enable snapshots (see the HP MSA
documentation). In this example, the entire volume is included.
d. Click OK.
A progress bar is displayed. At the end of the process, a success message is displayed. Click OK.
The new volume is added to the list of volumes.
To map the new volume:
1. Select the new volume.
2. From the Action menu, select Map Volumes.
The Map dialog box is displayed.
3. Select “All Other Initiators” and click the Map button.
The default mapping information is displayed. Accept these defaults.
4. Click Apply. A confirmation dialog is displayed. Click Yes. At the end of the process, a success
message is displayed. Click OK.
The Volumes page shows the new volume fully configured.
5. Sign out of the SMU.
Task 3: Initialize the Volume in Windows Disk
Management
This topic provides instructions for naming, bringing online, and initializing the new FileAssets
volume you created in the HP SMU, using the Windows Disk Management utility.
To initialize the FileAssets volume:
1. On Node 1, right-click This PC and select Manage. From the Tools menu, select Computer
Management. In the Computer Management list, select Storage > Disk Management.
The Disk Management window opens. The FileAssets volume is displayed as an unknown disk
that is offline.
2. Right-click the new disk and select Online.
3. Right-click the new disk, and select Initialize Disk.
The Initialize Disk dialog box opens.
4. Select the new disk, select GPT, and click OK.
n
The MBR partition style has a limit of 2 TB.
5. Use the New Simple Volume wizard to configure the volume as a partition.
a.
Right-click the new disk.
b. Select New Simple Volume.
The New Simple Volume wizard opens with the Specify Volume Size page.
6. Accept the volume size and click Next.
7. Assign the drive letter L and click Next.
You will remove this drive letter in a later step.
8. Name the volume label FileAssets, select “Perform a quick format,” and click Next.
The completion screen is displayed.
9. Click Finish.
At the end of the process the new disk is named and online.
Close Disk Management on Node 1.
10. On Node 2, open Disk Management and do the following:
a.
Right-click the new disk and select Online.
b. Right-click the FileAssets partition and select Change Drive Letter and Paths.
The Change Drive Letter and Paths dialog box opens.
c.
Click Change.
The Change Drive Letter or Path dialog box opens.
d. From the drive letter drop down menu, select L.
n
The disk does not need to be initialized, because the initialization was done on Node 1.
e.
Click OK.
A confirmation box asks if you want to continue. Click Yes.
The new disk is now named and online on Node 2.
11. Close Disk Management on Node 2.
Task 4: Add the Disk to the Failover Cluster Manager
This topic provides instructions for adding the FileAssets volume as a disk in the Windows Failover
Cluster Manager.
c
This task and all remaining tasks must be performed on the online node.
To add the new disk to the cluster:
1. On the online node, right-click This PC and select Manage. From the Tools menu, select Failover
Cluster Manager.
The Failover Cluster Manager opens.
2. In the navigation panel, select Storage > Disks.
The Disks pane is displayed.
3. In the Actions panel, select Add Disk.
The Add Disks to a Cluster dialog box opens.
4. In the dialog box, select the new disk and click OK.
The new disk is displayed in the cluster list as Cluster Disk 1.
5. Right-click Cluster Disk 1 and select Properties.
The Cluster Disk 1 Properties dialog box opens.
6. On the General tab, type a name for the disk, for example, Avid Workgroup File Assets, and
click OK.
7. In the Failover Cluster Manager navigation pane, select Roles.
8. In the Avid Workgroup Server menu, select Add Storage.
The Add Storage dialog box opens.
9. Select the check box for the new disk and click OK.
The new disk, named Avid Workgroup File Assets, is listed as storage for the Avid Workgroup
Server.
10. Keep Failover Cluster Manager open for use in Tasks 6 and 7.
Task 5: Copy the File Assets to the New Drive
This topic provides instructions for using the Robocopy program to copy the existing file assets
folder to the new drive.
c
This task and all remaining tasks must be performed on the online node.
Copying the existing file assets folder is likely to be the most time-consuming part of the
configuration process, depending on the size of the folder. The Interplay Engine can continue
running during the copying process, but best practice is to perform this copy during a maintenance
window. The following illustration shows the contents of the _Master folder, which holds the file
assets.
Avid recommends using a copy program such as Robocopy, which preserves timestamps for the file
assets and, through the /E parameter, copies all subfolders, including empty ones. The following procedure uses Robocopy,
executed from a command line.
To copy the file assets to the new drive:
1. On the online node, open a Windows command prompt with administrative rights.
2. Type the source, target, and /E parameter, using the following syntax:
C:\Windows\system32>robocopy source_directory destination_partition /E
For example:
C:\Windows\system32>robocopy S:\Workgroup_Databases\AvidWG\_Master L: /E
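The /E switch is the only required option. If you also want retries, a log file, and directory timestamps preserved, a variant such as the following can be used; the flag values and the log path are only examples.
rem Copy the _Master tree, keep directory timestamps, limit retries, and write a log
robocopy S:\Workgroup_Databases\AvidWG\_Master L: /E /DCOPY:T /R:3 /W:5 /LOG:C:\robocopy_master.log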
Task 6: Mount the FileAssets Partition in the _Master
Folder
This topic provides instructions for mounting the FileAssets partition in the _Master folder, using the
Disk Management utility.
c
c
You must take the Engine services offline for this task. The other resources, especially the disk
resources, must stay online.
This task and all remaining tasks must be performed on the online node.
To mount the FileAssets partition in the _Master folder:
1. In the Failover Cluster Manager, select Roles.
2. In the Roles section of the Avid Workgroup Server, right-click Avid Workgroup Engine Monitor
and select Take Offline.
Wait until all roles are offline.
3. In Windows Explorer, rename the original S:\Workgroup_Databases\AvidWG\_Master folder
_Master_Old. (_Master_Old will serve as a backup.)
4. Create a new folder named _Master.
The following illustration shows the new folder and the renamed folder.
5. Open the Disk Management utility.
6. Right-click the new FileAssets partition and select Change Drive Letter and Paths.
7. In the Change Drive letter and Paths dialog box, select drive letter L: and click Remove.
A warning is displayed. Click Yes.
8. Right-click the FileAssets partition and again select Change Drive Letter and Paths.
9. In the Change Drive letter and Paths dialog box, click Add.
The Add Drive Letter or Path dialog box opens.
10. Select Mount in the following empty NTFS folder and click Browse.
11. Navigate to the new _Master folder and click OK.
The path is displayed.
12. Click OK, then click OK again.
The disk now points to a path.
In Windows Explorer, the icon for the _Master folder has changed from folder to mount point. If
you double-click _Master, you see the file assets subfolders, which are located on the new drive.
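The same mount point can also be created with the mountvol command from an elevated command prompt, assuming the drive letter L: has already been removed. This is a sketch only; the volume GUID shown is a placeholder that you replace with the value mountvol reports for the FileAssets volume.
rem List volume GUID paths and their current mount points
mountvol
rem Mount the FileAssets volume in the empty _Master folder (replace the GUID)
mountvol S:\Workgroup_Databases\AvidWG\_Master \\?\Volume{00000000-0000-0000-0000-000000000000}\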
Task 7: Create Cluster Dependencies for the New Disk
This topic provides instructions for creating dependencies for the new disk resource, using the
Failover Cluster Manager. You bring the cluster online as part of this task.
c
This task must be performed on the online node.
To create cluster dependencies for the new disk resource:
1. On the online node, open Failover Cluster Manager, if it is not already open.
2. In the navigation pane, click Roles.
3. In the Avid Workgroup Server section, right-click the Avid Workgroup File Assets disk and
select Properties.
The Properties dialog box opens.
4. Click the Dependencies tab, click in the Resource column, click the drop-down arrow, and select
the Avid Workgroup Disk.
5. Click Apply, then click OK.
The dependency is created.
6. In the Avid Workgroup Server section, right-click File Server and select Properties.
The Properties dialog box opens.
7. Click the Dependencies tab, click in the AND/OR column to add AND. Then click in the
Resource column, click the drop-down arrow, and select Avid Workgroup File Assets.
8. Click Apply, then click OK.
9. Now bring the cluster back online. Select Avid Workgroup Server, and in the Actions list, click
Start Role.
At the end of the process, all resources are online.
10. Close the Failover Cluster Manager.
The File Assets volume configuration is complete.
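The same dependencies can be created with cluster.exe, in the same way the guide uses it earlier. This sketch assumes you named the new disk resource Avid Workgroup File Assets; substitute the virtual host name of the Interplay Engine service for ENGINESERVER.
rem Make the new file assets disk depend on the Avid Workgroup Disk
cluster res "Avid Workgroup File Assets" /adddep:"Avid Workgroup Disk"
rem Make the File Server resource depend on the new disk as well
cluster res "File Server (\\ENGINESERVER)" /adddep:"Avid Workgroup File Assets"
rem Bring the role back online
cluster group "Avid Workgroup Server" /online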
Index
A
Active Directory domain
adding cluster servers 51
Antivirus software
running on a failover cluster 15
Apache web server
on failover cluster 75
AS3000 server
slot locations (failover cluster) 16
ATTO card
setting link speed 34
Avid
online support 8
training services 9
Avid ISIS
failover cluster configurations 12
failover cluster connections for dual-connected
configuration 20
failover cluster connections for redundant-switch
configuration 17
Avid Unity MediaNetwork
failover cluster configuration 12
B
Binding order networks
configuring 42
C
Central Configuration Server (CCS)
changing for failover cluster 107
specifying for failover cluster 85
Cluster
configuring 57
overview 10
See also Failover cluster
specifying name 57
Cluster disks
renaming 65
Cluster group
partition 44
Cluster installation
updating 104
Cluster installation and administration account
described 27
Cluster networks
renaming in Failover Cluster Manager 62
Cluster service
defined 24
Cluster Service account
Interplay Engine installation 86
specify name 57
Create Database view 101
D
Database
creating 101
Database folder
default location (failover cluster) 83
Dual-connected cluster configuration 12
E
Email notification
setting for failover cluster 87
F
Failover cluster
adding second IP address in Failover Cluster Manager
67
Avid ISIS dual-connected configuration 20
Avid ISIS redundant-switch configuration 17
before installation 27
configurations 12
hardware and software requirements 15
installation overview 26
system components 11
system overview 10
Failover Clustering feature
adding 51
H
Hardware
requirements for failover cluster system 15
Heartbeat connection
configuring 40
HP MSA 2040
Command Line Interface (CLI) 24
installing 16
Storage Management Utility (SMU) 23
I
Importing
license 102
Infortrend shared-storage RAID array
supported models 16
Installation (failover cluster)
testing 72
Installing
Interplay Engine (failover cluster) 79
Interplay Access
default folders in 101
Interplay Engine
Central Configuration Server, specifying for failover
cluster 85
cluster information for installation 80
default database location for failover cluster 83
enabling email notifications 87
installing on first node 75
preparation for installing on first node 76
Server Execution User, specifying for failover cluster
86
share name for failover cluster 84
specify engine name 81
specifying server cache 87
uninstalling 105
Interplay Portal
viewing 9
IP addresses (failover cluster)
private network adapter 40
public network adapter 44
required 30
L
License requirements
failover cluster system 15
Licenses
importing 102
permanent 102
N
Network connections
naming for failover cluster 37
Network interface
renaming LAN for failover cluster 37
Network names
examples for failover cluster 30
Node
defined 24
name examples 30
O
Online resource
defined 24
Online support 8
P
Permanent license 102
Port
for Apache web server 75
Private network adapter
configuring 40
Public Network
for failover cluster 80
Public network adapter
configuring 44
Q
Quorum disk
configuring 57
Quorum resource
defined 24
R
RAID array
configuring for failover cluster 44
Redundant-switch cluster configuration 12
Registry
editing while offline 107
Resource group
connecting to 107
defined 24
services 107
Resources
defined 24
renaming 107
Rolling upgrade (failover cluster) 104
S
Server cache
Interplay Engine cluster installation 87
Server Execution User
changing 107
described 27
specifying for failover cluster 86
Server Failover
overview 10
See also Failover cluster
Service name
examples for failover cluster 30
Services
dependencies 107
Shared drive
bringing online 76
configuring for failover cluster 44
specifying for Interplay Engine 80
Slot locations
AS3000 server (failover cluster) 16
Software
requirements for failover cluster system 15
Subnet Mask 80
T
Training services 9
Troubleshooting 8
server failover 107
U
Uninstalling
Interplay Engine (failover cluster) 105
Updating
cluster installation 104
V
Virtual IP address
for Interplay Engine (failover cluster) 80
W
Web servers
disabling 75
Windows server settings
changing before installation 36