Installation Guide - Fujitsu Technology Solutions

ServerView Resource Orchestrator
Virtual Edition V3.0.0
Installation Guide
Windows/Linux
J2X1-7603-01ENZ0(05)
April 2012
Preface
Purpose
This manual explains how to install ServerView Resource Orchestrator (hereinafter Resource Orchestrator).
Target Readers
This manual is written for people who will install Resource Orchestrator.
It is strongly recommended that you read the "ServerView Resource Orchestrator Virtual Edition V3.0.0 Setup Guide" and the Software
Release Guide before using this manual.
When setting up systems, it is assumed that readers have the basic knowledge required to configure the servers, storage, and network
devices to be installed.
Organization
This manual is composed as follows:
Title
Description
Chapter 1 Operational Environment
Explains the operational environment of Resource Orchestrator.
Chapter 2 Installation
Explains how to install Resource Orchestrator.
Chapter 3 Uninstallation
Explains how to uninstall Resource Orchestrator.
Chapter 4 Upgrading from Earlier Versions
Explains how to upgrade from earlier versions of Resource Coordinator.
Appendix A Advisory Notes for Environments with Systemwalker Centric Manager or ETERNUS SF Storage Cruiser
Explains advisory notes regarding use of Resource Orchestrator with Systemwalker Centric Manager or ETERNUS SF Storage Cruiser.
Appendix B Manager Cluster Operation Settings and Deletion
Explains the settings necessary when using Resource Orchestrator on cluster systems, and the method for deleting Resource Orchestrator from cluster systems.
Glossary
Explains the terms used in this manual. Please refer to it when necessary.
Notational Conventions
The notation in this manual conforms to the following conventions.
- When the functions available differ depending on the basic software (OS) used, the relevant sections are indicated as follows:
[Windows]
Sections related to Windows (When not using Hyper-V)
[Linux]
Sections related to Linux
[Red Hat Enterprise Linux]
Sections related to Red Hat Enterprise Linux
[Solaris]
Sections related to Solaris
[VMware]
Sections related to VMware
[Hyper-V]
Sections related to Hyper-V
[Xen]
Sections related to Xen
[KVM]
Sections related to RHEL-KVM
[Solaris Containers]
Sections related to Solaris containers
[Windows/Hyper-V]
Sections related to Windows and Hyper-V
[Windows/Linux]
Sections related to Windows and Linux
[Linux/VMware]
Sections related to Linux and VMware
[Linux/Xen]
Sections related to Linux and Xen
[Xen/KVM]
Sections related to Xen and RHEL-KVM
[Linux/Solaris/VMware]
Sections related to Linux, Solaris, and VMware
[Linux/VMware/Xen]
Sections related to Linux, VMware, and Xen
[Linux/Xen/KVM]
Sections related to Linux, Xen, and RHEL-KVM
[VMware/Hyper-V/Xen]
Sections related to VMware, Hyper-V, and Xen
[Linux/Solaris/VMware/Xen]
Sections related to Linux, Solaris, VMware, and Xen
[Linux/VMware/Xen/KVM]
Sections related to Linux, VMware, Xen, and RHEL-KVM
[VMware/Hyper-V/Xen/KVM]
Sections related to VMware, Hyper-V, Xen, and RHEL-KVM
[Linux/Solaris/VMware/Xen/KVM]
Sections related to Linux, Solaris, VMware, Xen, and RHEL-KVM
[VM host]
Sections related to VMware, Windows Server 2008 with Hyper-V enabled,
Xen, RHEL-KVM, and Solaris containers
- Unless specified otherwise, the blade servers mentioned in this manual refer to PRIMERGY BX servers.
- Oracle Solaris may also be indicated as Solaris, Solaris Operating System, or Solaris OS.
- References and character strings or values requiring emphasis are indicated using double quotes ( " ).
- Window names, dialog names, menu names, and tab names are shown enclosed by brackets ( [ ] ).
- Button names are shown enclosed by angle brackets (< >) or square brackets ([ ]).
- The order of selecting menus is indicated using [ ]-[ ].
- Text to be entered by the user is indicated using bold text.
- Variables are indicated using italic text and underscores.
- The ellipses ("...") in menu names, which indicate that a settings or operation window will be displayed, are not shown.
Menus in the ROR console
Operations on the ROR console can be performed using either the menu bar or pop-up menus.
By convention, procedures described in this manual only refer to pop-up menus.
Documentation Road Map
The following manuals are provided with Resource Orchestrator. Please refer to them when necessary:
Manual Name
Abbreviated Form
Purpose
ServerView Resource Orchestrator Virtual
Edition V3.0.0 Setup Guide
Setup Guide VE
Please read this first. Read this when you want information about the
purposes and uses of basic functions, and how to install Resource
Orchestrator.
ServerView Resource Orchestrator Virtual
Edition V3.0.0 Installation Guide
Installation Guide VE
Read this when you want information about how
to install Resource Orchestrator.
ServerView Resource Orchestrator Virtual
Edition V3.0.0 Operation Guide
Operation Guide VE
Read this when you want information about how
to operate systems that you have configured.
ServerView Resource Orchestrator Virtual
Edition V3.0.0 User's Guide
User's Guide VE
Read this when you want information about how
to operate the GUI.
ServerView Resource Orchestrator Virtual
Edition V3.0.0 Command Reference
Command Reference
Read this when you want information about how
to use commands.
ServerView Resource Orchestrator Virtual
Edition V3.0.0 Messages
Messages VE
Read this when you want detailed information
about the corrective actions for displayed
messages.
Related Documentation
Please refer to these manuals when necessary.
- Systemwalker Resource Coordinator Virtual server Edition Installation Guide
- Systemwalker Resource Coordinator Virtual server Edition Setup Guide
- ServerView Resource Orchestrator Installation Guide
- ServerView Resource Orchestrator Setup Guide
- ServerView Resource Orchestrator Operation Guide
Abbreviations
The following abbreviations are used in this manual:
Abbreviation
Products
Windows
Microsoft(R) Windows Server(R) 2008 Standard
Microsoft(R) Windows Server(R) 2008 Enterprise
Microsoft(R) Windows Server(R) 2008 R2 Standard
Microsoft(R) Windows Server(R) 2008 R2 Enterprise
Microsoft(R) Windows Server(R) 2008 R2 Datacenter
Microsoft(R) Windows Server(R) 2003 R2, Standard Edition
Microsoft(R) Windows Server(R) 2003 R2, Enterprise Edition
Microsoft(R) Windows Server(R) 2003 R2, Standard x64 Edition
Microsoft(R) Windows Server(R) 2003 R2, Enterprise x64 Edition
Windows(R) 7 Professional
Windows(R) 7 Ultimate
Windows Vista(R) Business
Windows Vista(R) Enterprise
Windows Vista(R) Ultimate
Microsoft(R) Windows(R) XP Professional operating system
Windows Server 2008
Microsoft(R) Windows Server(R) 2008 Standard
Microsoft(R) Windows Server(R) 2008 Enterprise
Microsoft(R) Windows Server(R) 2008 R2 Standard
Microsoft(R) Windows Server(R) 2008 R2 Enterprise
Microsoft(R) Windows Server(R) 2008 R2 Datacenter
Windows 2008 x86 Edition
Microsoft(R) Windows Server(R) 2008 Standard (x86)
Microsoft(R) Windows Server(R) 2008 Enterprise (x86)
Windows 2008 x64 Edition
Microsoft(R) Windows Server(R) 2008 Standard (x64)
Microsoft(R) Windows Server(R) 2008 Enterprise (x64)
Windows Server 2003
Microsoft(R) Windows Server(R) 2003 R2, Standard Edition
Microsoft(R) Windows Server(R) 2003 R2, Enterprise Edition
Microsoft(R) Windows Server(R) 2003 R2, Standard x64 Edition
Microsoft(R) Windows Server(R) 2003 R2, Enterprise x64 Edition
Windows 2003 x64 Edition
Microsoft(R) Windows Server(R) 2003 R2, Standard x64 Edition
Microsoft(R) Windows Server(R) 2003 R2, Enterprise x64 Edition
Windows 7
Windows(R) 7 Professional
Windows(R) 7 Ultimate
Windows Vista
Windows Vista(R) Business
Windows Vista(R) Enterprise
Windows Vista(R) Ultimate
Windows XP
Microsoft(R) Windows(R) XP Professional operating system
Windows PE
Microsoft(R) Windows(R) Preinstallation Environment
Linux
Red Hat(R) Enterprise Linux(R) AS (v.4 for x86)
Red Hat(R) Enterprise Linux(R) ES (v.4 for x86)
Red Hat(R) Enterprise Linux(R) AS (v.4 for EM64T)
Red Hat(R) Enterprise Linux(R) ES (v.4 for EM64T)
Red Hat(R) Enterprise Linux(R) AS (4.5 for x86)
Red Hat(R) Enterprise Linux(R) ES (4.5 for x86)
Red Hat(R) Enterprise Linux(R) AS (4.5 for EM64T)
Red Hat(R) Enterprise Linux(R) ES (4.5 for EM64T)
Red Hat(R) Enterprise Linux(R) AS (4.6 for x86)
Red Hat(R) Enterprise Linux(R) ES (4.6 for x86)
Red Hat(R) Enterprise Linux(R) AS (4.6 for EM64T)
Red Hat(R) Enterprise Linux(R) ES (4.6 for EM64T)
Red Hat(R) Enterprise Linux(R) AS (4.7 for x86)
Red Hat(R) Enterprise Linux(R) ES (4.7 for x86)
Red Hat(R) Enterprise Linux(R) AS (4.7 for EM64T)
Red Hat(R) Enterprise Linux(R) ES (4.7 for EM64T)
Red Hat(R) Enterprise Linux(R) AS (4.8 for x86)
Red Hat(R) Enterprise Linux(R) ES (4.8 for x86)
Red Hat(R) Enterprise Linux(R) AS (4.8 for EM64T)
Red Hat(R) Enterprise Linux(R) ES (4.8 for EM64T)
Red Hat(R) Enterprise Linux(R) 5 (for x86)
Red Hat(R) Enterprise Linux(R) 5 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.1 (for x86)
Red Hat(R) Enterprise Linux(R) 5.1 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.2 (for x86)
Red Hat(R) Enterprise Linux(R) 5.2 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.3 (for x86)
Red Hat(R) Enterprise Linux(R) 5.3 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.4 (for x86)
Red Hat(R) Enterprise Linux(R) 5.4 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.5 (for x86)
Red Hat(R) Enterprise Linux(R) 5.5 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.6 (for x86)
Red Hat(R) Enterprise Linux(R) 5.6 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.7 (for x86)
Red Hat(R) Enterprise Linux(R) 5.7 (for Intel64)
Red Hat(R) Enterprise Linux(R) 6 (for x86)
Red Hat(R) Enterprise Linux(R) 6 (for Intel64)
Red Hat(R) Enterprise Linux(R) 6.1 (for x86)
Red Hat(R) Enterprise Linux(R) 6.1 (for Intel64)
Red Hat(R) Enterprise Linux(R) 6.2 (for x86)
Red Hat(R) Enterprise Linux(R) 6.2 (for Intel64)
SUSE(R) Linux Enterprise Server 10 Service Pack2 for x86
SUSE(R) Linux Enterprise Server 10 Service Pack2 for EM64T
SUSE(R) Linux Enterprise Server 10 Service Pack3 for x86
SUSE(R) Linux Enterprise Server 10 Service Pack3 for EM64T
SUSE(R) Linux Enterprise Server 11 for x86
SUSE(R) Linux Enterprise Server 11 for EM64T
SUSE(R) Linux Enterprise Server 11 Service Pack1 for x86
SUSE(R) Linux Enterprise Server 11 Service Pack1 for EM64T
Oracle Enterprise Linux Release 5 Update 4 for x86 (32 Bit)
Oracle Enterprise Linux Release 5 Update 4 for x86_64 (64 Bit)
Oracle Enterprise Linux Release 5 Update 5 for x86 (32 Bit)
Oracle Enterprise Linux Release 5 Update 5 for x86_64 (64 Bit)
Red Hat Enterprise Linux
Red Hat(R) Enterprise Linux(R) AS (v.4 for x86)
Red Hat(R) Enterprise Linux(R) ES (v.4 for x86)
Red Hat(R) Enterprise Linux(R) AS (v.4 for EM64T)
Red Hat(R) Enterprise Linux(R) ES (v.4 for EM64T)
Red Hat(R) Enterprise Linux(R) AS (4.5 for x86)
Red Hat(R) Enterprise Linux(R) ES (4.5 for x86)
Red Hat(R) Enterprise Linux(R) AS (4.5 for EM64T)
Red Hat(R) Enterprise Linux(R) ES (4.5 for EM64T)
Red Hat(R) Enterprise Linux(R) AS (4.6 for x86)
Red Hat(R) Enterprise Linux(R) ES (4.6 for x86)
Red Hat(R) Enterprise Linux(R) AS (4.6 for EM64T)
Red Hat(R) Enterprise Linux(R) ES (4.6 for EM64T)
Red Hat(R) Enterprise Linux(R) AS (4.7 for x86)
Red Hat(R) Enterprise Linux(R) ES (4.7 for x86)
Red Hat(R) Enterprise Linux(R) AS (4.7 for EM64T)
Red Hat(R) Enterprise Linux(R) ES (4.7 for EM64T)
Red Hat(R) Enterprise Linux(R) AS (4.8 for x86)
Red Hat(R) Enterprise Linux(R) ES (4.8 for x86)
Red Hat(R) Enterprise Linux(R) AS (4.8 for EM64T)
Red Hat(R) Enterprise Linux(R) ES (4.8 for EM64T)
Red Hat(R) Enterprise Linux(R) 5 (for x86)
Red Hat(R) Enterprise Linux(R) 5 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.1 (for x86)
Red Hat(R) Enterprise Linux(R) 5.1 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.2 (for x86)
Red Hat(R) Enterprise Linux(R) 5.2 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.3 (for x86)
Red Hat(R) Enterprise Linux(R) 5.3 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.4 (for x86)
Red Hat(R) Enterprise Linux(R) 5.4 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.5 (for x86)
Red Hat(R) Enterprise Linux(R) 5.5 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.6 (for x86)
Red Hat(R) Enterprise Linux(R) 5.6 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.7 (for x86)
Red Hat(R) Enterprise Linux(R) 5.7 (for Intel64)
Red Hat(R) Enterprise Linux(R) 6 (for x86)
Red Hat(R) Enterprise Linux(R) 6 (for Intel64)
Red Hat(R) Enterprise Linux(R) 6.1 (for x86)
Red Hat(R) Enterprise Linux(R) 6.1 (for Intel64)
Red Hat(R) Enterprise Linux(R) 6.2 (for x86)
Red Hat(R) Enterprise Linux(R) 6.2 (for Intel64)
Red Hat Enterprise Linux 5
Red Hat(R) Enterprise Linux(R) 5 (for x86)
Red Hat(R) Enterprise Linux(R) 5 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.1 (for x86)
Red Hat(R) Enterprise Linux(R) 5.1 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.2 (for x86)
Red Hat(R) Enterprise Linux(R) 5.2 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.3 (for x86)
Red Hat(R) Enterprise Linux(R) 5.3 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.4 (for x86)
Red Hat(R) Enterprise Linux(R) 5.4 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.5 (for x86)
Red Hat(R) Enterprise Linux(R) 5.5 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.6 (for x86)
Red Hat(R) Enterprise Linux(R) 5.6 (for Intel64)
Red Hat(R) Enterprise Linux(R) 5.7 (for x86)
Red Hat(R) Enterprise Linux(R) 5.7 (for Intel64)
Red Hat Enterprise Linux 6
Red Hat(R) Enterprise Linux(R) 6 (for x86)
Red Hat(R) Enterprise Linux(R) 6 (for Intel64)
Red Hat(R) Enterprise Linux(R) 6.1 (for x86)
Red Hat(R) Enterprise Linux(R) 6.1 (for Intel64)
Red Hat(R) Enterprise Linux(R) 6.2 (for x86)
Red Hat(R) Enterprise Linux(R) 6.2 (for Intel64)
RHEL-KVM
Red Hat(R) Enterprise Linux(R) 6.1 (for x86) Virtual Machine Function
Red Hat(R) Enterprise Linux(R) 6.1 (for Intel64) Virtual Machine Function
Red Hat(R) Enterprise Linux(R) 6.2 (for x86) Virtual Machine Function
Red Hat(R) Enterprise Linux(R) 6.2 (for Intel64) Virtual Machine Function
Xen
Citrix XenServer(TM) 5.5
Citrix Essentials(TM) for XenServer 5.5, Enterprise Edition
Red Hat(R) Enterprise Linux(R) 5.3 (for x86) Linux Virtual Machine Function
Red Hat(R) Enterprise Linux(R) 5.3 (for Intel64) Linux Virtual Machine Function
Red Hat(R) Enterprise Linux(R) 5.4 (for x86) Linux Virtual Machine Function
Red Hat(R) Enterprise Linux(R) 5.4 (for Intel64) Linux Virtual Machine Function
Red Hat(R) Enterprise Linux(R) 5.5 (for x86) Linux Virtual Machine Function
Red Hat(R) Enterprise Linux(R) 5.5 (for Intel64) Linux Virtual Machine Function
Red Hat(R) Enterprise Linux(R) 5.6 (for x86) Linux Virtual Machine Function
Red Hat(R) Enterprise Linux(R) 5.6 (for Intel64) Linux Virtual Machine Function
Red Hat(R) Enterprise Linux(R) 5.7 (for x86) Linux Virtual Machine Function
Red Hat(R) Enterprise Linux(R) 5.7 (for Intel64) Linux Virtual Machine Function
DOS
Microsoft(R) MS-DOS(R) operating system, DR DOS(R)
SUSE Linux Enterprise Server
SUSE(R) Linux Enterprise Server 10 Service Pack2 for x86
SUSE(R) Linux Enterprise Server 10 Service Pack2 for EM64T
SUSE(R) Linux Enterprise Server 10 Service Pack3 for x86
SUSE(R) Linux Enterprise Server 10 Service Pack3 for EM64T
SUSE(R) Linux Enterprise Server 11 for x86
SUSE(R) Linux Enterprise Server 11 for EM64T
SUSE(R) Linux Enterprise Server 11 Service Pack1 for x86
SUSE(R) Linux Enterprise Server 11 Service Pack1 for EM64T
Oracle Enterprise Linux
Oracle Enterprise Linux Release 5 Update 4 for x86 (32 Bit)
Oracle Enterprise Linux Release 5 Update 4 for x86_64 (64 Bit)
Oracle Enterprise Linux Release 5 Update 5 for x86 (32 Bit)
Oracle Enterprise Linux Release 5 Update 5 for x86_64 (64 Bit)
Solaris
Solaris(TM) 10 Operating System
VMware
VMware(R) Infrastructure 3
VMware vSphere(R) 4
VMware vSphere(R) 4.1
VMware vSphere(R) 5
VIOM
ServerView Virtual-IO Manager
ServerView Agent
ServerView SNMP Agents for MS Windows (32bit-64bit)
ServerView Agents Linux
ServerView Agents VMware for VMware ESX Server
Excel
Microsoft(R) Office Excel(R) 2010
Microsoft(R) Office Excel(R) 2007
Microsoft(R) Office Excel(R) 2003
Excel 2010
Microsoft(R) Office Excel(R) 2010
Excel 2007
Microsoft(R) Office Excel(R) 2007
Excel 2003
Microsoft(R) Office Excel(R) 2003
ROR VE
ServerView Resource Orchestrator Virtual Edition
ROR CE
ServerView Resource Orchestrator Cloud Edition
Resource Coordinator
Systemwalker Resource Coordinator
Resource Coordinator VE
ServerView Resource Coordinator VE
Systemwalker Resource Coordinator Virtual server Edition
Resource Orchestrator
ServerView Resource Orchestrator
Export Administration Regulation Declaration
Documents produced by FUJITSU may contain technology controlled under the Foreign Exchange and Foreign Trade Control Law of
Japan. Documents which contain such technology should not be exported from Japan or transferred to non-residents of Japan without first
obtaining authorization from the Ministry of Economy, Trade and Industry of Japan in accordance with the above law.
Trademark Information
- BMC, BMC Software, and the BMC Software logo are trademarks or registered trademarks of BMC Software, Inc. in the United
States and other countries.
- Citrix(R), Citrix XenServer(TM), Citrix Essentials(TM), and Citrix StorageLink(TM) are trademarks of Citrix Systems, Inc. and/or
one of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries.
- Dell is a registered trademark of Dell Computer Corp.
- HP is a registered trademark of Hewlett-Packard Company.
- IBM is a registered trademark or trademark of International Business Machines Corporation in the U.S.
- Linux is a trademark or registered trademark of Linus Torvalds in the United States and other countries.
- Microsoft, Windows, MS, MS-DOS, Windows XP, Windows Server, Windows Vista, Windows 7, Excel, and Internet Explorer are
either registered trademarks or trademarks of Microsoft Corporation in the United States and other countries.
- Oracle and Java are registered trademarks of Oracle and/or its affiliates in the United States and other countries.
- Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
- Red Hat, RPM and all Red Hat-based trademarks and logos are trademarks or registered trademarks of Red Hat, Inc. in the United
States and other countries.
- Spectrum is a trademark or registered trademark of Computer Associates International, Inc. and/or its subsidiaries.
- SUSE is a registered trademark of SUSE LINUX AG, a Novell business.
- VMware, the VMware "boxes" logo and design, Virtual SMP, and VMotion are registered trademarks or trademarks of VMware, Inc.
in the United States and/or other jurisdictions.
- ServerView and Systemwalker are registered trademarks of FUJITSU LIMITED.
- All other brand and product names are trademarks or registered trademarks of their respective owners.
Notices
- The contents of this manual shall not be reproduced without express written permission from FUJITSU LIMITED.
- The contents of this manual are subject to change without notice.
Month/Year Issued, Edition
Manual Code
November 2011, First Edition
J2X1-7603-01ENZ0(00)
December 2011, 1.1
J2X1-7603-01ENZ0(01)
January 2012, 1.2
J2X1-7603-01ENZ0(02)
February 2012, 1.3
J2X1-7603-01ENZ0(03)
March 2012, 1.4
J2X1-7603-01ENZ0(04)
April 2012, 1.5
J2X1-7603-01ENZ0(05)
Copyright FUJITSU LIMITED 2010-2012
Contents
Chapter 1 Operational Environment.........................................................................................................................................1
Chapter 2 Installation................................................................................................................................................................2
2.1 Manager Installation............................................................................................................................................................................2
2.1.1 Preparations..................................................................................................................................................................................2
2.1.1.1 Software Preparation and Checks..........................................................................................................................................2
2.1.1.2 Collecting and Checking Required Information....................................................................................................................7
2.1.1.3 Configuration Parameter Checks.........................................................................................................................................10
2.1.2 Installation [Windows]...............................................................................................................................................................11
2.1.3 Installation [Linux].....................................................................................................................................................................12
2.1.4 License Setup..............................................................................................................................................................................13
2.2 Agent Installation...............................................................................................................................................................................13
2.2.1 Preparations................................................................................................................................................................................13
2.2.1.1 Software Preparation and Checks........................................................................................................................................13
2.2.1.2 Collecting and Checking Required Information..................................................................................................................18
2.2.2 Installation [Windows/Hyper-V]................................................................................................................................................19
2.2.3 Installation [Linux/VMware/Xen/KVM/Oracle VM]................................................................................................................22
2.2.4 Installation [Solaris]....................................................................................................................................................................23
2.3 HBA address rename setup service Installation................................................................................................................................24
2.3.1 Preparations................................................................................................................................................................................24
2.3.1.1 Software Preparation and Checks........................................................................................................................................24
2.3.1.2 Collecting and Checking Required Information..................................................................................................................25
2.3.2 Installation [Windows]...............................................................................................................................................................25
2.3.3 Installation [Linux].....................................................................................................................................................................26
Chapter 3 Uninstallation.........................................................................................................................................................28
3.1 Manager Uninstallation......................................................................................................................................................................28
3.1.1 Preparations................................................................................................................................................................................28
3.1.2 Uninstallation [Windows]...........................................................................................................................................................29
3.1.3 Uninstallation [Linux].................................................................................................................................................................30
3.2 Agent Uninstallation..........................................................................................................................................................................30
3.2.1 Uninstallation [Windows/Hyper-V]...........................................................................................................................................30
3.2.2 Uninstallation [Linux/VMware/Xen/KVM/Oracle VM]............................................................................................................31
3.2.3 Uninstallation [Solaris]...............................................................................................................................................................32
3.3 HBA address rename setup service Uninstallation............................................................................................................................33
3.3.1 Uninstallation [Windows]...........................................................................................................................................................33
3.3.2 Uninstallation [Linux].................................................................................................................................................................34
3.4 Uninstall (middleware) Uninstallation...............................................................................................................................................35
Chapter 4 Upgrading from Earlier Versions............................................................................................................................37
4.1 Overview............................................................................................................................................................................................37
4.2 Manager.............................................................................................................................................................................................38
4.3 Agent..................................................................................................................................................................................................45
4.4 Client..................................................................................................................................................................................................50
4.5 HBA address rename setup service...................................................................................................................................................50
Appendix A Advisory Notes for Environments with Systemwalker Centric Manager or ETERNUS SF Storage Cruiser.......52
Appendix B Manager Cluster Operation Settings and Deletion..............................................................................................54
B.1 What are Cluster Systems.................................................................................................................................................................54
B.2 Installation.........................................................................................................................................................................................55
B.2.1 Preparations................................................................................................................................................................................55
B.2.2 Installation..................................................................................................................................................................................57
B.3 Configuration....................................................................................................................................................................................58
B.3.1 Configuration [Windows]..........................................................................................................................................................58
B.3.2 Settings [Linux]..........................................................................................................................................................................74
B.4 Releasing Configuration....................................................................................................................................................................82
B.4.1 Releasing Configuration [Windows]..........................................................................................................................................82
B.4.2 Releasing Configuration [Linux]...............................................................................................................................................84
B.5 Advisory Notes..................................................................................................................................................................................89
Glossary................................................................................................................................................................................. 91
Chapter 1 Operational Environment
For the operational environment of Resource Orchestrator, refer to "1.4 Software Environment" and "1.5 Hardware Environment" of the
"Setup Guide VE".
Chapter 2 Installation
This chapter explains the installation of ServerView Resource Orchestrator.
2.1 Manager Installation
This section explains installation of managers.
2.1.1 Preparations
This section explains the preparations and checks required before commencing installation.
- Host Name Checks
Refer to "Host Name Checks".
- System Time Checks
Refer to "System Time Checks".
- Exclusive Software Checks
Refer to "Exclusive Software Checks".
- Required Software Preparation and Checks
Refer to "Required Software Preparation and Checks".
- Required information collection and checks
Refer to "2.1.1.2 Collecting and Checking Required Information".
- Configuration Parameter Checks
Refer to "2.1.1.3 Configuration Parameter Checks".
2.1.1.1 Software Preparation and Checks
This section explains the software preparations and checks required before commencing the installation of Resource Orchestrator.
Host Name Checks
It is necessary to set the host name (FQDN) for the admin server to operate normally. Describe the host name in the hosts file, using
256 characters or less. In the hosts file, for the IP address of the admin server, describe the host name (FQDN) first, followed by the
computer name.
hosts File
[Windows]
System_drive\Windows\System32\drivers\etc\hosts
[Linux]
/etc/hosts
Note
For the admin client, either configure the hosts file so that access to the admin server is possible using the host name (FQDN), or
configure name resolution using a DNS server.
When registering local hosts in the hosts file, take one of the following actions.
- When setting the local host name to "127.0.0.1", ensure that an IP address accessible from remote hosts is described first.
- Do not set the local host name to "127.0.0.1".
Example
When configuring the admin server with the IP address "10.10.10.10", the host name (FQDN) "remote1.example.com", and
the computer name "remote1"
10.10.10.10 remote1.example.com remote1
127.0.0.1 remote1.example.com localhost.localdomain localhost
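The ordering rule above can be verified mechanically: parse the hosts file and check which address appears first for the FQDN. The sketch below runs the check against a scratch copy of the example (the path /tmp/hosts.sample and the name remote1.example.com are illustrative; on a real admin server, point the command at the hosts file for your OS):

```shell
# Sketch: check that the admin server FQDN is listed with a remotely
# reachable address before any loopback entry. A scratch copy of the
# example hosts file is used; the path and names are illustrative.
cat > /tmp/hosts.sample <<'EOF'
10.10.10.10 remote1.example.com remote1
127.0.0.1 remote1.example.com localhost.localdomain localhost
EOF

# First IP on a non-comment line that lists the FQDN
first_ip=$(awk '!/^#/ { for (i = 2; i <= NF; i++)
                if ($i == "remote1.example.com") { print $1; exit } }' /tmp/hosts.sample)

if [ -n "$first_ip" ] && [ "$first_ip" = "${first_ip#127.}" ]; then
    echo "OK: remote1.example.com -> $first_ip"
else
    echo "NG: loopback or missing entry for remote1.example.com"
fi
```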
System Time Checks
Set the same system time for the admin server and managed servers.
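One way to confirm the clocks match is to compare epoch timestamps from both servers. The sketch below stubs the managed server's timestamp for illustration; in practice it would come from, for example, an SSH call to the managed server:

```shell
# Sketch: measure clock drift between the admin server and a managed
# server. The remote timestamp is stubbed here; on a real system obtain
# it over SSH or via your management tooling.
local_epoch=$(date +%s)
remote_epoch=$local_epoch            # stub for: ssh managed-server date +%s
drift=$((local_epoch - remote_epoch))
[ "$drift" -lt 0 ] && drift=$((-drift))
echo "clock drift: ${drift}s"
```

Using an NTP source common to the admin server and managed servers keeps the drift near zero without manual checks.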
Exclusive Software Checks
Before installing Resource Orchestrator, check that the software listed in "1.4.2.3 Exclusive Software" of the "Setup Guide VE" and the
manager of Resource Orchestrator have not been installed on the system.
Use the following procedure to check that exclusive software has not been installed.
[Windows]
1. Open "Add or Remove Programs" on the Windows Control Panel.
The [Add or Remove Programs] window will be displayed.
2. Check that none of the software listed in "1.4.2.3 Exclusive Software" of the "Setup Guide VE" or the following software are
displayed.
- "ServerView Resource Orchestrator Manager"
3. If the names of exclusive software have been displayed, uninstall them according to the procedure described in the relevant manual
before proceeding.
If managers of an earlier version of Resource Orchestrator have been installed, they can be upgraded. Refer to "4.2 Manager".
When reinstalling a manager on a system on which the same version of Resource Orchestrator has been installed, perform
uninstallation referring to "3.1 Manager Uninstallation" and then perform the reinstallation.
Information
For Windows Server 2008, select "Programs and Features" from the Windows Control Panel.
[Linux]
1. Check that none of the software listed in "1.4.2.3 Exclusive Software" in the "Setup Guide VE" or the following software are
displayed.
Execute the following command and check if Resource Orchestrator Manager has been installed.
# rpm -q FJSVrcvmr <RETURN>
2. If the names of exclusive software have been displayed, uninstall them according to the procedure described in the relevant manual
before proceeding.
If managers of an earlier version of Resource Orchestrator have been installed, they can be upgraded. Refer to "4.2 Manager".
When reinstalling a manager on a system on which the same version of Resource Orchestrator has been installed, perform
uninstallation referring to "3.1 Manager Uninstallation" and then perform the reinstallation.
Note
- When uninstalling exclusive software, there are cases where other system administrators might have installed the software, so check
that deleting the software causes no problems before actually doing so.
- With the standard settings of Red Hat Enterprise Linux 5 or later, when DVD-ROMs are mounted automatically, execution of programs
on the DVD-ROM cannot be performed. Release the automatic mount settings and perform mounting manually, or start installation
after copying the contents of the DVD-ROM to the hard disk.
When copying the contents of the DVD-ROM, replace "DVD-ROM_mount_point" with the used directory in the procedures in this
manual.
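The copy-to-disk workaround can be sketched as follows; the device name /dev/hdc and the directories are examples, and a mock tree stands in for the real DVD contents here:

```shell
# Sketch: copy the DVD-ROM contents to the hard disk so the installer can
# run despite a "noexec" automount. The device name (/dev/hdc) and the
# directories are examples; a mock tree stands in for the real DVD.
SRC=/tmp/dvd_mock
DEST=/tmp/ror_media
mkdir -p "$SRC" && : > "$SRC/RcSetup.sh"   # stand-in for the DVD contents
# mount /dev/hdc /mnt/dvd                  # manual mount (run as root)
mkdir -p "$DEST"
cp -a "$SRC/." "$DEST/"
ls "$DEST"
```

After copying, use the copy destination wherever "DVD-ROM_mount_point" appears in the procedures.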
Required Software Preparation and Checks
Before installing Resource Orchestrator, check that the required software given in "1.4.2.2 Required Software" of the "Setup Guide VE"
has been installed. If it has not been installed, install it before continuing.
When operating managers in cluster environments, refer to "Appendix B Manager Cluster Operation Settings and Deletion" and "B.2.1
Preparations", and perform preparations and checks in advance.
Note
[Windows]
- Microsoft LAN Manager Module
Before installing Resource Orchestrator, obtain the Microsoft LAN Manager module from the following FTP site.
The Microsoft LAN Manager module can be used regardless of the CPU architecture type (x86, x64).
URL: ftp://ftp.microsoft.com/bussys/clients/msclient/dsk3-1.exe (As of February 2012)
When installing Resource Orchestrator to an environment that already has ServerView Deployment Manager installed, it is not
necessary to obtain the Microsoft LAN Manager module.
Prepare for the following depending on the architecture of the installation target.
- When installing a manager on Windows 32bit(x86)
The Microsoft LAN Manager module can be installed without extracting it in advance.
After obtaining the module, extract it to a work folder (such as C:\temp) on the system for installation.
- When installing a manager on Windows 64bit(x64)
The obtained module must be extracted using the Expand command on a computer with x86 CPU architecture.
The obtained module is for x86 CPU architecture, and cannot be extracted on computers with x64 CPU architecture.
For details on how to extract the module, refer to the following examples:
Example
When dsk3-1.exe was deployed on c:\temp
>cd /d c:\temp <RETURN>
>dsk3-1.exe <RETURN>
>Expand c:\temp\protman.do_ /r <RETURN>
>Expand c:\temp\protman.ex_ /r <RETURN>
Use the Windows 8.3 format (*1) for the folder name or the file name.
After manager installation is complete, the extracted Microsoft LAN Manager module is no longer necessary.
*1: File names are limited to 8 characters and extensions limited to 3 characters.
After extracting the following modules, deploy them to the folder set for the environment variable %SystemRoot%.
- PROTMAN.DOS
- PROTMAN.EXE
- NETBIND.COM
- Settings for ServerView Operations Manager 4.X for Windows
In order for Resource Orchestrator to operate correctly, ensure that when installing ServerView Operations Manager for Windows
you do not select "IIS (MS Internet Information Server)" for Select Web Server.
For the settings, refer to the ServerView Operations Manager 4.X for Windows manual.
- SNMP Trap Service Settings
In order for Resource Orchestrator to operate correctly, the following settings for the standard Windows SNMP trap service are
required.
- Open "Services" from "Administrative Tools" on the Windows Control Panel, and then configure the startup type of SNMP Trap
service as "Manual" or "Automatic" on the [Services] window.
- Settings for ServerView Virtual-IO Manager
When using VIOM, in order for Resource Orchestrator to operate correctly, ensure that the following settings are made when installing
ServerView Virtual-IO Manager for Windows.
- When using the I/O Virtualization Option
Clear the "Select address ranges for IO Virtualization" checkbox on the virtual I/O address range selection window.
- When not using the I/O Virtualization Option
Check the "Select address ranges for IO Virtualization" checkbox on the virtual I/O address range selection window, and then
select address ranges for the MAC address and the WWN address.
When there is another manager, select address ranges which do not conflict with those of the other manager.
For details, refer to the ServerView Virtual-IO Manager for Windows manual.
- DHCP server installation
Installation of the Windows standard DHCP Server is necessary when managed servers belonging to different subnets from the admin
server are to be managed.
Install the DHCP Server following the procedure below:
1. Add DHCP Server to the server roles.
Bind the network connection for the NIC to be used as the admin LAN.
For the details on adding and binding, refer to the manual for Windows.
2. Open "Services" from "Administrative Tools" on the Windows Control Panel, and then configure the startup type of DHCP
Server service as "Manual" on the [Services] window.
3. From the [Services] window, stop the DHCP Server service.
When the admin server is a member of a domain, perform step 4.
4. Authorize DHCP servers.
a. Open "DHCP" from "Administrative Tools" on the Windows Control Panel, and select [Action]-[Manage authorized
servers] on the [DHCP] window.
The [Manage Authorized Servers] window will be displayed.
b. Click <Authorize>.
The [Authorize DHCP Server] window will be displayed.
c. Enter the admin IP address of the admin server in "Name or IP address".
d. Click <OK>.
The [Confirm Authorization] window will be displayed.
e. Check the "Name" and "IP Address".
f. Click <OK>.
The server will be displayed in the "Authorized DHCP servers" of the [Manage Authorized Servers] window.
- ETERNUS SF Storage Cruiser
When using ESC, configure the Fibre Channel switch settings in advance.
[Linux]
- Microsoft LAN Manager Module
Before installing Resource Orchestrator, obtain the Microsoft LAN Manager module from the following FTP site.
The Microsoft LAN Manager module can be used regardless of the CPU architecture type (x86, x64).
URL: ftp://ftp.microsoft.com/bussys/clients/msclient/dsk3-1.exe (As of February 2012)
When installing Resource Orchestrator to an environment that already has ServerView Deployment Manager installed, it is not
necessary to obtain the Microsoft LAN Manager module.
The obtained module must be extracted using the Expand command on a Windows computer with x86 CPU architecture. For details
on how to extract the module, refer to the following examples:
Example
When dsk3-1.exe was deployed on c:\temp
>cd /d c:\temp <RETURN>
>dsk3-1.exe <RETURN>
>Expand c:\temp\protman.do_ /r <RETURN>
>Expand c:\temp\protman.ex_ /r <RETURN>
Use the Windows 8.3 format (*1) for the folder name or the file name.
After manager installation is complete, the extracted Microsoft LAN Manager module is no longer necessary.
*1: File names are limited to 8 characters and extensions limited to 3 characters.
After extracting the following modules, deploy them to a work folder (/tmp) on the system for installation.
- PROTMAN.DOS
- PROTMAN.EXE
- NETBIND.COM
- SNMP Trap Daemon
In order for Resource Orchestrator to operate correctly, ensure that the following setting is made in the "/etc/snmp/snmptrapd.conf"
file when installing the net-snmp package. If the file does not exist, create it and then add the setting.
disableAuthorization yes
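The setting can be added idempotently with a few shell commands. The sketch below works on a scratch copy; on a real system the target is /etc/snmp/snmptrapd.conf and editing it requires root:

```shell
# Sketch: add "disableAuthorization yes" only if it is not present yet.
# A scratch path is used here; the real file is /etc/snmp/snmptrapd.conf.
CONF=/tmp/snmptrapd.conf.sample
touch "$CONF"                                    # create the file if missing
grep -q '^disableAuthorization yes' "$CONF" ||
    echo 'disableAuthorization yes' >> "$CONF"
grep '^disableAuthorization' "$CONF"             # prints: disableAuthorization yes
```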
- ETERNUS SF Storage Cruiser
When using ESC, configure the Fibre Channel switch settings in advance.
User Account Checks
The OS user account name used for the database connection for Resource Orchestrator is fixed as "rcxdb". When applications using the
OS user account "rcxdb" exist, delete them after confirming there is no effect on them.
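A simple pre-check for the fixed account name can be sketched as follows:

```shell
# Sketch: confirm that no OS account named "rcxdb" exists yet
# (the account name is fixed by Resource Orchestrator).
if id rcxdb >/dev/null 2>&1; then
    result="NG: an 'rcxdb' account already exists; check which applications use it"
else
    result="OK: 'rcxdb' is available for Resource Orchestrator"
fi
echo "$result"
```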
Single Sign-On Preparation and Checks (when used)
To use Single Sign-On (SSO) coordination, before installing Resource Orchestrator, certificate preparation and user registration with the
registration service are required.
For details, refer to "4.6 Installing and Configuring Single Sign-On" in the "Setup Guide VE".
2.1.1.2 Collecting and Checking Required Information
Before installing Resource Orchestrator, collect required information and check the system status, then determine the information to be
specified on the installation window. The information that needs to be prepared is given below.
- Installation Folder
Decide the installation folder for Resource Orchestrator.
Note that folders on removable disks cannot be specified.
Check that there are no files or folders in the installation folder.
Check that the necessary disk space can be secured on the drive for installation.
For the amount of disk space necessary for Resource Orchestrator, refer to "1.4.2.4 Static Disk Space" and "1.4.2.5 Dynamic Disk
Space" of the "Setup Guide VE".
- Image File Storage Folder
The image file storage folder is located in the installation folder.
Check that sufficient disk space can be secured on the drive where the storage folder will be created.
For the necessary disk space, refer to "1.4.2.5 Dynamic Disk Space" of the "Setup Guide VE".
For details of how to change the image file storage folder, refer to "5.5 rcxadm imagemgr" of the "Command Reference".
- Port Number
When Resource Orchestrator is installed, the port numbers used by it will automatically be set in the services file of the system. So
usually, there is no need to pay attention to port numbers.
If the port numbers used by Resource Orchestrator are being used for other applications, a message indicating that the numbers are in
use is displayed when the installer is started, and installation will stop.
In that case, describe the entries for the following port numbers used by Resource Orchestrator in the services file using numbers not
used by other software, and then start the installer.
Example
# service_name   port_number/protocol_name
nfdomain        23455/tcp
nfagent         23458/tcp
rcxmgr          23460/tcp
rcxweb          23461/tcp
rcxtask         23462/tcp
rcxmongrel1     23463/tcp
rcxmongrel2     23464/tcp
rcxdb           23465/tcp
For details, refer to "3.1.2 Changing Port Numbers" of the "User's Guide VE".
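A conflict check against the services file can be scripted ahead of time. The sketch below scans a scratch file containing one simulated conflict; on a real system, set SERVICES=/etc/services before running it:

```shell
# Sketch: scan a services file for lines that already assign the default
# Resource Orchestrator port numbers. A scratch file with one simulated
# conflict is scanned here; set SERVICES=/etc/services on a real system.
printf 'myapp\t23461/tcp\n' > /tmp/services.sample
SERVICES=${SERVICES:-/tmp/services.sample}

conflicts=""
for port in 23455 23458 23460 23461 23462 23463 23464 23465; do
    grep -Eq "[[:space:]]${port}/tcp" "$SERVICES" && conflicts="$conflicts $port"
done
echo "ports already assigned:${conflicts:- none}"
```

Any port reported by the check should be reassigned in the services file before the installer is started.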
- Directory Service Connection Information for Single Sign-On
When using Single Sign-On coordination with ServerView Operations Manager, check the settings of the directory service to be used.
Refer to the following manual when the OpenDS that is included with ServerView Operations Manager is used.
"ServerView user management via an LDAP directory service" in the "ServerView Suite User Management in ServerView" manual
of ServerView Operations Manager
Parameters Used for Installation
The parameters used for installation are given below, organized by installer window.
[Windows]
1. [Installation Folder Selection] window
- Installation Folder
This is the folder where Resource Orchestrator is installed. (*1)
Default value: C:\Fujitsu\ROR
The installation folder can contain 14 characters or less, including the drive letter and "\".
A path starting with "\\" or a relative path cannot be specified.
It can contain alphanumeric characters, including hyphens ("-") and underscores ("_").
2. [Administrative User Creation] window
- User Account
This is the user account name to be used for logging into Resource Orchestrator as an administrative user. Input is case-sensitive.
- When using Single Sign-On
Specify the administrator account, "administrator", of ServerView Operations Manager when the OpenDS that is included with ServerView Operations Manager is used.
- When not using Single Sign-On
The name must start with an alphabetic character and can be up to 16 alphanumeric characters long, including underscores ("_"), hyphens ("-"), and periods (".").
- Password / Retype password
The password of the administrative user.
- When using Single Sign-On
The string must be composed of alphanumeric characters and symbols, and can be 8 - 64 characters long.
- When not using Single Sign-On
The string must be composed of alphanumeric characters and symbols, and can be up to 16 characters long.
3. [Admin LAN Selection] window
- Network to use for the admin LAN
This is the network to be used as the admin LAN. Select it from the list.
4. [Authentication Method Selection] window
- Authentication Method
Select one of the following options.
- "Internal Authentication"
- "ServerView Single sign-on (SSO)"
5. [Directory Server Information 1/2] window
- IP Address
This is the IP address of the directory server to be connected to.
- Port number
This is the port number of the directory server to be connected to. Check the settings of the directory server to be used.
- Use of SSL for connecting to the Directory Server
Select one of the following options.
- SSL Authentication enabled
- SSL Authentication disabled
- Directory Server Certificates
Specify the folder storing directory server CA certificates. Only server CA certificates are stored in this folder.
6. [Directory Server Information 2/2] window
- Base (DN)
This is the base name (DN) of the directory server to be connected to. Check the settings of the directory server to be used. "dc=fujitsu,dc=com" is set as the initial value when the OpenDS that is included with ServerView Operations Manager is used.
- Administrator (DN)
This is the administrator name (DN) of the directory server to be connected to. Check the settings of the directory server to be used. "cn=Directory Manager" is set when the OpenDS that is included with ServerView Operations Manager is used.
- Password
This is the password of the administrator of the directory server to be connected to. Check the settings of the directory server to be used. Refer to the following manual when the OpenDS that is included with ServerView Operations Manager is used.
"ServerView user management with OpenDS" in the "ServerView Suite User Management in ServerView" manual of ServerView Operations Manager
*1: Specify an NTFS disk.
[Linux]
1. [Administrative User Creation] window
- User Account
This is the user account name to be used for logging into Resource Orchestrator as an administrative user. Input is case-sensitive.
- When using Single Sign-On
Specify the administrator account, "administrator", of ServerView Operations Manager when the OpenDS that is included with ServerView Operations Manager is used.
- When not using Single Sign-On
The name must start with an alphabetic character and can be up to 16 alphanumeric characters long, including underscores ("_"), hyphens ("-"), and periods (".").
- Password / Retype password
The password of the administrative user.
The string must be composed of alphanumeric characters and symbols, and can be up to 16 characters long.
2. [Admin LAN Selection] window
- Network to use for the admin LAN
This is the network to be used as the admin LAN. Select it from the list.
3. [Authentication Method Selection] window
- Authentication Method
Select one of the following options.
- Internal Authentication
- ServerView Single sign-on (SSO)
4. [Directory Server Information] window
- IP Address
This is the IP address of the directory server to be connected to.
- Port number
This is the port number of the directory server to be connected to. Check the settings of the directory server to be used.
- Use of SSL for connecting to the Directory Server
Select one of the following options.
- SSL Authentication enabled
- SSL Authentication disabled
- Directory Server Certificates
Specify the directory used to store directory server CA certificates. Only server CA certificates are stored in this directory.
- Base (DN)
This is the base name (DN) of the directory server to be connected to. Check the settings of the directory server to be used. "dc=fujitsu,dc=com" is set as the initial value when the OpenDS that is included with ServerView Operations Manager is used.
- Administrator (DN)
This is the administrator name (DN) of the directory server to be connected to. Check the settings of the directory server to be used. "cn=Directory Manager" is set when the OpenDS that is included with ServerView Operations Manager is used.
- Password
This is the password of the administrator of the directory server to be connected to. Check the settings of the directory server to be used. Refer to the following manual when the OpenDS that is included with ServerView Operations Manager is used.
"ServerView user management with OpenDS" in the "ServerView Suite User Management in ServerView" manual of ServerView Operations Manager
2.1.1.3 Configuration Parameter Checks
Check configuration parameters following the procedure below:
[Windows]
1. Log in as the administrator.
2. Start the installer.
The installer is automatically displayed when the first DVD-ROM is set in the DVD drive. If it is not displayed, execute
"RcSetup.exe" to start the installer.
3. Select "Tool" on the window displayed, and then click "Environment setup conditions check tool". Configuration parameter checking
will start.
4. When configuration parameter checking is completed, the check results will be saved in the following location.
C:\tmp\ror_precheckresult-YYYY-MM-DD-hhmmss.txt
Refer to the check results and confirm that no errors are contained. If there are errors, remove their causes.
[Linux]
1. Log in to the system as the OS administrator (root).
Boot the admin server in multi-user mode to check that the server meets requirements for installation, and then log in to the system
using root.
2. Set the first Resource Orchestrator DVD-ROM.
3. Execute the following command to mount the DVD-ROM. If the auto-mounting daemon (autofs) is used for mounting the
DVD-ROM, the installer will fail to start because of its "noexec" mount option.
# mount /dev/hdc DVD-ROM_mount_point <RETURN>
4. Execute the setup command (the RcSetup.sh command).
# cd DVD-ROM_mount_point <RETURN>
# ./RcSetup.sh <RETURN>
5. Select "Environment setup conditions check tool" from the menu to execute the tool.
6. When the check is completed, the result will be sent to standard output.
2.1.2 Installation [Windows]
The procedure for manager installation is given below.
Before installing Resource Orchestrator, check that the preparations given in "2.1.1 Preparations" have been performed.
Installation
The procedure for manager installation is given below.
1. Log on as the administrator.
Log on to the system on which the manager is to be installed.
Log on as a user belonging to the local Administrators group.
2. Start the installer.
The installer is automatically displayed when the first DVD-ROM is set in the DVD drive. If it is not displayed, execute
"RcSetup.exe" to start the installer.
3. Click "Manager(Virtual Edition) installation".
4. Following the installation wizard, enter the parameters prepared and confirmed in "Parameters Used for Installation" properly.
Note
- In the event of installation failure, restart and then log in as the user that performed the installation, and perform uninstallation following
the uninstallation procedure.
After that, remove the cause of the failure referring to the meaning of the output message and the suggested corrective actions, and
then perform installation again.
- If there are internal inconsistencies detected during installation, the messages "The problem occurred while installing it" or "Native
Installer Failure" will be displayed and installation will fail. In this case, uninstall the manager and reinstall it. If the problem persists,
please contact Fujitsu technical staff.
- Nullifying Firewall Settings for Ports to be used by Resource Orchestrator
When installing Resource Orchestrator on systems with active firewalls, in order to enable correct communication between the
manager, agents, and clients, disable the firewall settings for the port numbers to be used for communication.
For the port numbers used by Resource Orchestrator and required software, refer to "Appendix A Port List" of the "Setup Guide VE".
However, when port numbers have been changed by editing the services file during installation of Resource Orchestrator, replace the
default port numbers listed in "Appendix A Port List" of the "Setup Guide VE" with the port numbers changed to during installation.
2.1.3 Installation [Linux]
The procedure for manager installation is given below.
Before installing Resource Orchestrator, check that the preparations given in "2.1.1 Preparations" have been performed.
1. Log in to the system as the OS administrator (root).
Boot the admin server that Resource Orchestrator is to be installed on in multi-user mode, and then log in to the system using root.
2. Set the first Resource Orchestrator DVD-ROM and execute the following command to mount the DVD-ROM. If the auto-mounting
daemon (autofs) is used for DVD-ROM auto-mounting, the installer fails to start due to its "noexec" mount option.
# mount /dev/hdc DVD-ROM_mount_point <RETURN>
3. Execute the manager installation command (the RcSetup.sh command).
# DVD-ROM_mount_point/RcSetup.sh <RETURN>
4. Select "1. Manager(Virtual Edition) installation".
5. Perform installation according to the installer's interactive instructions.
Enter the parameters prepared and confirmed in "Parameters Used for Installation" in "2.1.1.2 Collecting and Checking Required
Information".
Note
- Current Directory Setting
Do not set the current directory to the DVD-ROM to allow disk switching.
- In Single User Mode
In single user mode, X Windows does not start, and one of the following operations is required.
- Switching virtual consoles (using the CTL + ALT + PFn keys)
- Making commands run in the background
- Corrective Action for Installation Failure
In the event of installation failure, restart the OS and then log in as the user that performed the installation, and perform uninstallation
following the uninstallation procedure.
After that, remove the cause of the failure referring to the meaning of the output message and the suggested corrective actions, and
then perform installation again.
- If there are internal inconsistencies detected during installation, the messages "The problem occurred while installing it" or "It failed
in the installation" will be displayed and installation will fail. In this case, uninstall the manager and reinstall it. If the problem persists,
please contact Fujitsu technical staff.
- Nullifying Firewall Settings for Ports to be used by Resource Orchestrator
When installing Resource Orchestrator on systems with active firewalls, in order to enable correct communication between the
manager, agents, and clients, disable the firewall settings for the port numbers to be used for communication.
For the port numbers used by Resource Orchestrator and required software, refer to "Appendix A Port List" of the "Setup Guide VE".
However, when port numbers have been changed by editing the services file during installation of Resource Orchestrator, replace the
default port numbers listed in "Appendix A Port List" of the "Setup Guide VE" with the port numbers changed to during installation.
- Uninstall the Related Services
When locating ServerView Deployment Manager in the same subnet, it is necessary to uninstall the related services.
For the method for uninstalling the related services, please refer to "5.12 deployment_service_uninstall" of the "Command Reference".
- Destination Directories
The destination directories are fixed as those below and cannot be changed.
- /opt
- /etc/opt
- /var/opt
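Since the destination directories are fixed, free space on their filesystems can be checked before installation. A minimal sketch (the directories may not all exist before installation, in which case the root filesystem is reported):

```shell
# Sketch: report free space on the filesystems that hold the fixed
# destination directories. Required sizes are listed in "1.4.2.4 Static
# Disk Space" and "1.4.2.5 Dynamic Disk Space" of the "Setup Guide VE".
for d in /opt /etc/opt /var/opt; do
    target=$d
    [ -e "$target" ] || target=/       # directory may not exist before install
    free_mb=$(df -P "$target" | awk 'NR==2 { printf "%d", $4/1024 }')
    echo "$d: ${free_mb} MB free"
done
```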
2.1.4 License Setup
This section explains license setup.
To use Resource Orchestrator, license setup is required after the installation.
For details on license configuration, refer to "Chapter 7 Logging in to Resource Orchestrator" in the "Setup Guide VE".
2.2 Agent Installation
This section explains the procedure for physical server or VM host agent installation.
When deploying multiple new managed servers using Windows or Linux, cloning enables copying of the data installed on a server (OS,
updates, Resource Orchestrator agents, and common software installed on servers) to other servers.
For details, refer to "Chapter 7 Cloning [Windows/Linux]" of the "User's Guide VE".
2.2.1 Preparations
This section explains the preparations and checks required before commencing installation.
2.2.1.1 Software Preparation and Checks
Software preparation and checks are explained in the following sections.
- Exclusive Software Checks
Refer to "Exclusive Software Checks".
- Required Software Checks
Refer to "Required Software Preparation and Checks".
Exclusive Software Checks
Before installing Resource Orchestrator, check that the software listed in "1.4.2.3 Exclusive Software" of the "Setup Guide VE" and the
agent of Resource Orchestrator have not been installed on the system.
Use the following procedure to check that exclusive software has not been installed.
[Windows/Hyper-V]
1. Open "Add or Remove Programs" on the Windows Control Panel.
The [Add or Remove Programs] window will be displayed.
2. Check that none of the software listed in "1.4.2.3 Exclusive Software" of the "Setup Guide VE" has been installed on the system,
and that the following, which indicates that a Resource Orchestrator agent is installed, is not displayed.
- "ServerView Resource Orchestrator Agent"
3. When any of the exclusive software is displayed on the [Add or Remove Programs] window, uninstall it according to the procedure
described in the relevant manual before installing Resource Orchestrator.
If agents of an earlier version of Resource Orchestrator have been installed, they can be upgraded. Refer to "4.3 Agent".
When reinstalling an agent on a system on which an agent of the same version of Resource Orchestrator has been installed, perform
uninstallation referring to "3.2.1 Uninstallation [Windows/Hyper-V]" and then perform installation.
Information
For Windows Server 2008, select "Programs and Features" from the Windows Control Panel.
[Linux]
1. Check that none of the software listed in "1.4.2.3 Exclusive Software" in the "Setup Guide VE" are displayed. Execute the following
command and check if Resource Orchestrator Agent has been installed.
# rpm -q FJSVrcvat <RETURN>
2. If the names of exclusive software have been displayed, uninstall them according to the procedure described in the relevant manual
before proceeding.
If agents of an earlier version of Resource Orchestrator have been installed, they can be upgraded. Refer to "4.3 Agent".
When reinstalling an agent on a system on which an agent of the same version of Resource Orchestrator has been installed, perform
uninstallation referring to "3.2.2 Uninstallation [Linux/VMware/Xen/KVM/Oracle VM]" and then perform installation.
Note
- When uninstalling exclusive software, there are cases where other system administrators might have installed the software, so
check that deleting the software causes no problems before actually doing so.
- With the standard settings of Red Hat Enterprise Linux 5 or later, when DVD-ROMs are mounted automatically, execution of
programs on the DVD-ROM cannot be performed. Release the automatic mount settings and perform mounting manually, or
start installation after copying the contents of the DVD-ROM to the hard disk.
When copying the contents of the DVD-ROM, replace "DVD-ROM_mount_point" with the used directory in the procedures
in this manual.
Required Software Preparation and Checks
Before installing Resource Orchestrator, check that the required software given in "1.4.2.2 Required Software" of the "Setup Guide VE"
has been installed. If it has not been installed, install it before continuing.
Note
- ServerView Agents Settings
To operate Resource Orchestrator correctly on PRIMERGY series servers, perform the necessary settings for SNMP services during
installation of ServerView Agents.
For how to perform SNMP service settings, refer to the ServerView Agents manual.
- For the SNMP community name, specify the same value as the SNMP community name set for the management blade.
- For the SNMP community name, set Read (reference) or Write (reference and updating) authority.
- For the host that receives SNMP packets, select "Accept SNMP packets from any host" or "Accept SNMP packets from these
hosts" and set the admin LAN IP address of the admin server.
- For the SNMP trap target, set the IP address of the admin server.
When an admin server with multiple NICs is set as the SNMP trap target, specify the IP address of the admin LAN used for
communication with the managed server.
- The "setupcl.exe" and "sysprep.exe" Modules
For Windows OSs other than Windows Server 2008, it is necessary to specify storage locations for the "setupcl.exe" and "sysprep.exe"
modules during installation. Obtain the newest modules before starting installation of Resource Orchestrator.
For the method of obtaining the modules, refer to "1.4.2.2 Required Software" of the "Setup Guide VE".
Extract the obtained module using the following method:
Example
When WindowsServer2003-KB926028-v2-x86-JPN.exe has been placed in c:\temp
>cd /d c:\temp <RETURN>
>WindowsServer2003-KB926028-v2-x86-JPN.exe /x <RETURN>
During installation, specify the cabinet file "deploy.cab" in the extracted folder, or the "setupcl.exe" and "sysprep.exe" modules
contained in "deploy.cab".
After agent installation is complete, the extracted module is no longer necessary.
Red Hat Enterprise Linux 6 Preconfiguration
Only perform this when using Red Hat Enterprise Linux 6 as the basic software of a server.
When using cloning and server switchover, perform the following procedure to modify the configuration file.
1. Execute the following command.
# systool -c net <RETURN>
Example
# systool -c net <RETURN>
Class = "net"
Class Device = "eth0"
Device = "0000:01:00.0"
Class Device = "eth1"
Device = "0000:01:00.1"
Class Device = "eth2"
Device = "0000:02:00.0"
Class Device = "eth3"
Device = "0000:02:00.1"
Class Device = "lo"
Class Device = "sit0"
2. Confirm the device name which is displayed after "Class Device =" and the PCI bus number which is displayed after "Device ="
in the command output results.
3. Correct the configuration file.
After confirming the correspondence of the device name and MAC address in the following configuration file, change
ATTR{address}=="MAC_address" to KERNELS=="PCI_bus_number".
All corresponding lines should be corrected.
Configuration File Storage Location
/etc/udev/rules.d/70-persistent-net.rules
Example
- Before changing
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*",
ATTR{address}=="MAC_address", ATTR{type}=="1", KERNEL=="eth*",
NAME="Device_name"
- After changing
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*",
KERNELS=="PCI_bus_number", ATTR{type}=="1", KERNEL=="eth*",
NAME="Device_name"
4. After restarting the managed servers, check if communication with the entire network is possible.
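The substitution in step 3 can be scripted with sed. The sketch below operates on a temporary copy of a rule file; the MAC address, PCI bus number, and device name are placeholder examples, not your actual values, which must be taken from the systool output in step 2.

```shell
# Rewrite a persistent-net rule from a MAC-address match to a PCI-bus match.
# All values here are illustrative; on a real server the file is
# /etc/udev/rules.d/70-persistent-net.rules (back it up before editing).
rules=$(mktemp)
cat > "$rules" <<'EOF'
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
EOF
# Swap the MAC-address clause for the PCI bus number of the same device.
sed -i 's/ATTR{address}=="00:11:22:33:44:55"/KERNELS=="0000:01:00.0"/' "$rules"
cat "$rules"
```

Repeat the substitution for every ethX line reported by systool.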
Configuration File Check
- When using Red Hat Enterprise Linux
When using the following functions, check the configuration files of network interfaces before installing Resource Orchestrator, and
release any settings made to bind MAC addresses.
- Server switchover
- Cloning
Only perform this when using Red Hat Enterprise Linux 4 AS/ES as the basic software of a server.
Refer to the /etc/sysconfig/network-scripts/ifcfg-ethX file (ethX is an interface name such as eth0 or eth1), and check that there is no
line starting with "HWADDR=" in the file.
If there is a line starting with "HWADDR=", this is because the network interface is bound to a MAC address. In that case, comment
out the line.
Example
When the admin LAN interface is eth0
DEVICE=eth0
#HWADDR=xx:xx:xx:xx:xx:xx <- If this line exists, comment it out.
ONBOOT=yes
TYPE=Ethernet
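The comment-out shown in the example can also be done non-interactively. This is a minimal sketch on a throwaway copy of an ifcfg file; the MAC address is a placeholder, and on a real server the target is /etc/sysconfig/network-scripts/ifcfg-ethX.

```shell
# Comment out any HWADDR= binding in an ifcfg file (illustrative copy).
cfg=$(mktemp)
printf 'DEVICE=eth0\nHWADDR=00:11:22:33:44:55\nONBOOT=yes\nTYPE=Ethernet\n' > "$cfg"
# Prefix the HWADDR line with "#" so the interface is no longer MAC-bound.
sed -i 's/^HWADDR=/#HWADDR=/' "$cfg"
cat "$cfg"
```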
- When using SUSE Linux Enterprise Server
- When using cloning and server switchover, perform the following procedure to modify the configuration file.
1. Execute the following command.
# systool -c net <RETURN>
Example
# systool -c net <RETURN>
Class = "net"
Class Device = "eth0"
Device = "0000:01:00.0"
Class Device = "eth1"
Device = "0000:01:00.1"
Class Device = "eth2"
Device = "0000:02:00.0"
Class Device = "eth3"
Device = "0000:02:00.1"
Class Device = "lo"
Class Device = "sit0"
2. Confirm the device name which is given after "Class Device =" and PCI bus number which is given after "Device =" in the
command output results.
3. Modify the configuration file.
When using SUSE Linux Enterprise Server 10
a. After confirming the correspondence of the device name and MAC address in the following configuration file, change
SYSFS{address}=="MAC_address" to ID=="PCI_bus_number".
All corresponding lines should be corrected.
The correspondence of the device name and MAC address confirmed here will also be used in step b.
/etc/udev/rules.d/30-net_persistent_names.rules
Before changing
SUBSYSTEM=="net", ACTION=="add", SYSFS{address}=="MAC_address",
IMPORT="/lib/udev/rename_netiface %k device_name"
After changing
SUBSYSTEM=="net", ACTION=="add", ID=="PCI_bus_number", IMPORT="/lib/
udev/rename_netiface %k device_name"
b. Based on the results of step 1. and step 3. a., change the name of the following file to a name that includes the
PCI bus number.
Before changing
/etc/sysconfig/network/ifcfg-eth-id-MAC address
After changing
/etc/sysconfig/network/ifcfg-eth-bus-pci-PCI bus number
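The rename in step b is a single mv per interface. The sketch below uses a temporary directory in place of /etc/sysconfig/network, and the MAC address and PCI bus number are placeholder examples.

```shell
# Illustrative rename of a SLES 10 interface configuration file.
netdir=$(mktemp -d)                 # stands in for /etc/sysconfig/network
touch "$netdir/ifcfg-eth-id-00:11:22:33:44:55"
# Rename from the MAC-address form to the PCI-bus-number form.
mv "$netdir/ifcfg-eth-id-00:11:22:33:44:55" \
   "$netdir/ifcfg-eth-bus-pci-0000:01:00.0"
ls "$netdir"
```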
When using SUSE Linux Enterprise Server 11
Change ATTR{address}=="MAC address" to KERNELS=="PCI_bus_number" in the following configuration file.
All corresponding lines should be corrected.
/etc/udev/rules.d/70-persistent-net.rules
Before changing
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*",
ATTR{address}=="MAC_address", ATTR{type}=="1", KERNEL=="eth*",
NAME="device_name"
After changing
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*",
KERNELS=="PCI_bus_number", ATTR{type}=="1", KERNEL=="eth*",
NAME="device_name"
4. After restarting the managed servers, check if communication with the entire network is possible.
- When using cloning or the backup and restore method for server switchover, during installation perform partition settings so that
names of device paths are defined using the "Device Name" format (for example: /dev/sda1) in the /etc/fstab file.
When installation is already complete, change the names of device paths defined in the boot configuration files /boot/efi/SuSE/
elilo.conf and /boot/grub/menu.lst, and the /etc/fstab file so they use the "Device Name" format (for example: /dev/sda1). For
specific details about the mount definition, please refer to the following URL and search for the Document ID: 3580082.
URL: http://www.novell.com/support/microsites/microsite.do (As of February 2012)
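A quick way to spot fstab entries that are not yet in the plain device-name format is to list lines that reference /dev/disk/ paths. The sample file below is illustrative; on a real server, check /etc/fstab itself.

```shell
# Flag fstab entries that do not use the /dev/sdXN device-name format.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/disk/by-id/scsi-SATA_example-part1 / ext3 defaults 1 1
/dev/sda2 /home ext3 defaults 1 2
EOF
# Lines printed here still need converting to the "Device Name" format.
grep '^/dev/disk/' "$fstab"
```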
2.2.1.2 Collecting and Checking Required Information
Before installing Resource Orchestrator, collect required information and check the system status, then determine the information to be
specified on the installation window. The information that needs to be prepared is given below.
- Installation folder and available disk space
Decide the installation folder for Resource Orchestrator. Check that the necessary disk space can be secured on the drive for installation.
For the amount of disk space necessary for Resource Orchestrator, refer to "1.4.2.4 Static Disk Space" and "1.4.2.5 Dynamic Disk
Space" of the "Setup Guide VE".
- Port number
When Resource Orchestrator is installed, the port numbers used by it will automatically be set in the services file of the system. So
usually, there is no need to pay attention to port numbers.
If the port numbers used by Resource Orchestrator are being used for other applications, a message indicating that the numbers are in
use is displayed when the installer is started, and installation will stop.
In that case, describe the entries for the port numbers used by Resource Orchestrator in the services file using numbers not used by
other software, and then start the installer.
For details, refer to "3.2.6 Changing Port Numbers" of the "User's Guide VE".
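Before starting the installer, you can check whether a port Resource Orchestrator will use is already assigned. The sketch below greps a temporary sample services file; on a real system, check /etc/services and the listening sockets. Port 23458 (nfagent) is one example used later in this chapter.

```shell
# Look up a candidate port in a services file (temporary sample here).
svc=$(mktemp)
printf 'someapp\t\t23458/tcp\t\t# conflicting entry\n' > "$svc"
# A match means the port is taken and the entry must be moved to a free number.
if grep -qw '23458/tcp' "$svc"; then
  echo "port 23458 already assigned"
fi
```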
- Check the status of the admin LAN and NIC
Decide the network (IP addresses) to be used for the admin LAN.
Check that the NIC used for communication with the admin LAN is enabled.
For admin LANs, refer to "4.2.2 IP Addresses (Admin LAN)" of the "Setup Guide VE".
[Linux/Xen/KVM]
Make the numerals of the managed server's network interface names (ethX) a consecutive sequence starting from 0. For the
settings, refer to the manual for the OS.
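The numbering requirement above can be checked with a short loop. The interface list below is a hardcoded example; on a real server it would come from something like `ls /sys/class/net | grep '^eth' | sort`.

```shell
# Check that ethX numbering is consecutive from 0 (sample list).
ifaces="eth0 eth1 eth2"
n=0; ok=yes
for i in $ifaces; do
  # Each interface must match ethN for the next N in sequence.
  [ "$i" = "eth$n" ] || ok=no
  n=$((n+1))
done
echo "consecutive: $ok"
```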
- Check the target disk of image operations
For backup and restoration of system images to disks, refer to "8.1 Overview" of the "Operation Guide VE".
For cloning of disks, refer to "7.1 Overview" of the "User's Guide VE".
- Windows Volume License Information [Windows]
When using the following functions, you must have a volume license for the version of Windows to be installed on managed servers
by Resource Orchestrator.
Check whether the Windows license you have purchased is a volume license.
- Server switchover (HBA address rename method/ VIOM server profile switchover method)
- Cloning
- Restoration after server replacement
- Server replacement using HBA address rename
When using cloning, you must enter volume license information when installing Resource Orchestrator.
Depending on the version of Windows being used, check the following information prior to installation.
- For Windows Server 2003
Check the product key.
Generally, the product key is provided with the DVD-ROM of the Windows OS purchased.
- For Windows Server 2008
Check the information necessary for license authentication (activation).
The two activation methods are Key Management Service (KMS) and Multiple Activation Key (MAK). Check which method
you will use.
Check the following information required for activation depending on the method you will use.
- Activation Information
Table 2.1 Activation Information Methods and Information to Check
- KMS (*1)
- The KMS host name (FQDN) or the computer name or IP address
- Port number (Default: 1688) (*2)
- MAK
- The MAK key
*1: When using Domain Name Service (DNS) to automatically find the KMS host, checking is not necessary.
*2: When changing the port number from the default (1688), correct the definition file after installing agents. For details, refer
to "7.2 Collecting a Cloning Image" of the "User's Guide VE".
- Proxy Server Information
When using a proxy server to connect to the KMS host (KMS method) or the Volume Activation Management Tool (VAMT)
to authenticate a proxy license (MAK method), check the host name or IP address, and the port number of the proxy server.
- Administrator's Password
Check the password as it is necessary for performing activation.
- Windows Administrator accounts
When using Windows Server 2008, check whether Administrator accounts have been changed (renamed).
Environments which have been changed (renamed) are not supported.
2.2.2 Installation [Windows/Hyper-V]
This section explains the procedure for agent installation.
Before installing Resource Orchestrator, check that the preparations given in "2.2.1 Preparations" have been performed.
1. Log on to Windows as the administrator.
Log on to the system on which the agent is to be installed. Log on as a user belonging to the local Administrators group.
2. Start the installer from the window displayed when the first Resource Orchestrator DVD-ROM is set.
Click "Agent installation" which is displayed on the window.
3. The Resource Orchestrator setup window will be displayed.
Check the contents of the license agreement window etc. and then click <Yes>.
4. The [Select Installation Folder] window will be displayed.
Click <Next>> to use the default Installation Folder. To change folders, click <Browse>, change folders, and click <Next>>.
Note
When changing the folders, be careful about the following points.
- Do not specify the installation folder of the system (such as C:\).
- Enter the location using 100 characters or less. Do not use double-byte characters or the following symbols in the folder name.
""", "|", ":", "*", "?", "/", ".", "<", ">", ",", "%", "&", "^", "=", "!", ";", "#", "'", "+", "[", "]", "{", "}"
- When installing this product to Windows 2003 x64 Edition or Windows 2008 x64 Edition, the following folder names cannot
be specified for the installation folders.
- "%SystemRoot%\System32\"
- When using cloning, installation on the OS system drive is advised, because Sysprep initializes the drive letter during deployment
of cloning images.
5. The [Admin Server Registration] window will be displayed.
Specify the admin LAN IP address of the admin server, the folder containing the "setupcl.exe" and "sysprep.exe" modules, then
click <Next>>.
Admin Server IP Address
Specify the IP address of the admin server. When the admin server has multiple IP addresses, specify the IP address used for
communication with managed servers.
Folder Containing the setupcl.exe and sysprep.exe Modules
Click <Refer> and specify "deploy.cab" or "setupcl.exe" and "sysprep.exe" as prepared in "2.2.1.1 Software Preparation and
Checks".
Information
With Windows Server 2008, "setupcl.exe" and "sysprep.exe" are installed along with the OS so specification is not necessary
(<Refer> will be disabled).
6. The [License Authentication] window will be displayed.
Enter the license authentication information for the Windows volume license.
As license authentication information is not necessary if cloning will not be used, click <Next>> without selecting "Using the cloning
feature of this product".
If cloning will be used, depending on the version of Windows being used, specify the following information collected in "Windows
Volume License Information [Windows]" of "2.2.1.2 Collecting and Checking Required Information", and click <Next>>.
For Windows Server 2003
Product Key
Enter the Windows product key for the computer the agent is to be installed on.
Confirm Product Key
Enter the Windows product key again to confirm it.
For Windows Server 2008
License Authentication Method
Select the license authentication method from Key Management Service (KMS) and Multiple Activation Key (MAK).
- When Key Management Service (KMS) is selected
KMS host
Enter the host name (FQDN) of the KMS host, or its computer name or IP address.
When using Domain Name Service (DNS) to automatically find the KMS host, this is not necessary.
- When Multiple Activation Key (MAK) is selected
The MAK key
Enter the MAK key for the computer the agent is to be installed on.
Confirm Multiple Activation Key
Enter the MAK key again to confirm it.
Proxy server used for activation
Enter the host name or IP address of the proxy server.
When the proxy server has a port number, enter the port number.
Administrator's Password
Enter the administrator password for the computer the agent is to be installed on.
Note
If an incorrect value is entered for "Product Key", "Key Management Service host", "The MAK Key", or "Proxy server used for
activation" on the [License Authentication Information Entry] window, cloning will be unable to be used.
Check that the correct values have been entered.
7. The [Start Copying Files] window will be displayed.
Check that there are no mistakes in the contents displayed on the window, and then click <Install>>.
Copying of files will start.
To change the contents, click <<Back>.
8. The Resource Orchestrator setup completion window will be displayed.
When setup is completed, the [Installshield Wizard Complete] window will be displayed.
Click <Finish> and close the window.
Note
- Corrective Action for Installation Failure
When installation is stopped due to errors (system errors, processing errors such as system failure, or errors due to execution conditions)
or cancellation by users, remove the causes of any problems, and then take corrective action as follows.
- Open "Add or Remove Programs" from the Windows Control Panel, and when "ServerView Resource Coordinator VE Agent"
is displayed, uninstall it and then install the agent again.
For uninstallation, refer to "3.2 Agent Uninstallation".
Information
For Windows Server 2008, select "Programs and Features" from the Windows Control Panel.
- If "ServerView Resource Coordinator VE Agent" is not displayed, install it again.
- Nullifying Firewall Settings for Ports to be used by Resource Orchestrator
When installing Resource Orchestrator on systems with active firewalls, in order to enable the manager to communicate with agents
correctly, disable the firewall settings for the port numbers to be used for communication.
For the port numbers used by Resource Orchestrator and required software, refer to "Appendix A Port List" of the "Setup Guide VE".
However, when port numbers have been changed by editing the services file during installation of Resource Orchestrator, replace the
default port numbers listed in "Appendix A Port List" of the "Setup Guide VE" with the port numbers changed to during installation.
- Uninstall the Related Services
When installing ServerView Deployment Manager after Resource Orchestrator has been installed, or using ServerView Deployment
Manager in the same subnet, it is necessary to uninstall the related services.
For the method for uninstalling the related services, please refer to "5.12 deployment_service_uninstall" of the "Command Reference".
2.2.3 Installation [Linux/VMware/Xen/KVM/Oracle VM]
This section explains the procedure for agent installation.
Before installing Resource Orchestrator, check that the preparations given in "2.2.1 Preparations" have been performed.
1. Log in to the system as the OS administrator (root).
Boot the managed server on which the agent is to be installed in multi-user mode, and then log in to the
system as root.
[Xen/KVM]
Log in from the console.
2. Set the first Resource Orchestrator DVD-ROM.
3. Execute the following command to mount the DVD-ROM. If the DVD-ROM is mounted by the auto-mounting daemon (autofs), the
installer will fail to start because of the "noexec" mount option.
# mount /dev/hdc DVD-ROM_mount_point <RETURN>
4. Execute the agent installation command (RcSetup.sh).
# cd DVD-ROM_mount_point <RETURN>
# ./RcSetup.sh <RETURN>
5. Perform installation according to the installer's interactive instructions.
6. Enter the host name or IP address of a connected admin server.
[Xen]
When using Citrix XenServer, restart the managed server.
When using the Linux virtual machine function of Red Hat Enterprise Linux 5 or Red Hat Enterprise Linux 6, use the following procedure.
1. After using the following command to disable the automatic startup setting of the xend daemon, restart the managed server.
# chkconfig xend off <RETURN>
2. After the restart is complete, use the following command to update the bind settings of the MAC address. Then, enable the automatic
startup setting of the xend daemon and start the xend daemon.
# /usr/local/sbin/macbindconfig create <RETURN>
# chkconfig xend on <RETURN>
# service xend start <RETURN>
Note
- Corrective Action for Installation Failure
In the event of installation failure, restart and then log in as the user that performed the installation, and perform uninstallation following
the uninstallation procedure.
After that, remove the cause of the failure referring to the meaning of the output message and the suggested corrective actions, and
then perform installation again.
- Nullifying Firewall Settings for Ports to be used by Resource Orchestrator
When installing Resource Orchestrator on systems with active firewalls, in order to enable the manager to communicate with agents
correctly, disable the firewall settings for the port numbers to be used for communication.
Example
[VMware]
# /usr/sbin/esxcfg-firewall -openPort 23458,tcp,in,"nfagent" <RETURN>
For the port numbers used by Resource Orchestrator and required software, refer to "Appendix A Port List" of the "Setup Guide VE".
However, when port numbers have been changed by editing the services file during installation of Resource Orchestrator, replace the
default port numbers listed in "Appendix A Port List" of the "Setup Guide VE" with the port numbers changed to during installation.
- When installation was performed without using the console [Xen/KVM]
When installation is performed by logging in from somewhere other than the console, the network connection will be severed before
installation is complete, and it is not possible to confirm if the installation was successful. Log in from the console and restart the
managed server. After restarting the server, follow the procedure in "Corrective Action for Installation Failure" and perform installation
again.
- Uninstall the Related Services
When installing ServerView Deployment Manager after Resource Orchestrator has been installed, or using ServerView Deployment
Manager in the same subnet, it is necessary to uninstall the related services.
For the method for uninstalling the related services, please refer to "5.12 deployment_service_uninstall" of the "Command Reference".
2.2.4 Installation [Solaris]
This section explains the procedure for agent installation.
Before installing Resource Orchestrator, check that the preparations given in "2.2.1 Preparations" have been performed.
1. Log in to the system as the OS administrator (root).
Boot the managed server on which the agent is to be installed in multi-user mode, and then log in to the
system as root.
2. Set the first DVD-ROM and execute the following command, then move to the directory where the installer is stored.
# cd DVD-ROM_mount_point/DISK1/Agent/Solaris/agent <RETURN>
3. Execute the agent installer (the rcxagtinstall command).
# ./rcxagtinstall <RETURN>
ServerView Resource Orchestrator V3.0.0
Copyright FUJITSU LIMITED 2007-2011
This program will install ServerView Resource Orchestrator Agent on your system.
4. The license agreement is displayed.
This program is protected by copyright law and international treaties.
Unauthorized reproduction or distribution of this program, or any portion of it, may result in severe civil and
criminal penalties, and will be prosecuted to the maximum extent possible under law.
Copyright FUJITSU LIMITED 2007-2011
Do you want to continue the installation of this software? [y,n,?,q]
Check the content of the agreement, and enter "y" if you agree or "n" if you do not agree.
- If "n" or "q" is entered
The installation is discontinued.
- If "?" is entered
An explanation of the entry method is displayed.
5. Enter "y" and the installation will start.
Example
INFO : Starting Installation of ServerView Resource Orchestrator Agent...
INFO : Package FJSVrcxat was successfully installed.
...
6. When installation is completed successfully, the following message will be displayed.
INFO : ServerView Resource Orchestrator Agent was installed successfully.
Note
- Corrective Action for Installation Failure
Execute the following command, delete the packages from the environment in which installation failed, and then perform installation
again.
# cd DVD-ROM_mount_point/agent <RETURN>
# ./rcxagtuninstall <RETURN>
- Nullifying Firewall Settings for Ports to be used by Resource Orchestrator
When installing Resource Orchestrator on systems with active firewalls, in order to enable the manager to communicate with agents
correctly, disable the firewall settings for the port numbers to be used for communication.
For the port numbers used by Resource Orchestrator and required software, refer to "Appendix A Port List" of the "Setup Guide VE".
However, when port numbers have been changed by editing the services file during installation of Resource Orchestrator, replace the
default port numbers listed in "Appendix A Port List" of the "Setup Guide VE" with the port numbers changed to during installation.
2.3 HBA address rename setup service Installation
This section explains installation of the HBA address rename setup service.
The HBA address rename setup service is only necessary when using HBA address rename.
For details, refer to "1.6 System Configuration" of the "Setup Guide VE".
2.3.1 Preparations
This section explains the preparations and checks required before commencing installation.
2.3.1.1 Software Preparation and Checks
Software preparation and checks are explained in the following sections.
Exclusive Software Checks
Before installing Resource Orchestrator, check that the software listed in "1.4.2.3 Exclusive Software" of the "Setup Guide VE" and the
HBA address rename setup service of Resource Orchestrator have not been installed on the system.
Use the following procedure to check that exclusive software has not been installed.
1. Check if the HBA address rename setup service has been installed using the following procedure:
[Windows]
Open "Add or Remove Programs" from the Windows Control Panel, and check that none of the software listed in "1.4.2.3 Exclusive
Software" of the "Setup Guide VE" or the HBA address rename setup service have been installed.
Information
For Windows Server 2008 or Windows Vista, select "Programs and Features" from the Windows Control Panel.
[Linux]
Execute the following command and check if the package has been installed.
# rpm -q FJSVrcvhb FJSVscw-common FJSVscw-tftpsv <RETURN>
2. If exclusive software has been installed, uninstall it according to the procedure described in the relevant manual before proceeding.
2.3.1.2 Collecting and Checking Required Information
Before installing Resource Orchestrator, collect required information and check the system status, then determine the information to be
specified on the installation window. The information that needs to be prepared is given below.
- Installation folder and available disk space
Decide the installation folder for Resource Orchestrator. Check that the necessary disk space can be secured on the drive for installation.
For the amount of disk space necessary for Resource Orchestrator, refer to "1.4.2.4 Static Disk Space" and "1.4.2.5 Dynamic Disk
Space" of the "Setup Guide VE".
2.3.2 Installation [Windows]
Install the HBA address rename setup service using the following procedure.
Before installing Resource Orchestrator, check that the preparations given in "2.3.1 Preparations" have been performed.
1. Log on to Windows as the administrator.
Log on to the system on which the HBA address rename setup service is to be installed. Log on as a user belonging to the local
Administrators group.
2. Start the installer from the window displayed when the first Resource Orchestrator DVD-ROM is set.
Click "HBA address rename setup service installation" on the window.
Information
If the above window does not open, execute "RcSetup.exe" from the DVD-ROM drive.
3. Enter parameters prepared and confirmed in "Parameters Used for Installation" according to the installer's instructions.
4. The Resource Orchestrator setup window will be displayed.
Check the contents of the license agreement window etc. and then click <Yes>.
5. The [Select Installation Folder] window will be displayed.
Click <Next>> to use the default Installation Folder. To change folders, click <Browse>, change folders, and click <Next>>.
Note
When changing the folders, be careful about the following points.
- Do not specify the installation folder of the system (such as C:\).
- Enter the location using 100 characters or less. Do not use double-byte characters or the following symbols in the folder name.
""", "|", ":", "*", "?", "/", ".", "<", ">", ",", "%", "&", "^", "=", "!", ";", "#", "'", "+", "[", "]", "{", "}"
- When installing this product to Windows 2003 x64 Edition or Windows 2008 x64 Edition, the following folder names cannot
be specified for the installation folders.
- "%SystemRoot%\System32\"
- Folder names including "Program Files" (except the default "C:\Program Files (x86)")
6. The [Start Copying Files] window will be displayed.
Check that there are no mistakes in the contents displayed on the window, and then click <Install>>.
Copying of files will start.
To change the contents, click <<Back>.
7. The Resource Orchestrator setup completion window will be displayed.
When using the HBA address rename setup service immediately after configuration, check the "Yes, launch it now." checkbox.
Click <Finish> and close the window.
- If the check box is checked
The HBA address rename setup service will start after the window is closed.
- If the check box is not checked
Refer to "8.2.1 Settings for the HBA address rename Setup Service" of the "Setup Guide VE", and start the HBA address rename
setup service.
Note
- Corrective Action for Installation Failure
When installation is stopped due to errors (system errors, processing errors such as system failure, or errors due to execution conditions)
or cancellation by users, remove the causes of any problems, and then take corrective action as follows.
- Open "Add or Remove Programs" from the Windows Control Panel, and if "ServerView Resource Orchestrator HBA address
rename Setup Service" is displayed, uninstall it and then install the service again.
For uninstallation, refer to "3.3 HBA address rename setup service Uninstallation".
Information
For Windows Server 2008 or Windows Vista, select "Programs and Features" from the Windows Control Panel.
- If "ServerView Resource Orchestrator HBA address rename setup service" is not displayed, install it again.
2.3.3 Installation [Linux]
Install the HBA address rename setup service using the following procedure.
Before installing Resource Orchestrator, check that the preparations given in "2.3.1 Preparations" have been performed.
1. Log in to the system as the OS administrator (root).
Boot the server on which the HBA address rename setup service is to be installed in multi-user mode, and then log in to the
system as root.
2. Set the first Resource Orchestrator DVD-ROM.
3. Execute the following command to mount the DVD-ROM. If the DVD-ROM is mounted by the auto-mounting daemon (autofs), the
installer will fail to start because of the "noexec" mount option.
# mount /dev/hdc DVD-ROM_mount_point <RETURN>
4. Execute the installation command (RcSetup.sh).
# cd DVD-ROM_mount_point <RETURN>
# ./RcSetup.sh <RETURN>
5. Perform installation according to the installer's instructions.
Note
- Corrective Action for Installation Failure
Execute the following command, delete the packages from the environment in which installation failed, and then perform installation
again.
# cd DVD-ROM_mount_point/DISK1/HBA/Linux/hbaar <RETURN>
# ./rcxhbauninstall <RETURN>
Chapter 3 Uninstallation
This chapter explains the uninstallation of ServerView Resource Orchestrator.
The uninstallation of managers, agents, and the HBA address rename setup service is performed in the following order:
1. Manager Uninstallation
Refer to "3.1 Manager Uninstallation".
2. Agent Uninstallation
Refer to "3.2 Agent Uninstallation".
3. HBA address rename setup service Uninstallation
For uninstallation, refer to "3.3 HBA address rename setup service Uninstallation".
When uninstalling a Resource Orchestrator Manager, use "Uninstall (middleware)" for the uninstallation.
"Uninstall (middleware)" is a common tool for Fujitsu middleware products.
The Resource Orchestrator Manager is compatible with "Uninstall (middleware)".
When a Resource Orchestrator Manager is installed, "Uninstall (middleware)" is installed first, and then "Uninstall (middleware)" will
control the installation and uninstallation of Fujitsu middleware products. If "Uninstall (middleware)" has already been installed, the
installation is not performed.
For the uninstallation of Uninstall (middleware), refer to "3.3 HBA address rename setup service Uninstallation".
3.1 Manager Uninstallation
The uninstallation of managers is explained in the following sections.
The procedure for manager uninstallation is given below.
- Preparations
Refer to "3.1.1 Preparations".
- Uninstallation
Refer to "3.1.2 Uninstallation [Windows]" or "3.1.3 Uninstallation [Linux]".
3.1.1 Preparations
This section explains the preparations and checks required before commencing uninstallation.
Pre-uninstallation Advisory Notes
- Checking system images and cloning images
[Windows]
System images and cloning images obtained using Resource Orchestrator are not deleted.
These images will remain in the specified image file storage folder.
When they are not necessary, delete them manually after uninstalling Resource Orchestrator.
Folder storing the image files (default)
Installation_folder\SVROR\ScwPro\depot
[Linux]
The system images and cloning images collected by this product are deleted.
If the image file storage directory was changed from the default, the images remain in that directory.
- Checking HBA address rename
When using HBA address rename, the manager sets the WWN for the HBA of each managed server.
When uninstalling a manager be sure to do so in the following order:
1. Delete servers (*1)
2. Uninstall the manager
*1: For the server deletion method, refer to "5.2 Deleting Managed Servers" in the "User's Guide VE".
Note
When using HBA address rename, if the manager is uninstalled without servers being deleted, the WWNs of the servers are not reset
to the factory default values.
Ensure uninstallation of managers is performed only after servers are deleted.
When operating without resetting the WWN, if the same WWN is setup on another server, data may be damaged if the volume is
accessed at the same time.
Also, when operating managers in cluster environments, release cluster settings before uninstalling managers. For how to release
cluster settings, refer to "Appendix B Manager Cluster Operation Settings and Deletion".
- Back up (copy) certificates
When operating managers in cluster environments, back up (copy) certificates before performing uninstallation.
Manager certificates are stored in the following folders:
[Windows]
Drive_name:\Fujitsu\ROR\SVROR\certificate
[Linux]
Shared_disk_mount_point/Fujitsu/ROR/SVROR
- Definition files
All definition files created for using Resource Orchestrator will be deleted.
If the definition files are necessary, before uninstalling Resource Orchestrator back up (copy) the folder below to another folder.
[Windows]
Installation_folder\Manager\etc\customize_data
[Linux]
/etc/opt/FJSVrcvmr/customize_data
3.1.2 Uninstallation [Windows]
The procedure for manager uninstallation is given below.
Before uninstalling this product, check that the preparations given in "3.1.1 Preparations" have been performed.
1. Log on to Windows as the administrator.
Log on to the system from which the manager is to be uninstalled. Log on as a user belonging to the local Administrators group.
2. Start the uninstaller. Select [Start]-[All Programs]-[Fujitsu]-[Uninstall (middleware)] from the Windows menu. Click the product
name, then click <Remove>, and the uninstallation window will open.
Note
- After uninstallation, the installation folder (by default, C:\Fujitsu\ROR\SVROR or C:\Program Files (x86)\Resource Orchestrator)
may remain. If it is not necessary, delete the folder and its contents.
(Confirm whether the folder can be deleted, referring to the caution below.)
If the system and cloning images backed up during uninstallation are no longer needed, delete them manually.
3.1.3 Uninstallation [Linux]
The procedure for manager uninstallation is given below.
Before uninstalling this product, check that the preparations given in "3.1.1 Preparations" have been performed.
1. Log in to the system as the OS administrator (root).
Log in to the managed server from which Resource Orchestrator will be uninstalled, using root.
2. Launch the uninstallation command (cimanager.sh).
Perform uninstallation according to the uninstaller's interactive instructions.
# /opt/FJSVcir/cimanager.sh -c <RETURN>
Note
- When the PATH variable has been configured to enable execution of UpdateAdvisor (Middleware) commands from a user-defined
location, uninstallation automatically deletes any patches that have been applied to Resource Orchestrator, so there is no need to
return it to its pre-patch state beforehand.
When the PATH variable has not been configured, return Resource Orchestrator to its pre-patch state before performing uninstallation.
- If a manager is uninstalled and then reinstalled without agents being deleted, it will not be able to communicate with agents used
before uninstallation.
In this case, certificates indicating that it is the same manager are necessary.
After installation, manager certificates are stored in the following directory:
/etc/opt/FJSVrcvmr/opt/FJSVssmgr/current/certificate
- When uninstalling the manager, the certificates are backed up in the following directory.
When reinstalling a manager and using the same certificates, copy the backed-up certificates to the directory above.
/var/tmp/back/site/certificate
- If the certificates backed up on uninstallation are not necessary, delete them manually.
When operating managers in cluster environments, back up the certificates as indicated in the preparations for uninstallation.
When reinstalling a manager in a cluster environment and using the same certificates, copy the backed up certificates from the primary
node to the above folders.
- When a password has been saved using the rcxlogin command, the password remains after uninstallation in the following directory
for each OS user account that executed the rcxlogin commands. Before re-installing the manager, delete it.
/Directory_set_for_each_user's_HOME_environment_variable/.rcx/
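As a hedged illustration of this cleanup, the leftover .rcx directories can be located and removed with a small script. The helper below is a sketch, not a product command; the base directory is passed in (typically /home on Linux, with /root checked separately), and the demo runs against a scratch directory.

```shell
# Sketch (not a product command): remove leftover .rcx directories that
# rcxlogin may have left in user home directories before re-installation.
cleanup_rcx() {
    base="$1"                       # directory containing the home directories
    for home in "$base"/*; do
        [ -d "$home/.rcx" ] || continue
        echo "removing $home/.rcx"
        rm -rf "$home/.rcx"
    done
}

# Example against a scratch directory; on a real system the base would
# typically be /home (plus /root, checked separately).
demo=$(mktemp -d)
mkdir -p "$demo/alice/.rcx" "$demo/bob"
cleanup_rcx "$demo"
```

Review the directories before deleting them on a real system, since .rcx may also hold other per-user settings.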
3.2 Agent Uninstallation
The uninstallation of agents is explained in the following sections.
3.2.1 Uninstallation [Windows/Hyper-V]
This section explains the procedure for uninstallation of agents.
1. Log on to Windows as the administrator.
Log on to the system from which the agent is to be uninstalled. Log on as a user belonging to the local Administrators group.
2. Delete agents.
Open "Add or Remove Programs" from the Windows Control Panel, select "ServerView Resource Orchestrator Agent", and delete it
from the [Add or Remove Programs] window.
Information
For Windows Server 2008, select "Programs and Features" from the Windows Control Panel.
3. The [Confirm Uninstall] dialog will be displayed.
Click <OK>.
Information
The services of Resource Orchestrator are automatically stopped and deleted.
4. When uninstallation is completed, the confirmation window will be displayed.
Click <Finish>.
Note
- Any updates that have been applied to Resource Orchestrator will be deleted during uninstallation.
- When uninstallation is stopped due to errors (system errors or processing errors such as system failure) or cancellation by users,
resolve the causes of any problems, and then attempt uninstallation again.
If uninstallation fails even when repeated, the executable program used for uninstallation may be corrupted.
In this case, set the first Resource Orchestrator DVD-ROM, open the command prompt and execute the following command:
>"DVD-ROM_drive\DISK1\Agent\Windows\agent\win\setup.exe" /z"UNINSTALL" <RETURN>
Open "Add or Remove Programs" from the Windows Control Panel, and if "ServerView Resource Orchestrator Agent" is not
displayed on the [Add or Remove Programs] window, delete any remaining folders manually.
Information
For Windows Server 2008, select "Programs and Features" from the Windows Control Panel.
3.2.2 Uninstallation [Linux/VMware/Xen/KVM/Oracle VM]
This section explains the procedure for uninstallation of agents.
1. Log in to the system as the OS administrator (root).
Log in to the managed server from which Resource Orchestrator will be uninstalled, using root.
2. Execute the rcxagtuninstall command.
Executing this command performs uninstallation, and automatically deletes the packages of Resource Orchestrator.
# /opt/FJSVrcxat/bin/rcxagtuninstall <RETURN>
When uninstallation is completed successfully, the following message will be displayed.
INFO : ServerView Resource Orchestrator Agent was uninstalled successfully.
If uninstallation fails, the following message will be displayed.
ERROR : Uninstalling package_name was failed.
Information
When the uninstaller of Resource Orchestrator is started, its services are stopped.
3. If uninstallation fails, use the rpm command to remove the packages given in the message, and start the procedure again from step 1.
# rpm -e package_name <RETURN>
Note
- When the PATH variable has been configured to enable execution of UpdateAdvisor (Middleware) commands from a user-defined
location, uninstallation automatically deletes any patches that have been applied to Resource Orchestrator, so there is no need to
return it to its pre-patch state beforehand.
When the PATH variable has not been configured, return Resource Orchestrator to its pre-patch state before performing uninstallation.
- After uninstallation, the installation directories and files below may remain. In that case, delete any remaining directories and files
manually.
Directories
- /opt/FJSVnrmp
- /opt/FJSVrcxat
- /opt/FJSVrcximg
- /opt/FJSVrcxkvm
- /opt/FJSVssagt
- /opt/FJSVssqc
- /opt/systemcastwizard
- /etc/opt/FJSVnrmp
- /etc/opt/FJSVrcxat
- /etc/opt/FJSVssagt
- /etc/opt/FJSVssqc
- /var/opt/systemcastwizard
- /var/opt/FJSVnrmp
- /var/opt/FJSVrcxat
- /var/opt/FJSVssagt
- /var/opt/FJSVssqc
Files
- /boot/clcomp2.dat
- /etc/init.d/scwagent
- /etc/scwagent.conf
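The check for these remnants can be scripted. The helper below is a sketch: it only reports paths that still exist, leaving the actual deletion to the administrator after the output has been reviewed.

```shell
# Sketch: report any of the leftover directories/files listed above that
# still exist after uninstallation. It prints the paths only; delete them
# manually after reviewing the output.
report_leftovers() {
    for path in "$@"; do
        if [ -e "$path" ]; then
            echo "remaining: $path"
        fi
    done
}

# Example with a subset of the paths from the list above; on a real
# system, pass the full list.
report_leftovers /opt/FJSVrcxat /etc/opt/FJSVrcxat /var/opt/FJSVrcxat \
                 /boot/clcomp2.dat /etc/init.d/scwagent
```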
3.2.3 Uninstallation [Solaris]
This section explains the procedure for uninstallation of agents.
1. Log in to the system as the OS administrator (root).
Log in to the managed server from which Resource Orchestrator will be uninstalled, using root.
2. Execute the rcxagtuninstall command.
Executing this command performs uninstallation, and automatically deletes the packages of Resource Orchestrator.
# /opt/FJSVrcvat/bin/rcxagtuninstall <RETURN>
When uninstallation is completed successfully, the following message will be displayed.
INFO : ServerView Resource Orchestrator Agent was uninstalled successfully.
If uninstallation fails, the following message will be displayed.
ERROR : Uninstalling package_name was failed.
Information
When the uninstaller of Resource Orchestrator is started, its services are stopped.
If uninstallation fails, use the pkgrm command to remove the packages given in the message, and start the procedure again from step 1.
# pkgrm package_name <RETURN>
Note
- When the PATH variable has been configured to enable execution of UpdateAdvisor (Middleware) commands from a user-defined
location, uninstallation automatically deletes any patches that have been applied to Resource Orchestrator, so there is no need to
return it to its pre-patch state beforehand.
When the PATH variable has not been configured, return Resource Orchestrator to its pre-patch state before performing uninstallation.
- After uninstallation, the installation directories below may remain. In that case, delete any remaining directories manually.
- /opt/FJSVrcxat
- /etc/opt/FJSVrcxat
- /var/opt/FJSVrcxat
3.3 HBA address rename setup service Uninstallation
This section explains uninstallation of the HBA address rename setup service.
3.3.1 Uninstallation [Windows]
The procedure for uninstallation of the HBA address rename setup service is given below.
1. Log on to Windows as the administrator.
Log on to the system from which the HBA address rename setup service is to be uninstalled. Log on as a user belonging to the local
Administrators group.
2. Delete the HBA address rename setup service.
Open "Add or Remove Programs" from the Windows Control Panel, and select "ServerView Resource Orchestrator HBA address
rename setup service" and delete it from the [Add or Remove Programs] window.
Information
For Windows Server 2008 or Windows Vista, select "Programs and Features" from the Windows Control Panel.
3. The [Confirm Uninstall] dialog will be displayed.
Click <OK>.
Information
The services of Resource Orchestrator are automatically stopped and deleted.
4. When uninstallation is completed, the confirmation window will be displayed.
Click <Finish>.
Note
- Any updates that have been applied to Resource Orchestrator will be deleted during uninstallation.
- When uninstallation is stopped due to errors (system errors or processing errors such as system failure) or cancellation by users, resolve
the causes of any problems, and then attempt uninstallation again.
If uninstallation fails even when repeated, the executable program used for uninstallation may be corrupted.
In this case, set the first Resource Orchestrator DVD-ROM, open the command prompt and execute the following command:
>"DVD-ROM_drive\DISK1\HBA\Windows\hbaar\win\setup.exe" /z"UNINSTALL" <RETURN>
Open "Add or Remove Programs" from the Windows Control Panel, and if "ServerView Resource Orchestrator HBA address rename
setup service" is not displayed on the [Add or Remove Programs] window, delete any remaining folders manually.
Information
For Windows Server 2008 or Windows Vista, select "Programs and Features" from the Windows Control Panel.
3.3.2 Uninstallation [Linux]
The procedure for uninstallation of the HBA address rename setup service is given below.
1. Log in to the system as the OS administrator (root).
Log in to the managed server from which Resource Orchestrator will be uninstalled, using root.
2. Execute the rcxhbauninstall command.
# /opt/FJSVrcvhb/bin/rcxhbauninstall <RETURN>
Starting the uninstaller displays the following message, which explains that the Resource Orchestrator services will be stopped
automatically before uninstallation.
Any Resource Orchestrator service that is still running will be stopped and removed.
Do you want to continue ? [y,n,?,q]
To stop the services and uninstall Resource Orchestrator, enter "y"; to discontinue the uninstallation, enter "n".
If "n" or "q" is entered, the uninstallation is discontinued.
If "?" is entered, an explanation of the entry method will be displayed.
3. Enter "y" and the uninstallation will start.
When uninstallation is completed successfully, the following message will be displayed.
INFO : ServerView Resource Orchestrator HBA address rename setup service was uninstalled successfully.
If uninstallation fails, the following message will be displayed.
ERROR : Uninstalling "package_name" was failed
4. If uninstallation fails, use the rpm command to remove the packages given in the message, and start the process from 1. again.
# rpm -e package_name <RETURN>
Note
- When the PATH variable has been configured to enable execution of UpdateAdvisor (Middleware) commands from a user-defined
location, uninstallation automatically deletes any patches that have been applied to Resource Orchestrator, so there is no need to
return it to its pre-patch state beforehand.
When the PATH variable has not been configured, return Resource Orchestrator to its pre-patch state before performing uninstallation.
- After uninstallation, the installation directories below may remain. In that case, delete any remaining directories manually.
- /opt/FJSVrcvhb
- /opt/FJSVscw-common
- /opt/FJSVscw-tftpsv
- /etc/opt/FJSVrcvhb
- /etc/opt/FJSVscw-common
- /etc/opt/FJSVscw-tftpsv
- /var/opt/FJSVrcvhb
- /var/opt/FJSVscw-common
- /var/opt/FJSVscw-tftpsv
3.4 Uninstall (middleware) Uninstallation
The uninstallation of "Uninstall (middleware)" is explained in this section.
Note
- When uninstalling Resource Orchestrator, use "Uninstall (middleware)" for the uninstallation.
- "Uninstall (middleware)" also manages product information on Fujitsu middleware other than Resource Orchestrator. Do not uninstall
"Uninstall (middleware)" unless it is necessary for some operational reason.
In the event of accidental uninstallation, reinstall it following the procedure below.
[Windows]
1. Log on to the installation machine as a user belonging to the local Administrators group.
2. Set the first Resource Orchestrator DVD-ROM.
3. Execute the installation command.
>DVD-ROM_drive\DISK1\CIR\cirinst.exe <RETURN>
[Linux]
1. Log in to the system as the superuser (root).
2. Set the first Resource Orchestrator DVD-ROM.
3. Execute the following command to mount the DVD-ROM.
If the auto-mounting daemon (autofs) is used for DVD-ROM auto-mounting, the installer fails to start because of the "noexec"
mount option. In that case, mount the DVD-ROM manually as follows.
# mount /dev/hdc DVD-ROM_mount_point <RETURN>
# cd DVD-ROM_mount_point <RETURN>
4. Execute the installation command.
# ./DISK1/CIR/cirinst.sh <RETURN>
Information
To uninstall "Uninstall (middleware)", follow the procedure below.
1. Start "Uninstall (middleware)" and check that other Fujitsu middleware products do not remain.
The starting method is as follows.
[Windows]
Select [Start]-[All Programs]-[Fujitsu]-[Uninstall (middleware)].
[Linux]
# /opt/FJSVcir/bin/cimanager.sh [-c] <RETURN>
To start it in command mode, specify the -c option. If the -c option is not specified, it will start in GUI mode if the GUI has been
configured, or start in command mode otherwise.
Note
If the command path contains a blank space, it will fail to start. Do not specify a directory with a name containing blank spaces.
2. After checking that no Fujitsu middleware products remain installed, execute the following uninstallation command.
[Windows]
>%SystemDrive%\FujitsuF4CR\bin\cirremove.exe <RETURN>
[Linux]
# /opt/FJSVcir/bin/cirremove.sh <RETURN>
3. When "This software is a common tool of Fujitsu products. Are you sure you want to remove it? [y/n]:" is displayed, enter "y" to
continue. Uninstallation will be completed in seconds.
4. After uninstallation is complete, delete the following directory and contained files.
[Windows]
%SystemDrive%\FujitsuF4CR
[Linux]
/var/opt/FJSVcir
Chapter 4 Upgrading from Earlier Versions
This chapter explains the upgrade procedures for the following environments.
- How to upgrade environments configured using ServerView Resource Coordinator VE (hereinafter RCVE) to this version of
ServerView Resource Orchestrator Virtual Edition (hereinafter ROR VE) environments
- How to upgrade environments configured using this version of ServerView Resource Orchestrator Virtual Edition (hereinafter ROR
VE) to ServerView Resource Orchestrator Cloud Edition (hereinafter ROR CE) environments
4.1 Overview
This section explains an overview of the following upgrades.
- Upgrade from RCVE to ROR VE
- Upgrade from ROR VE to ROR CE
Upgrade from RCVE to ROR VE
Perform the upgrade in the following order:
1. Upgrade the manager
2. Upgrade the agents
3. Upgrade the clients and HBA address rename setup service
Upgrade can be performed using upgrade installation.
Upgrade installation uses the Resource Orchestrator installer to automatically upgrade previous versions to the current version.
Upgrade from ROR VE to ROR CE
To upgrade from ROR VE to ROR CE, the manager, agents, clients, and HBA address rename setup service must all be upgraded. In
addition, Cloud Edition licenses are required for each agent used.
For license registration, refer to "License Setup" in "7.1 Login" of the "Setup Guide VE".
Note
- When using Virtual Edition, if SPARC Enterprise series servers are registered as managed servers, upgrade to Cloud Edition is not
possible.
- The combinations of managers and agents, clients, and HBA address rename setup service supported by Resource Orchestrator are
as shown below.
Old Version           New Version   Upgradability
RCVE                  ROR VE        Yes
RCVE                  ROR CE        Yes
Older version of ROR  ROR CE        Yes
Older version of ROR  ROR VE        -
ROR VE                ROR CE        Yes

Yes: Supported
-: Not supported
4.2 Manager
This section explains the upgrading of managers.
When operating managers in clusters, transfer using upgrade installation cannot be performed. Perform the upgrade manually.
Transferred Data
The following manager data is transferred:
- Resource Orchestrator setting information (Setting information for the environment of the earlier version)
- Certificates
- System images and cloning images (Files in the image file storage folder)
With transfer using upgrade installation, the following data is also transferred:
- Port number settings
- Power consumption data
- Batch files and script files for event-related functions
Data which is transferred during upgrade installation is backed up in the following folder. Ensure that the folder is not deleted until after
the upgrade is complete.
[Windows]
Drive_name\Program Files\RCVE-upgradedata
[Linux]
/var/opt/RCVE-upgradedata
/var/opt/backupscwdir
Preparations
Perform the following preparations and checks before upgrading:
- Check that the environment is one in which managers of this version can be operated.
For operating environments, refer to "Chapter 1 Operational Environment".
Take particular care regarding the memory capacity.
- To enable recovery in case there is unexpected trouble during the upgrade process, please back up the admin server. For how to back
up the admin server, refer to "Appendix B Admin Server Backup and Restore" of the "ServerView Resource Coordinator VE Operation
Guide".
- As well as backing up the admin server, also back up (copy) the following information:
- Port number settings
[Windows]
Drive_name\WINDOWS\system32\drivers\etc\services
[Linux]
/etc/services
- Batch files and script files for event-related functions
[Windows]
Installation_folder\Manager\etc\trapop.bat
[Linux]
/etc/opt/FJSVrcvmr/trapop.sh
- Definition File
[Windows]
Installation_folder\Manager\etc\customize_data
[Linux]
/etc/opt/FJSVrcvmr/customize_data
- When using GLS to perform redundancy of NICs for the admin LAN of managed servers, activate the admin LAN on the primary
interface.
- To confirm that the upgrade is complete, if there are registered VM hosts on which there are VM guests, check from the ROR console
or earlier versions of the Resource Coordinator VE console that all of the VM guests are displayed and record their information before
upgrading.
- When server switchover settings have been performed, it is not possible to upgrade when spare servers have been switched to. Restore
any such servers before starting the upgrade. For how to restore, refer to the information about Server Switchover in the "ServerView
Resource Coordinator VE Operation Guide".
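On Linux, the port settings, event-related script, and definition files listed in the preparations above can be gathered with a short script. This is a sketch, not a product command: the destination is an example, and sources that do not exist (for instance a never-customized trapop.sh) are simply skipped.

```shell
# Sketch: copy the Linux backup items listed above into one backup
# directory. The destination is an example; missing sources are skipped.
backup_upgrade_assets() {
    dest="$1"
    mkdir -p "$dest"
    for src in /etc/services \
               /etc/opt/FJSVrcvmr/trapop.sh \
               /etc/opt/FJSVrcvmr/customize_data; do
        if [ -e "$src" ]; then
            cp -pr "$src" "$dest/"
        else
            echo "skipped (not present): $src"
        fi
    done
}

# Example destination; choose a location outside the installation directory.
dest=$(mktemp -d)/ror_upgrade_backup
backup_upgrade_assets "$dest"
```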
Upgrade using Upgrade Installation
When upgrading to this version from V2.1.1, upgrade can be performed using upgrade installation. Perform the upgrade using the following
procedure:
Note
- Do not change the hardware settings or configurations of managers, agents, or any other devices until upgrading is completed.
- When there are system images and cloning images, the same amount of disk space as necessary for the system images and cloning
images is required on the admin server in order to temporarily copy the images during the upgrade. Before performing the upgrade,
check the available disk space.
- When performing an upgrade installation, please do not access the installation folder of the earlier version or any of the files and
folders inside it using the command prompt, Explorer, or an editor.
While it is being accessed, attempts to perform upgrade installation will fail.
If upgrade installation fails, stop accessing the files or folders and then perform the upgrade installation again.
- In the event that upgrade installation fails, please resolve the cause of the failure and perform upgrade installation again. If the problem
cannot be resolved and upgrade installation fails again, please contact Fujitsu technical staff.
- When stopping the upgrade and restoring the earlier version, please do so by restoring the information that was backed up during the
preparations.
When performing restoration and a manager of this version or an earlier version has been installed, please uninstall it.
After restoration, please delete the folder containing the backed up assets. For the restoration method, refer to the "ServerView
Resource Coordinator VE Operation Guide".
- Upgrade installation will delete patches that were applied to the earlier version.
[Linux]
When the PATH variable has not been configured to enable execution of UpdateAdvisor (Middleware) commands from a user-defined
location, performing upgrade installation will delete any patches that have been applied to the old version, but it will not delete product
information or component information. Refer to the UpdateAdvisor (Middleware) manual and delete software and component
information from the applied modification checklist.
- When operating managers in clusters, transfer using upgrade installation cannot be performed. Perform the upgrade manually.
1. Upgrade Installation
[Windows]
Refer to "2.1.2 Installation [Windows]", and execute the Resource Orchestrator installer.
The Resource Orchestrator setup window will be displayed. Check the contents of the license agreement, etc. and then click <Next>.
The settings to be inherited from the earlier version will be displayed. Check them and click <Next>. Upgrade installation will
begin.
[Linux]
Refer to "2.1.3 Installation [Linux]", and execute Resource Orchestrator installer.
Check the contents of the license agreement, etc. and then enter "y".
The setting information that will be taken over from the earlier version will be displayed. Please check it and then enter "y". Upgrade
installation will begin.
2. Restarting after Upgrade Installation is Finished [Windows]
After upgrade installation is finished, restart the system in order to complete the upgrade.
3. Restore the directory backed up before upgrading to the following directory.
[Windows]
Installation_folder\Manager\etc\customize_data
[Linux]
/etc/opt/FJSVrcvmr/customize_data
4. When upgrading from a ROR V2.2.0 - V2.3.0 manager, use the following procedure to add parameters to the files below.
If a file does not exist in the environment, this step is not necessary for that file.
[Windows]
Installation_folder\Manager\etc\customize_data\l_server.rcxprop
Installation_folder\Manager\etc\customize_data\vnetwork_ibp.rcxprop
Installation_folder\Manager\rails\config\rcx\vm_guest_params.rb
[Linux]
/etc/opt/FJSVrcvmr/customize_data/l_server.rcxprop
/etc/opt/FJSVrcvmr/customize_data/vnetwork_ibp.rcxprop
/opt/FJSVrcvmr/rails/config/rcx/vm_guest_params.rb
It is unnecessary to perform procedures d. and e. when upgrading from ROR V2.3.0.
a. Stop the manager.
b. Refer to the file below and write down the value of the following parameter.
[Windows]
Installation_folder\ROR_upgradedata\Manager\rails\config\rcx\vm_guest_params.rb
[Linux]
/var/tmp/ROR_upgradedata/opt_FJSVrcvmr/rails/config/rcx/vm_guest_params.rb
[Parameter]
SHUTDOWN_TIMEOUT = value
c. Refer to the file below and find the value of the following parameter.
If the value is not the same as the one written down in procedure b., correct it to match.
[Windows]
Installation_folder\Manager\rails\config\rcx\vm_guest_params.rb
[Linux]
/opt/FJSVrcvmr/rails/config/rcx/vm_guest_params.rb
[Parameter]
SHUTDOWN_TIMEOUT = value
d. Add the following parameters to l_server.rcxprop:
allocate_after_create=true
auto_preserved=false
e. When using IBP configuration mode, add the following parameter to vnetwork_ibp.rcxprop:
support_ibp_mode = true
f. Start the manager.
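For the Linux paths, steps b. through d. above can be sketched as a small script. The helper names are illustrative, not product commands; on a real system the arguments would be the backed-up and new vm_guest_params.rb files and the l_server.rcxprop path listed above, while the demo below runs against scratch copies.

```shell
# Sketch of steps b.-d. for the Linux paths above. Helper names are
# illustrative; pass the backed-up vm_guest_params.rb, the new one,
# and the l_server.rcxprop file on a real system.

# b./c.: carry the SHUTDOWN_TIMEOUT value of the earlier version over
# into the new file when the two values differ.
sync_shutdown_timeout() {
    old_val=$(sed -n 's/^SHUTDOWN_TIMEOUT *= *//p' "$1")
    new_val=$(sed -n 's/^SHUTDOWN_TIMEOUT *= *//p' "$2")
    if [ "$old_val" != "$new_val" ]; then
        sed -i "s/^SHUTDOWN_TIMEOUT *=.*/SHUTDOWN_TIMEOUT = $old_val/" "$2"
    fi
}

# d.: append the two parameters to l_server.rcxprop.
add_lserver_params() {
    printf '%s\n' 'allocate_after_create=true' 'auto_preserved=false' >> "$1"
}

# Demo on scratch copies rather than the real installation:
tmp=$(mktemp -d)
echo 'SHUTDOWN_TIMEOUT = 300' > "$tmp/old_vm_guest_params.rb"
echo 'SHUTDOWN_TIMEOUT = 120' > "$tmp/new_vm_guest_params.rb"
sync_shutdown_timeout "$tmp/old_vm_guest_params.rb" "$tmp/new_vm_guest_params.rb"
: > "$tmp/l_server.rcxprop"
add_lserver_params "$tmp/l_server.rcxprop"
```

Remember that the manager must be stopped (step a.) before editing the real files and started again (step f.) afterwards.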
Note
When backing up system images or collecting cloning images without upgrading agents, either reboot managed servers after the manager
upgrade is completed, or restart the related services.
For restarting managed servers and the related services, refer to "5.2 Agent" of the "ServerView Resource Coordinator VE Setup Guide".
Manual Upgrade
Upgrading from RCVE V13.2, RCVE V13.3, or RCVE managers in clustered operation to ROR VE is performed by exporting and
importing system configuration files for pre-configuration.
Perform the upgrade using the following procedure:
See
For pre-configuration, refer to the following manuals:
- "Systemwalker Resource Coordinator Virtual server Edition Setup Guide"
- "Chapter 7 Pre-configuration"
- "Appendix D Format of CSV System Configuration Files"
- "User's Guide VE"
- "Chapter 6 Pre-configuration"
- "Appendix A Format of CSV System Configuration Files"
Note
- When upgrading from V13.2, do not uninstall V13.2 clients until step 2. has been completed.
- Do not change the hardware settings or configurations of managers, agents, or any other devices until upgrading is completed.
- When upgrading from managers operating in clusters, replace the parts referring to the "Systemwalker Resource Coordinator Virtual
server Edition manuals" with the earlier versions of the "ServerView Resource Coordinator VE manuals" in the following procedure.
1. Set Maintenance Mode
Use the Resource Coordinator console of the earlier version or the ROR console to place all managed servers into maintenance
mode.
2. System Configuration File Export
Use the pre-configuration function of the earlier version and export the system configuration file in CSV format. During the export,
do not perform any other operations with Resource Orchestrator.
For the export method, refer to the "Systemwalker Resource Coordinator Virtual server Edition Setup Guide".
3. Back up (copy) Assets to Transfer
a. Perform backup (copying) of the certificates of the earlier version.
Back up (copy) the following folders and directories:
[Windows]
For V13.2 and V13.3
Installation_folder\Site Manager\etc\opt\FJSVssmgr\current\certificate
Installation_folder\Domain Manager\etc\opt\FJSVrcxdm\certificate
For V2.1.0 or later
Installation_folder\Manager\etc\opt\FJSVssmgr\current\certificate
Installation_folder\Manager\etc\opt\FJSVrcxdm\certificate
Installation_folder\Manager\sys\apache\conf\ssl.crt
Installation_folder\Manager\sys\apache\conf\ssl.key
[Linux]
/etc/opt/FJSVrcvmr/opt/FJSVssmgr/current/certificate
/etc/opt/FJSVrcvmr/sys/apache/conf/ssl.crt
/etc/opt/FJSVrcvmr/sys/apache/conf/ssl.key
b. Back up (copy) the folder containing the system images and cloning images of the earlier version to a location other than the
installation folder and the image file storage folder.
When using the default image file storage folder, back up (copy) the following folder or directory:
[Windows]
Installation_folder\ScwPro\depot\Cloneimg
[Linux]
/var/opt/FJSVscw-deploysv/depot/CLONEIMG
When using a folder or directory other than the default, back up (copy) the "Cloneimg" folder or "CLONEIMG" directory
used.
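For the Linux case, step a. and step b. can be combined into one script. This is a sketch under the assumption that the default paths above are in use; the helper name and destination are examples, and on clustered managers it would be run on the primary node with the shared disk mounted.

```shell
# Sketch of step 3 on Linux: back up the certificates and the cloning
# image directory to a destination outside the installation and image
# file storage directories. Default paths from the manual; missing
# sources are reported and skipped.
backup_transfer_assets() {
    dest="$1"
    mkdir -p "$dest"
    for src in /etc/opt/FJSVrcvmr/opt/FJSVssmgr/current/certificate \
               /etc/opt/FJSVrcvmr/sys/apache/conf/ssl.crt \
               /etc/opt/FJSVrcvmr/sys/apache/conf/ssl.key \
               /var/opt/FJSVscw-deploysv/depot/CLONEIMG; do
        if [ -e "$src" ]; then
            cp -pr "$src" "$dest/"
        else
            echo "skipped (not present): $src"
        fi
    done
}

dest=$(mktemp -d)/rcve_transfer_backup    # example destination
backup_transfer_assets "$dest"
```

Before running it against a real installation, check the available disk space as the note below advises, since CLONEIMG can be large.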
Note
- When operating managers in clusters, the above folders or directories are stored on the shared disk. Check if files, folders, or
directories stored in the above locations are correctly backed up (copied).
The backup (copy) can be stored in a folder or directory on the shared disk other than "RCoordinator", which is created
during setup of the manager cluster service.
- When operating managers in clusters, backing up (copying) folders or directories should be performed on the primary node.
- Before making a backup (copy) of system images and cloning images, check the available disk space. For the disk space
necessary for system images and cloning images, refer to the "Systemwalker Resource Coordinator Virtual server Edition
Installation Guide". When there is no storage folder for system images and cloning images, this step is not necessary.
4. Uninstallation of earlier version managers
Refer to the "Systemwalker Resource Coordinator Virtual server Edition Installation Guide" for the method for uninstalling the
manager of the earlier version.
When operating managers in clusters, refer to "B.4 Releasing Configuration" in the "ServerView Resource Coordinator VE
Installation Guide" of the earlier version, and uninstall the cluster services and the manager of the earlier versions.
Note
- Do not perform "Delete servers" as described in Preparations of the "Systemwalker Resource Coordinator Virtual server Edition
Installation Guide". When managed servers using HBA address rename have been deleted, it is necessary to reboot managed
servers after upgrading of the manager is completed.
- User account information is deleted when managers are uninstalled. Refer to step 7. and perform reconfiguration from the ROR
console.
- In environments where there are V13.2 managers and clients, uninstall V13.2 clients only after uninstalling the manager of the
earlier version.
5. Installation of Managers of This Version
Install managers of this version.
For installation, refer to "2.1 Manager Installation".
When operating managers in cluster environments, refer to "Appendix B Manager Cluster Operation Settings and Deletion", install
the manager, and then configure the cluster services.
Note
When installing managers, specify the same admin LAN as used for the earlier version on the [Admin LAN Selection] window.
After installing managers, use the following procedure to restore the certificates and image file storage folder backed up (copied)
in step 3.
a. Stop the manager.
b. Return the image file storage folder backup (copy) to the folder specified during installation.
When using the default image file storage folder, restore to the following folder or directory:
[Windows]
Installation_folder\ScwPro\depot\Cloneimg
[Linux]
/var/opt/FJSVscw-deploysv/depot/CLONEIMG
When using a folder other than the default, restore to the new folder.
When the image file storage folder was not backed up, this step is not necessary.
c. Restore the backed up (copied) certificates to the manager installation folder.
Restore to the following folder or directory:
[Windows]
Installation_folder\Manager\etc\opt\FJSVssmgr\current\certificate
Installation_folder\Manager\etc\opt\FJSVrcxdm\certificate
Installation_folder\Manager\sys\apache\conf\ssl.crt
Installation_folder\Manager\sys\apache\conf\ssl.key
[Linux]
/etc/opt/FJSVrcvmr/opt/FJSVssmgr/current/certificate
/etc/opt/FJSVrcvmr/sys/apache/conf/ssl.crt
/etc/opt/FJSVrcvmr/sys/apache/conf/ssl.key
d. Restore the information that was backed up during preparations.
- Port number settings
Change the port numbers based on the information that was backed up during preparations.
For details of how to change port numbers, refer to "3.1.2 Changing Port Numbers" of the "User's Guide VE".
When the port number has not been changed from the default, this step is not necessary.
- Batch files and script files for event-related functions
Restore them by replacing the following file.
[Windows]
Installation_folder\Manager\etc\trapop.bat
[Linux]
/etc/opt/FJSVrcvmr/trapop.sh
e. Start the manager.
For the methods for starting and stopping managers, refer to "7.2 Starting and Stopping the Manager" of the "Setup Guide
VE".
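Steps b. and c. above can be sketched as shell commands for the Linux default paths. This is a minimal sketch, not the product's own tooling: the ROOT variable simulates the filesystem root so the sketch can run anywhere (on a real manager set ROOT to empty), and BACKUP stands for whichever folder you copied the files to in step 3.

```shell
#!/bin/sh
# Sketch of steps b. and c. on Linux, using the default paths shown above.
# ROOT simulates the filesystem root so this can run anywhere; on a real
# manager set ROOT="" and BACKUP to the folder used for the step 3 backup.
ROOT=$(mktemp -d)
BACKUP=$(mktemp -d)

# Simulate the backup taken in step 3 (dummy content, for illustration only).
mkdir -p "$BACKUP/CLONEIMG" "$BACKUP/certificate"
echo "dummy image" > "$BACKUP/CLONEIMG/img001"
echo "dummy cert"  > "$BACKUP/certificate/server.crt"

# b. Return the image file storage folder backup to the default directory.
mkdir -p "$ROOT/var/opt/FJSVscw-deploysv/depot"
cp -rp "$BACKUP/CLONEIMG" "$ROOT/var/opt/FJSVscw-deploysv/depot/"

# c. Restore the backed up certificates.
mkdir -p "$ROOT/etc/opt/FJSVrcvmr/opt/FJSVssmgr/current"
cp -rp "$BACKUP/certificate" "$ROOT/etc/opt/FJSVrcvmr/opt/FJSVssmgr/current/"

ls "$ROOT/var/opt/FJSVscw-deploysv/depot/CLONEIMG"
```

The `-p` option preserves permissions and timestamps, which matters for certificates and deployment images.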
Note
Take caution regarding the following when operating managers in clusters.
- Restoring the image file storage folder and certificates should be performed while the shared disks are mounted on the primary node.
- Restoration of batch files and script files for event-related functions should be performed on both nodes.
6. User Account Settings
Using the information recorded during preparations, perform setting of user accounts using the ROR console.
For details, refer to "Chapter 4 User Accounts" of the "Operation Guide VE".
7. Edit System Configuration Files
Based on the environment created for the earlier version, edit the system configuration file (CSV format) exported in step 2.
Change the operation column of all resources to "new".
When upgrading from V13.3 or later, do not change the operation column of resources contained in the following sections to "new":
- ServerAgent
- ServerVMHost
- Memo
For how to edit system configuration files (CSV format), refer to the "Systemwalker Resource Coordinator Virtual server Edition
Setup Guide".
Note
When the spare server information is configured, use the following procedure to delete the spare server information.
- After upgrade from V13.2
Set hyphens ("-") for all parameters ("Spare server name", "VLAN switch", and "Automatic switch") of the "Server switch
settings" of "(3)Server Blade Information".
- In cases other than above
In the "SpareServer" section, set "operation" as a hyphen ("-").
8. Creating an Environment of This Version
Import the system configuration file and create an environment of this version.
Use the following procedure to configure an environment of this version.
a. Import of the system configuration file
Import the edited system configuration file.
For the import method, refer to "6.2 Importing the System Configuration File" of the "User's Guide VE".
b. Agent Registration
Using the information recorded during preparations, perform agent registration using the ROR console. Perform agent
registration with the OS of the managed server booted.
For agent registration, refer to "8.3 Software Installation and Agent Registration" of the "Setup Guide VE".
After completing agent registration, use the ROR console to check that all physical OS's and VM hosts are displayed. When
there are VM hosts (with VM guests) registered, check that all VM guests are displayed.
c. Spare Server Information Settings
Using the information recorded during preparations, perform registration of spare server information using the ROR console.
For registration of spare server information, refer to "8.6 Server Switchover Settings" of the "User's Guide VE".
d. Registration of Labels, Comments, and Contact Information
When label, comment, and contact information has been registered, change the contents of the operation column of the system
configuration file (CSV format) that were changed to "new" in step 7. back to hyphens ("-"), and change the contents of the
operation column of resources contained in the [Memo] section to "new".
For how to edit system configuration files (CSV format), refer to the "Systemwalker Resource Coordinator Virtual server
Edition Setup Guide".
Import the edited system configuration file.
For the import method, refer to "6.2 Importing the System Configuration File" of the "User's Guide VE".
9. Set Maintenance Mode
Using the information recorded during preparation, return the managed servers that were placed into maintenance mode to
maintenance mode again.
For maintenance mode settings, refer to "Appendix B Maintenance Mode" of the "User's Guide VE".
Note
When using backup system images or collecting cloning images without upgrading agents, either reboot managed servers after the manager
upgrade is completed, or restart the related services.
For restarting the related services, refer to "5.2 Agent" of the "Systemwalker Resource Coordinator Virtual server Edition Setup Guide".
4.3 Agent
This section explains the upgrading of agents.
Upgrading of agents is not mandatory even when managers have been upgraded to this version. Perform upgrades if necessary.
Transferred Data
Before upgrading, note that the following agent resources are transferred:
- Definition files of the network parameter auto-configuration function (when the network parameter auto-configuration function is
being used)
[Windows/Hyper-V]
Installation_folder\Agent\etc\event_script folder
Installation_folder\Agent\etc\ipaddr.conf file
[Linux/VMware/Xen/KVM]
/etc/opt/FJSVrcxat/event_script directory
/etc/opt/FJSVnrmp/lan/ipaddr.conf file
/etc/FJSVrcx.conf file
[Solaris]
There is no resource to transfer.
Data and work files which are transferred during upgrade installation are stored in the following folder. Ensure that the folder is not deleted
until after the upgrade is complete.
[Windows/Hyper-V]
Drive_name\Program Files\RCVE-upgradedata
[Linux/VMware/Xen/KVM]
/var/opt/RCVE-upgradedata
[Solaris]
/var/opt/RCVE-upgradedata
Preparations
Perform the following preparations and checks before upgrading:
- Check that the environment is one in which agents of this version can be operated.
For operating environments, refer to "Chapter 1 Operational Environment".
- To enable recovery in case there is unexpected trouble during the upgrade process, back up the folders and files listed in "Transferred
Data" to a folder other than the agent installation folder.
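The precautionary backup above can be sketched as shell commands for the Linux/VMware/Xen/KVM paths listed under "Transferred Data". This is an illustrative sketch only: the ROOT variable simulates the filesystem root so it can run anywhere (on a real managed server set ROOT to empty), and BACKUP_DIR is an assumed destination outside the agent installation folder.

```shell
#!/bin/sh
# Sketch of the pre-upgrade backup on Linux/VMware/Xen/KVM, using the paths
# listed under "Transferred Data". ROOT simulates the filesystem root so the
# sketch can run anywhere; on a real managed server set ROOT="" and choose a
# BACKUP_DIR outside the agent installation folder.
ROOT=$(mktemp -d)
BACKUP_DIR=$(mktemp -d)

# Simulate the agent definition files (dummy content, for illustration only).
mkdir -p "$ROOT/etc/opt/FJSVrcxat/event_script" "$ROOT/etc/opt/FJSVnrmp/lan"
echo "dummy script" > "$ROOT/etc/opt/FJSVrcxat/event_script/sample.sh"
echo "dummy conf"   > "$ROOT/etc/opt/FJSVnrmp/lan/ipaddr.conf"
echo "dummy conf"   > "$ROOT/etc/FJSVrcx.conf"

# Back up (copy) the directory and files, preserving permissions and timestamps.
cp -rp "$ROOT/etc/opt/FJSVrcxat/event_script" "$BACKUP_DIR/"
cp -p  "$ROOT/etc/opt/FJSVnrmp/lan/ipaddr.conf" "$BACKUP_DIR/"
cp -p  "$ROOT/etc/FJSVrcx.conf" "$BACKUP_DIR/"

ls "$BACKUP_DIR"
```

The same copies in reverse restore the files if the upgrade has to be rolled back.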
Upgrade using Upgrade Installation
When upgrading from V2.1.1 agents in Windows environments, upgrade installation can be performed using the installer of this
version.
Use the following procedure to upgrade agents of earlier versions to agents of this version on all of the managed servers that are being
upgraded.
Note
- Do not perform any other operations with Resource Orchestrator until the upgrade is completed.
- Perform upgrading of agents after upgrading of managers is completed.
- In the event that upgrade installation fails, please resolve the cause of the failure and perform upgrade installation again. If the problem
cannot be resolved and upgrade installation fails again, please contact Fujitsu technical staff.
- When performing an upgrade installation, please do not access the installation folder of the earlier version or any of the files and
folders inside it using the command prompt, Explorer, or an editor.
While it is being accessed, attempts to perform upgrade installation will fail.
If upgrade installation fails, stop accessing the files or folders and then perform the upgrade installation again.
- When stopping the upgrade and restoring the earlier version, please re-install the agent of the earlier version and then replace the
information that was backed up during the preparations.
When performing restoration and an agent of this version or an earlier version has been installed, please uninstall it.
After restoration, please delete the folder containing the backed up assets.
- Upgrade installation will delete patches that were applied to the earlier version.
[Linux]
When the PATH variable has not been configured to enable execution of UpdateAdvisor (Middleware) commands from a user-defined
location, performing upgrade installation will delete any patches that have been applied to the old version, but it will not delete product
information or component information. Refer to the UpdateAdvisor (Middleware) manual and delete software and component
information from the applied modification checklist.
1. Upgrade Installation
[Windows/Hyper-V]
Refer to "2.2.2 Installation [Windows/Hyper-V]", and execute the Resource Orchestrator installer.
The Resource Orchestrator setup window will be displayed. Check the contents of the license agreement, etc. and then click <Yes>.
The setting information that will be taken over from the earlier version will be displayed. Please check it and then click <Install>.
Upgrade installation will begin.
[Linux/VMware/Xen/KVM]
Refer to "2.2.3 Installation [Linux/VMware/Xen/KVM/Oracle VM]", and execute the Resource Orchestrator installer.
Check the contents of the license agreement, etc. and then enter "y".
The setting information that will be taken over from the earlier version will be displayed. Please check it and then enter "y". Upgrade
installation will begin.
[Solaris]
Refer to "2.2.4 Installation [Solaris]", and execute the Resource Orchestrator installer.
Check the contents of the license agreement, etc. and then enter "y". Upgrade installation will begin.
Note
- After upgrading agents, use the ROR console to check if the upgraded managed servers are being displayed correctly.
- Updating of system images and cloning images is advised after agents have been upgraded.
- When upgrade installation is conducted on SUSE Linux Enterprise Server, upgrade installation will be conducted successfully
even if the following message is displayed.
insserv: Warning, current runlevel(s) of script `scwagent' overwrites defaults.
2. Set Maintenance Mode
When server switchover settings have been performed for managed servers, place them into maintenance mode.
When managed servers are set as spare servers, place the managed servers set as spare servers into maintenance mode.
3. Backing up (copying) of Network Parameter Auto-configuration Function Definition Files
When using the network parameter auto-configuration function during deployment of cloning images, back up (copy) the following
folders and files to a location other than the agent installation folder.
[Windows/Hyper-V]
Installation_folder\Agent\etc\event_script folder
Installation_folder\Agent\etc\ipaddr.conf file
[Linux/VMware/Xen/KVM]
/etc/opt/FJSVrcxat/event_script directory
/etc/opt/FJSVnrmp/lan/ipaddr.conf file
/etc/FJSVrcx.conf file
[Solaris]
There is no definition file to back up.
4. Restoration of Network Parameter Auto-configuration Function Definition Files
When using the network parameter auto-configuration function during deployment of cloning images, restore the definition files
that were backed up (copied) in step 3. When step 3. was not performed, this step is not necessary.
a. Stop agents.
For the method for stopping agents, refer to "7.3 Starting and Stopping the Agent" in the "Setup Guide VE".
b. Restore the definition file.
Restore the folders and files backed up (copied) in step 3. to the following locations in the installation folder of this version:
[Windows/Hyper-V]
Installation_folder\Agent\etc\event_script folder
Installation_folder\Agent\etc\ipaddr.conf file
[Linux/VMware/Xen/KVM]
/etc/opt/FJSVrcxat/event_script directory
/etc/opt/FJSVnrmp/lan/ipaddr.conf file
/etc/FJSVrcx.conf file
[Solaris]
There is no definition file to restore.
c. Start agents.
For the method for starting agents, refer to "7.3 Starting and Stopping the Agent" in the "Setup Guide VE".
5. Release Maintenance Mode
Release the maintenance mode of managed servers placed into maintenance mode in step 2.
Note
- After upgrading agents, use the ROR console to check if the upgraded managed servers are being displayed correctly.
- Updating of system images and cloning images is advised after agents have been upgraded.
Manual Upgrade
Use the following procedure to upgrade agents of earlier versions to agents of this version on all of the managed servers that are being
upgraded.
Note
- Do not perform any other operations with Resource Orchestrator until the upgrade is completed.
- Perform upgrading of agents after upgrading of managers is completed.
- When using the network parameter auto-configuration function during deployment of cloning images, specify the same installation
folder for agents of the earlier version and those of this version.
1. Set Maintenance Mode
- When server switchover settings have been performed for managed servers
Place them into maintenance mode.
- When managed servers are set as spare servers
Place the managed servers set as spare servers into maintenance mode.
2. Backing up (copying) of Network Parameter Auto-configuration Function Definition Files
When using the network parameter auto-configuration function during deployment of cloning images, back up (copy) the following
folders and files to a location other than the agent installation folder.
[Windows/Hyper-V]
Installation_folder\Agent\etc\event_script folder
Installation_folder\Agent\etc\ipaddr.conf file
[Linux/VMware/Xen/KVM]
/etc/opt/FJSVrcxat/event_script directory
/etc/opt/FJSVnrmp/lan/ipaddr.conf file
/etc/FJSVrcx.conf file
[Solaris]
There is no definition file to back up.
3. Uninstall RCVE Agents
Refer to the "ServerView Resource Coordinator VE Installation Guide", and uninstall the agents.
4. Install ROR VE Agents
Install ROR VE agents.
For installation, refer to "2.2 Agent Installation".
5. Restoration of Network Parameter Auto-configuration Function Definition Files
When using the network parameter auto-configuration function during deployment of cloning images, restore the definition files
that were backed up (copied) in step 2. When step 2. was not performed, this step is not necessary.
a. Stop agents.
For the method for stopping agents, refer to "7.3 Starting and Stopping the Agent" in the "Setup Guide VE".
b. Restore the definition file.
Restore the folders and files backed up (copied) in step 2. to the following locations in the installation folder of this version:
[Windows/Hyper-V]
Installation_folder\Agent\etc\event_script folder
Installation_folder\Agent\etc\ipaddr.conf file
[Linux/VMware/Xen/KVM]
/etc/opt/FJSVrcxat/event_script directory
/etc/opt/FJSVnrmp/lan/ipaddr.conf file
/etc/FJSVrcx.conf file
[Solaris]
There is no definition file to restore.
c. Start agents.
For the method for starting agents, refer to "7.3 Starting and Stopping the Agent" in the "Setup Guide VE".
6. Release Maintenance Mode
Release the maintenance mode of managed servers placed into maintenance mode in step 1.
Note
- After upgrading agents, use the ROR console to check that the upgraded managed servers are being displayed correctly.
- Updating of system images and cloning images is advised after agents have been upgraded.
Upgrading with ServerView Update Manager or ServerView Update Manager Express
Upgrade installation can be performed with ServerView Update Manager or ServerView Update Manager Express when upgrading to this
version from ROR V2.2.2 or later, or RCVE V2.2.2 or later.
Refer to the manual of ServerView Update Manager or ServerView Update Manager Express for the procedure.
Note
- To upgrade with ServerView Update Manager, the server to be upgraded must be managed by ServerView Operations Manager.
- OS's and hardware supported by ServerView Update Manager or ServerView Update Manager Express can be updated.
- For Linux or VMware, the installed ServerView Agents must be at least V5.01.08.
- Do not perform any other operations with Resource Orchestrator until the upgrade is completed.
- Perform upgrading of agents after upgrading of managers is completed.
- In the event that upgrade installation fails, please resolve the cause of the failure and perform upgrade installation again. If the problem
cannot be resolved and upgrade installation fails again, please contact Fujitsu technical staff.
- When performing an upgrade installation, please do not access the installation folder of the earlier version or any of the files and
folders inside it using the command prompt, Windows Explorer, or an editor. While it is being accessed, attempts to perform upgrade
installation will fail.
- If upgrade installation fails, stop accessing the files or folders and then perform the upgrade installation again.
- When stopping the upgrade and restoring the earlier version, please re-install the agent of the earlier version and then replace the
information that was backed up during the preparations.
When performing restoration and an agent of this version or an earlier version has been installed, please uninstall it.
After restoration, please delete the folder containing the backed up assets.
- Upgrade installation will delete patches that were applied to the earlier version.
[Linux]
- When the PATH variable has not been configured to enable execution of UpdateAdvisor (Middleware) commands from a user-defined
location, performing upgrade installation will delete any patches that have been applied to the old version, but it will not delete product
information or component information. Refer to the UpdateAdvisor (Middleware) manual and delete software and component
information from the applied modification checklist.
- After upgrading agents, use the ROR console to check that the upgraded managed servers are being displayed correctly.
- Updating of system images and cloning images is advised after agents have been upgraded.
4.4 Client
This section explains the upgrading of clients.
Web browsers are used as clients of this version.
When upgrading clients, it is necessary to clear the Web browser's cache (temporary internet files).
Use the following procedure to clear the Web browser's cache.
1. Select [Tools]-[Internet Options].
The [Internet Options] dialog is displayed.
2. Select the [General] tab on the [Internet Options] dialog.
3. Select <Delete> in the "Browsing history" area.
The [Delete Browsing History] dialog is displayed.
4. Check the "Temporary Internet files" checkbox in the [Delete Browsing History] dialog and unselect the other checkboxes.
5. Click <Delete>.
The Web browser's cache is cleared.
4.5 HBA address rename setup service
This section explains upgrading of the HBA address rename setup service.
Transferred Data
There is no HBA address rename setup service data to transfer when upgrading from an earlier version to this version.
Preparations
Perform the following preparations and checks before upgrading:
- Check that the environment is one in which agents of this version can be operated.
For operating environments, refer to "Chapter 1 Operational Environment".
Upgrade using Upgrade Installation
When upgrading to this version from V2.1.1, upgrade can be performed using upgrade installation. Perform the upgrade using the following
procedure:
Note
- Do not perform any other operations with Resource Orchestrator until the upgrade is completed.
- Perform upgrading of the HBA address rename setup service after upgrading of managers is completed.
- In the event that upgrade installation fails, please resolve the cause of the failure and perform upgrade installation again. If the problem
cannot be resolved and upgrade installation fails again, please contact Fujitsu technical staff.
- When performing an upgrade installation, please do not access the installation folder of the earlier version or any of the files and
folders inside it using the command prompt, Explorer, or an editor.
While it is being accessed, attempts to perform upgrade installation will fail.
If upgrade installation fails, stop accessing the files or folders and then perform the upgrade installation again.
- When stopping the upgrade and restoring the earlier version, please re-install the HBA address rename setup service of the earlier
version.
When performing restoration and the HBA address rename setup service of this version or an earlier version has been installed, please
uninstall it.
- Upgrade installation will delete patches that were applied to the earlier version.
[Linux]
When the PATH variable has not been configured to enable execution of UpdateAdvisor (Middleware) commands from a user-defined
location, performing upgrade installation will delete any patches that have been applied to the old version, but it will not delete product
information or component information. Refer to the UpdateAdvisor (Middleware) manual and delete software and component
information from the applied modification checklist.
1. Upgrade Installation
[Windows]
Refer to "2.3.2 Installation [Windows]", and execute the Resource Orchestrator installer.
The Resource Orchestrator setup window will be displayed. Check the contents of the license agreement, etc. and then click <Yes>.
The setting information that will be taken over from the earlier version will be displayed. Please check it and then click <Install>.
Upgrade installation will begin.
[Linux]
Refer to "2.3.3 Installation [Linux]", and execute the Resource Orchestrator installer.
The Resource Orchestrator setup window will be displayed. Check the contents of the license agreement, etc. and then enter "y".
A message checking about performing the upgrade installation will be displayed.
To perform upgrade installation, enter "y". Upgrade installation will begin.
2. Display the Resource Orchestrator Setup Completion Window [Windows]
When using the HBA address rename setup service immediately after configuration, check the "Yes, launch it now." checkbox.
Click <Finish> and close the window. If the checkbox was checked, the HBA address rename setup service will be started after the
window is closed.
3. Start the HBA address rename setup Service
[Windows]
When the HBA address rename setup service was not started in step 2., refer to "8.2.1 Settings for the HBA address rename Setup
Service" of the "Setup Guide VE", and start the HBA address rename setup service.
[Linux]
Refer to "8.2.1 Settings for the HBA address rename Setup Service" of the "Setup Guide VE", and start the HBA address rename
setup service.
Appendix A Advisory Notes for Environments with
Systemwalker Centric Manager or ETERNUS
SF Storage Cruiser
This appendix explains advisory notes for use of Resource Orchestrator in combination with Systemwalker Centric Manager or ETERNUS
SF Storage Cruiser.
Installation [Linux]
When using the following products on servers on which a manager has been installed, ServerView Trap Server for Linux (trpsrvd)
is necessary in order to share SNMP Traps between the servers.
- Systemwalker Centric Manager (Operation management servers and section admin servers)
- ETERNUS SF Storage Cruiser Manager 14.1 or earlier
ServerView Trap Server for Linux (trpsrvd) is used to transfer SNMP traps received at UDP port number 162 to other UDP port numbers.
The ServerView Trap Server for Linux is included in some versions of ServerView Operations Manager. In this case, install ServerView
Trap Server for Linux referring to the ServerView Operations Manager manual.
If ServerView Trap Server for Linux is not included with ServerView Operations Manager, download it from the following web site, and
install it referring to the documents provided with it.
URL: http://download.ts.fujitsu.com/prim_supportcd/SVSSoftware/html/ServerView_e.html (As of February 2012)
After installing ServerView Trap Server for Linux, perform the following settings.
1. Log in as the OS administrator (root).
2. Edit the /etc/services file, and add the following line.
mpwksttr-trap    49162/udp
3. Edit the /usr/share/SMAWtrpsv/conf/trpsrvtargets file, and add port 49162.
Before editing
########################################################################
# Copyright (C) Fujitsu Siemens Computers 2007
# All rights reserved
# Configuration File for trpsrv (SMAWtrpsv)
########################################################################
# Syntax
# port [(address | -) [comment]]
# examples
# 8162
# 9162 - test
# 162 145.25.124.121
After editing
########################################################################
# Copyright (C) Fujitsu Siemens Computers 2007
# All rights reserved
# Configuration File for trpsrv (SMAWtrpsv)
########################################################################
# Syntax
# port [(address | -) [comment]]
# examples
# 8162
# 9162 - test
# 162 145.25.124.121
#Transfer to UDP port 49162.
49162
4. Restart the system.
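Steps 2. and 3. above can be sketched as shell commands. This is a minimal sketch only: it edits scratch copies so it can run anywhere, whereas on a real server you would edit /etc/services and /usr/share/SMAWtrpsv/conf/trpsrvtargets directly as root.

```shell
#!/bin/sh
# Sketch of steps 2. and 3. above. SCRATCH simulates the real files so the
# sketch can run anywhere; on a real server edit /etc/services and
# /usr/share/SMAWtrpsv/conf/trpsrvtargets directly as root.
SCRATCH=$(mktemp -d)
SERVICES="$SCRATCH/services"
TARGETS="$SCRATCH/trpsrvtargets"
touch "$SERVICES" "$TARGETS"

# Step 2: add the trap port entry to /etc/services (idempotent).
grep -q '^mpwksttr-trap' "$SERVICES" || \
    printf 'mpwksttr-trap\t49162/udp\n' >> "$SERVICES"

# Step 3: add port 49162 to the trpsrv target list (idempotent).
grep -q '^49162$' "$TARGETS" || {
    printf '#Transfer to UDP port 49162.\n' >> "$TARGETS"
    printf '49162\n' >> "$TARGETS"
}

cat "$SERVICES" "$TARGETS"
```

Guarding each append with `grep -q` keeps the edits safe to re-run, for example after a failed upgrade attempt.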
Upgrading from Earlier Versions
- When upgrading managers of V2.1.3 or earlier versions
[Windows]
The SNMP trap service (SystemWalker MpWksttr service) installed by the manager of V2.1.3 or earlier versions will be deleted by
upgrading Resource Orchestrator.
As the SystemWalker MpWksttr service is shared in environments where the following software exists, if the SystemWalker MpWksttr
service is deleted when upgrading to Resource Orchestrator, perform installation and setup operation of the SystemWalker MpWksttr
service referring to the manual of the following software.
- Systemwalker Centric Manager (Operation management servers and department admin servers)
- ETERNUS SF Storage Cruiser Manager 14.1 or earlier
[Linux]
The SNMP trap service (SystemWalker MpWksttr service) installed by the manager of V2.1.3 or earlier versions will be deleted by
upgrading Resource Orchestrator.
As the SystemWalker MpWksttr service is shared in environments where the following software exists, if the SystemWalker MpWksttr
service is deleted when upgrading to Resource Orchestrator, perform installation and setup operation of the SystemWalker MpWksttr
service referring to the manual of the following software.
- Systemwalker Centric Manager (Operation management servers and department admin servers)
In environments where the above software has not been installed but the following software has, when upgrading managers of V2.1.3
or earlier versions, the SystemWalker MpWksttr service will remain even after upgrading, but the SystemWalker MpWksttr service
is not required.
- ETERNUS SF Storage Cruiser Manager 14.1 or later
In this case, execute the following command as the OS administrator (root) and delete the SystemWalker MpWksttr service.
# rpm -e FJSVswstt <RETURN>
Appendix B Manager Cluster Operation Settings and
Deletion
This section explains the settings necessary for operating Resource Orchestrator in cluster systems and the procedures for deleting this
product from cluster systems.
Note
When coordination with VIOM is being used, or when Single Sign-On is configured, clustered manager operation is not supported.
When coordination with ESC is being used, clustered operation of Windows managers is not supported.
B.1 What are Cluster Systems
In cluster systems, two or more servers are operated as a single virtual server in order to enable high availability.
If a system is run with only one server, and the server or an application operating on it fails, all operations would stop until the server is
rebooted.
In a cluster system where two or more servers are linked together, if one of the servers becomes unusable due to trouble with the server
or an application being run, by restarting the applications on the other server it is possible to resume operations, shortening the length of
time operations are stopped.
Switching from a failed server to another, operational server in this kind of situation is called failover.
In cluster systems, groups of two or more servers are called clusters, and the servers comprising a cluster are called nodes.
Clusters are classified into the following types:
- Standby clusters
This type of cluster involves standby nodes that stand ready to take over from operating nodes. The mode can be one of the following
modes:
- 1:1 hot standby
A cluster consisting of one operating node and one standby node. The operating node is operational and the standby node stands
ready to take over if needed.
- n:1 hot standby
A cluster consisting of n operating nodes and one standby node. The n operating nodes run different operations and the standby
node stands ready to take over from all of the operating nodes.
- n:i hot standby
A cluster consisting of n operating nodes and i standby nodes. The style is similar to n:1 hot standby, except that there are i
standby nodes standing ready to take over from all of the operating nodes.
- Mutual standby
A cluster consisting of two nodes with both operating and standby applications. The two nodes each run different operations and
stand ready to take over from each other. If one node fails, the other node runs both of the operations.
- Cascade
A cluster consisting of three or more nodes. One of the nodes is the operating node and the others are the standby nodes.
- Scalable clusters
This is a cluster that allows multiple server machines to operate concurrently for improved performance and reduced degradation
in the event of failure. It differs from standby clusters in that the nodes are not divided into operating and standby types. If the
system fails on one of the nodes in the cluster, the other servers take over the operations.
Resource Orchestrator managers support failover clustering of Microsoft(R) Windows Server(R) 2008 Enterprise (x86, x64) and 1:1 hot
standby of PRIMECLUSTER.
When operating managers in cluster systems, the HBA address rename setup service can be started on the standby node.
Using this function enables starting of managed servers without preparing a dedicated server for the HBA address rename setup service,
even when managers and managed servers cannot communicate due to problems with the manager or the failure of the NIC used for
connection to the admin LAN.
Information
For details of failover clustering, refer to the Microsoft web site.
For PRIMECLUSTER, refer to the PRIMECLUSTER manual.
B.2 Installation
This section explains installation of managers on cluster systems.
Perform installation only after configuration of the cluster system.
Note
In order to distinguish between the two physical nodes, one is referred to as the primary node and the other the secondary node. The
primary node is the node that is active when the cluster service (cluster application) is started. The secondary node is the node that is in
standby when the cluster service (cluster application) is started.
B.2.1 Preparations
This section explains the resources necessary before installation.
[Windows]
- Client Access Point
An access point is necessary in order to enable communication between the ROR console, managed servers, and managers. Allocate
the IP addresses and network names used for access.
- When the same access point will be used for access by the ROR console and the admin LAN
Prepare a single IP address and network name.
- When different access points will be used for access by the ROR console and the admin LAN
Prepare a pair of IP addresses and network names.
- Shared Disk for Managers
Prepare at least one storage volume (LUN) to store data shared by the managers.
For the necessary disk space for the shared disk, total the values for Installation_folder and Image_file_storage_folder indicated for
managers in "Table 1.46 Dynamic Disk Space" in "1.4.2.5 Dynamic Disk Space" in the "Setup Guide VE", and secure the necessary
amount of disk space.
- Generic Scripts for Manager Services
Create the generic script files for starting and stopping the following manager services:
- Resource Coordinator Web Server(Apache)
- Resource Coordinator Sub Web Server(Mongrel)
- Resource Coordinator Sub Web Server(Mongrel2)
Create a script file with the following content for each of the services.
The name of the file is optional, but the file extension must be ".vbs".
Function Online()
    Dim objWmiProvider
    Dim objService
    Dim strServiceState

    ' Check to see if the service is running
    set objWmiProvider = GetObject("winmgmts:/root/cimv2")
    set objService = objWmiProvider.get("win32_service='Service_name'")
    strServiceState = objService.state

    If ucase(strServiceState) = "RUNNING" Then
        Online = True
    Else
        ' If the service is not running, try to start it.
        response = objService.StartService()

        ' response = 0 or 10 indicates that the request to start was accepted
        If ( response <> 0 ) and ( response <> 10 ) Then
            Online = False
        Else
            Online = True
        End If
    End If
End Function
Function Offline()
    Dim objWmiProvider
    Dim objService
    Dim strServiceState

    ' Check to see if the service is running
    set objWmiProvider = GetObject("winmgmts:/root/cimv2")
    set objService = objWmiProvider.get("win32_service='Service_name'")
    strServiceState = objService.state

    If ucase(strServiceState) = "RUNNING" Then
        ' If the service is running, try to stop it.
        response = objService.StopService()

        ' response = 0 or 10 indicates that the request to stop was accepted
        If ( response <> 0 ) and ( response <> 10 ) Then
            Offline = False
        Else
            Offline = True
        End If
    Else
        Offline = True
    End If
End Function
Function LooksAlive()
    Dim objWmiProvider
    Dim objService
    Dim strServiceState

    set objWmiProvider = GetObject("winmgmts:/root/cimv2")
    set objService = objWmiProvider.get("win32_service='Service_name'")
    strServiceState = objService.state

    If ucase(strServiceState) = "RUNNING" Then
        LooksAlive = True
    Else
        LooksAlive = False
    End If
End Function
Function IsAlive()
    Dim objWmiProvider
    Dim objService
    Dim strServiceState

    set objWmiProvider = GetObject("winmgmts:/root/cimv2")
    set objService = objWmiProvider.get("win32_service='Service_name'")
    strServiceState = objService.state

    If ucase(strServiceState) = "RUNNING" Then
        IsAlive = True
    Else
        IsAlive = False
    End If
End Function
Specify the following service names for the four occurrences of "Service_name" in each script:
- Resource Coordinator Web Server(Apache)
- Resource Coordinator Sub Web Server(Mongrel)
- Resource Coordinator Sub Web Server(Mongrel2)
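Since the three scripts are identical except for the service name, they can be generated from a single template. The following sketch illustrates this; the template is abbreviated to one line of the VBScript shown above, and the file names are examples (any name with a ".vbs" extension is acceptable).

```python
import os

# Abbreviated template; in practice this would contain the full
# Online/Offline/LooksAlive/IsAlive functions shown above.
TEMPLATE = (
    "set objService = objWmiProvider.get(\"win32_service='{service}'\")\n"
)

# Example file names mapped to the service name each script controls.
SERVICES = {
    "rcx_apache.vbs": "Resource Coordinator Web Server(Apache)",
    "rcx_mongrel.vbs": "Resource Coordinator Sub Web Server(Mongrel)",
    "rcx_mongrel2.vbs": "Resource Coordinator Sub Web Server(Mongrel2)",
}

def write_generic_scripts(directory):
    """Write one .vbs generic script per manager service."""
    for filename, service in SERVICES.items():
        path = os.path.join(directory, filename)
        with open(path, "w", encoding="utf-8") as f:
            f.write(TEMPLATE.format(service=service))
```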
[Linux]
- Takeover Logical IP Address for the Manager
When operating managers in cluster systems, allocate a new, unique IP address on the network to PRIMECLUSTER GLS.
If the IP address used for access from the ROR console differs from the above IP address, prepare another logical IP address and
allocate it to PRIMECLUSTER GLS.
When using an IP address that is already being used for an existing operation (cluster application), there is no need to allocate a new
IP address for the manager.
- Shared Disk for Managers
Prepare a PRIMECLUSTER GDS volume to store shared data for managers.
For the necessary disk space for the shared disk, total the values indicated for "Manager [Linux]" in "Table 1.46 Dynamic Disk Space"
in "1.4.2.5 Dynamic Disk Space" of the "Setup Guide VE", and secure the necessary amount of disk space.
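For both platforms, the shared-disk size is a simple sum of the dynamic disk space values given for managers in "Table 1.46 Dynamic Disk Space" of the Setup Guide VE. The sketch below only illustrates the arithmetic; the figures are placeholders, not values from that table.

```python
# Placeholder figures; take the real values for Installation_folder and
# Image_file_storage_folder from Table 1.46 of the Setup Guide VE.
installation_folder_gb = 10
image_file_storage_gb = 100

# Secure at least this much space on the shared disk for managers.
required_shared_disk_gb = installation_folder_gb + image_file_storage_gb
```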
B.2.2 Installation
This section explains installation of managers on cluster systems.
Install managers on both the primary and secondary nodes.
Refer to "2.1 Manager Installation" and perform installation.
Note
- Do not install on the shared disk for managers.
[Windows]
- On the [Select Installation Folder] window, specify the same folder names on the primary node and secondary node for the installation
folders and the image file storage folders.
However, do not specify a folder on the shared disk for managers.
- On the [Administrative User Creation] window, specify the same character strings for the user account names and passwords on the
primary node and the secondary node.
- On the [Admin LAN Selection] window of the installer, select the network with the same subnet for direct communication with
managed servers.
[Linux]
- Specify the same directory names for both the primary node and the secondary node when entering the image file storage directory
during installation.
However, do not specify a directory on the shared disk for managers.
- Specify the same character strings for the primary node and the secondary node when entering administrative user account names and
passwords during installation.
- Select a network of the same subnet from which direct communication with managed servers is possible when selecting the admin
LAN network interface during installation.
After installation, stop the manager using the rcxadm mgrctl stop command.
For details of the command, refer to "5.7 rcxadm mgrctl" in the "Command Reference".
[Windows]
Change the startup type of the following manager services to "Manual".
- Resource Coordinator Task Manager
- Resource Coordinator Web Server(Apache)
- Resource Coordinator Sub Web Server(Mongrel)
- Resource Coordinator Sub Web Server(Mongrel2)
- Deployment Service (*1)
- TFTP Service (*1)
- PXE Services (*1)
- Resource Coordinator DB Server (PostgreSQL)
*1: Not necessary when ServerView Deployment Manager is used in the same subnet.
B.3 Configuration
This section explains the procedure for setting up managers as cluster services (cluster applications) in cluster systems.
B.3.1 Configuration [Windows]
Perform setup on the admin server.
The flow of setup is as shown below.
Figure B.1 Manager Service Setup Flow
Setup of managers as cluster services (cluster applications) is performed using the following procedure.
This explanation assumes that the shared disk for managers has been allocated to the primary node.
Create Cluster Resources
1. Store the generic scripts.
Store the generic scripts created in "B.2.1 Preparations" in the manager installation folders on the primary node and the secondary node.
After storing the scripts, set the access rights for the script files.
Use the command prompt to execute the following command on each script file.
>cacls File_name /P "NT AUTHORITY\SYSTEM:F" "BUILTIN\Administrators:F" <RETURN>
Note
When using the following language versions of Windows, replace the specified local system name (NT AUTHORITY\SYSTEM)
and administrator group name (BUILTIN\Administrators) with those in the following list:
Language / Local system name / Administrator group name
- German: NT-AUTORITÄT\SYSTEM / VORDEFINIERT\Administratoren
- French: AUTORITE NT\SYSTEM / BUILTIN\Administrateurs
- Spanish: NT AUTHORITY\SYSTEM / BUILTIN\Administradores
- Russian: NT AUTHORITY\SYSTEM / BUILTIN\Администраторы
2. Open the [Failover Cluster Management] window and connect to the cluster system.
3. Configure a manager "service or application".
a. Right-click [Services and Applications] on the Failover Cluster Management tree, and select [More Actions]-[Create Empty
Service or Application].
[New service or application] will be created under [Services and Applications].
b. Right-click [New service or application], and select [Properties] from the displayed menu.
The [New service or application properties] dialog is displayed.
c. Change the "Name" on the [General] tab, select the resource name of the primary node from "Preferred owners:", and click
<Apply>.
d. When the settings have been applied, click <OK>.
From this point, the explanation assumes that the name of the "service or application" for Resource Orchestrator has been configured
as "RC-manager".
4. Allocate the shared disk to the manager "service or application".
a. Right-click [Services and Applications]-[RC-manager], and select [Add storage] from the displayed menu.
The [Add Storage] window will be displayed.
b. From the "Available disks:", select the shared disk for managers and click <OK>.
5. Allocate the client access point to the manager "service or application".
a. Right-click [Services and Applications]-[RC-manager], select [Add a resource]-[1 - Client Access Point] from the displayed
menu.
The [New Resource Wizard] window will be displayed.
b. Configure the following parameters on the [General] tab and then click <Next>>.
Name
Set the network name prepared in "B.2.1 Preparations".
Networks
Check the network to use.
Address
Set the IP address prepared in "B.2.1 Preparations".
"Confirmation" will be displayed.
c. Check the information displayed for "Confirmation" and click <Next>>.
If configuration is successful, the "Summary" will be displayed.
d. Click <Finish>.
"Name: Network_Name" and "IP Address: IP_Address" will be created in the "Server Name" of the "Summary of RC-manager" displayed in the center of the window.
The values specified in step b. are displayed for Network_Name and IP_Address.
When a network other than the admin LAN has been prepared for ROR console access, perform the process in step 6.
6. Allocate the IP address to the manager "service or application".
a. Right-click [Services and Applications]-[RC-manager], select [Add a resource]-[More resources]-[4 - Add IP Address] from
the displayed menu.
"IP Address: <not configured>" will be created in the "Other Resources" of the "Summary of RC-manager" displayed in the
center of the window.
b. Right-click "IP Address: <not configured>", and select [Properties] from the displayed menu.
The [IP Address: <not configured> Properties] window is displayed.
c. Configure the following parameters on the [General] tab and then click <Apply>.
Resource Name
Set the network name prepared in "B.2.1 Preparations".
Network
Select the network to use from the pull-down menu.
Static IP Address
Set the IP address prepared in "B.2.1 Preparations".
d. When the settings have been applied, click <OK>.
Copy Dynamic Disk Files
Copy the files from the dynamic disk of the manager on the primary node to the shared disk for managers.
1. Use Explorer to create the "Drive_name:\Fujitsu\ROR\SVROR\" folder on the shared disk.
2. Use Explorer to copy the files and folders from the local disk of the primary node to the folder on the shared disk.
Table B.1 List of Files and Folders to Copy
Local Disk (Source) -> Shared Disk (Target)
- Installation_folder\Manager\etc\customize_data -> Drive_name:\Fujitsu\ROR\SVROR\customize_data
- Installation_folder\Manager\etc\opt\FJSVssmgr\current\certificate -> Drive_name:\Fujitsu\ROR\SVROR\certificate
- Installation_folder\Manager\Rails\config\rcx_secret.key -> Drive_name:\Fujitsu\ROR\SVROR\rcx_secret.key
- Installation_folder\Manager\Rails\config\rcx\rcxdb.pwd -> Drive_name:\Fujitsu\ROR\SVROR\rcxdb.pwd
- Installation_folder\Manager\Rails\db -> Drive_name:\Fujitsu\ROR\SVROR\db
- Installation_folder\Manager\Rails\log -> Drive_name:\Fujitsu\ROR\SVROR\log
- Installation_folder\Manager\Rails\tmp -> Drive_name:\Fujitsu\ROR\SVROR\tmp
- Installation_folder\Manager\sys\apache\conf -> Drive_name:\Fujitsu\ROR\SVROR\conf
- Installation_folder\Manager\sys\apache\logs -> Drive_name:\Fujitsu\ROR\SVROR\logs
- Installation_folder\Manager\var -> Drive_name:\Fujitsu\ROR\SVROR\var
- Installation_folder\ScwPro\Bin\ipTable.dat (*1) -> Drive_name:\Fujitsu\ROR\SVROR\ipTable.dat
- Installation_folder\ScwPro\scwdb (*1) -> Drive_name:\Fujitsu\ROR\SVROR\scwdb
- Installation_folder\ScwPro\tftp\rcbootimg (*1) -> Drive_name:\Fujitsu\ROR\SVROR\rcbootimg
- User_specified_folder\ScwPro\depot (*1) -> Drive_name:\Fujitsu\ROR\SVROR\depot
*1: Not necessary when ServerView Deployment Manager is used in the same subnet.
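The copy step above can be sketched generically: each (source, target) pair from Table B.1 is copied to the shared disk, recursively for folders and directly for single files. The pairs passed in are placeholders for the table entries with Installation_folder and Drive_name filled in for the actual environment.

```python
import os
import shutil

def copy_to_shared_disk(pairs):
    """Copy each (source, target) pair to the shared disk.

    Folders (customize_data, db, log, ...) are copied recursively;
    single files (rcx_secret.key, rcxdb.pwd, ipTable.dat) are copied
    with their metadata.
    """
    for source, target in pairs:
        if os.path.isdir(source):
            shutil.copytree(source, target)
        else:
            os.makedirs(os.path.dirname(target), exist_ok=True)
            shutil.copy2(source, target)
```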
3. Release the sharing settings of the following folder:
Not necessary when ServerView Deployment Manager is used in the same subnet.
- Installation_folder\ScwPro\scwdb
Execute the following command using the command prompt:
>net share ScwDB$ /DELETE <RETURN>
4. Use Explorer to change the names of the folders and files below that were copied.
- Installation_folder\Manager\etc\customize_data
- Installation_folder\Manager\etc\opt\FJSVssmgr\current\certificate
- Installation_folder\Manager\Rails\config\rcx_secret.key
- Installation_folder\Manager\Rails\config\rcx\rcxdb.pwd
- Installation_folder\Manager\Rails\db
- Installation_folder\Manager\Rails\log
- Installation_folder\Manager\Rails\tmp
- Installation_folder\Manager\sys\apache\conf
- Installation_folder\Manager\sys\apache\logs
- Installation_folder\Manager\var
- Installation_folder\ScwPro\Bin\ipTable.dat (*1)
- Installation_folder\ScwPro\scwdb (*1)
- Installation_folder\ScwPro\tftp\rcbootimg (*1)
*1: Not necessary when ServerView Deployment Manager is used in the same subnet.
Note
When folders or files are in use by another program, attempts to change folder names and file names may fail.
If attempts to change names fail, change the names after rebooting the server.
5. Delete the following file from the shared disk:
- Drive_name:\Fujitsu\ROR\SVROR\db\rmc_key
Configure Folders on the Shared Disk (Primary node)
1. On the primary node, configure symbolic links to the files and folders on the shared disk.
Use the command prompt to configure a symbolic link from the files and folders on the local disk of the primary node to the files
and folders on the shared disk.
Execute the following command.
- Folder
>mklink /d Link_source Link_target <RETURN>
- File
>mklink Link_source Link_target <RETURN>
Specify the folders or files copied in "Copy Dynamic Disk Files" for Link_source.
Specify the folders or files copied to the shared disk in "Copy Dynamic Disk Files" for Link_target.
The folders and files to specify are as given below:
Table B.2 Folders to Specify
Local Disk (Link Source) -> Shared Disk (Link Target)
- Installation_folder\Manager\etc\customize_data -> Drive_name:\Fujitsu\ROR\SVROR\customize_data
- Installation_folder\Manager\etc\opt\FJSVssmgr\current\certificate -> Drive_name:\Fujitsu\ROR\SVROR\certificate
- Installation_folder\Manager\Rails\db -> Drive_name:\Fujitsu\ROR\SVROR\db
- Installation_folder\Manager\Rails\log -> Drive_name:\Fujitsu\ROR\SVROR\log
- Installation_folder\Manager\Rails\tmp -> Drive_name:\Fujitsu\ROR\SVROR\tmp
- Installation_folder\Manager\sys\apache\conf -> Drive_name:\Fujitsu\ROR\SVROR\conf
- Installation_folder\Manager\sys\apache\logs -> Drive_name:\Fujitsu\ROR\SVROR\logs
- Installation_folder\Manager\var -> Drive_name:\Fujitsu\ROR\SVROR\var
- Installation_folder\ScwPro\scwdb (*1) -> Drive_name:\Fujitsu\ROR\SVROR\scwdb
- Installation_folder\ScwPro\tftp\rcbootimg (*1) -> Drive_name:\Fujitsu\ROR\SVROR\rcbootimg
*1: Not necessary when ServerView Deployment Manager is used in the same subnet.
Table B.3 Files to Specify
Local Disk (Link Source) -> Shared Disk (Link Target)
- Installation_folder\Manager\Rails\config\rcx_secret.key -> Drive_name:\Fujitsu\ROR\SVROR\rcx_secret.key
- Installation_folder\Manager\Rails\config\rcx\rcxdb.pwd -> Drive_name:\Fujitsu\ROR\SVROR\rcxdb.pwd
- Installation_folder\ScwPro\Bin\ipTable.dat (*1) -> Drive_name:\Fujitsu\ROR\SVROR\ipTable.dat
*1: Not necessary when ServerView Deployment Manager is used in the same subnet.
Note
Before executing the above command, move to a folder one level higher than the link source folder.
Example
When specifying a link from "Installation_folder\Manager\sys\apache\logs" on the local disk to "Drive_name:\Fujitsu\ROR\SVROR\logs" on the shared disk
>cd Installation_folder\Manager\sys\apache <RETURN>
>mklink /d logs Drive_name:\Fujitsu\ROR\SVROR\logs <RETURN>
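The rename-then-link pattern used in these steps (rename the local copy out of the way, then link the original local path to the copy on the shared disk, as "mklink /d" does on Windows) can be sketched generically. Paths here are placeholders.

```python
import os

def relink(local_path, shared_path):
    """Rename the local folder/file aside, then create a symbolic link
    from the original local path to the copy on the shared disk."""
    if os.path.lexists(local_path):
        # Keep the renamed original, mirroring the Explorer rename step.
        os.rename(local_path, local_path + ".bak")
    os.symlink(shared_path, local_path)
```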
2. Change the registry of the primary node.
Not necessary when ServerView Deployment Manager is used in the same subnet.
a. Back up the registry to be changed.
Execute the following command.
- x64
>reg save HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Fujitsu\SystemcastWizard scw.reg <RETURN>
- x86
>reg save HKEY_LOCAL_MACHINE\SOFTWARE\Fujitsu\SystemcastWizard scw.reg <RETURN>
b. Change the registry.
Execute the following command.
- x64
>reg add HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Fujitsu\SystemcastWizard\ResourceDepot /v BasePath /d Drive_name:\Fujitsu\ROR\SVROR\depot\ /f <RETURN>
>reg add HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Fujitsu\SystemcastWizard\DatabaseBroker\Default /v LocalPath /d Drive_name:\Fujitsu\ROR\SVROR\scwdb /f <RETURN>
>reg add HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Fujitsu\SystemcastWizard\DHCP /v IPtableFilePath /d Drive_name:\Fujitsu\ROR\SVROR /f <RETURN>
- x86
>reg add HKEY_LOCAL_MACHINE\SOFTWARE\Fujitsu\SystemcastWizard\ResourceDepot /v BasePath /d Drive_name:\Fujitsu\ROR\SVROR\depot\ /f <RETURN>
>reg add HKEY_LOCAL_MACHINE\SOFTWARE\Fujitsu\SystemcastWizard\DatabaseBroker\Default /v LocalPath /d Drive_name:\Fujitsu\ROR\SVROR\scwdb /f <RETURN>
>reg add HKEY_LOCAL_MACHINE\SOFTWARE\Fujitsu\SystemcastWizard\DHCP /v IPtableFilePath /d Drive_name:\Fujitsu\ROR\SVROR /f <RETURN>
Change Drive_name based on your actual environment.
c. If changing the registry fails, restore the registry.
Execute the following command.
- x64
>reg restore HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Fujitsu\SystemcastWizard scw.reg <RETURN>
- x86
>reg restore HKEY_LOCAL_MACHINE\SOFTWARE\Fujitsu\SystemcastWizard scw.reg <RETURN>
Note
Do not use the backup registry file created using this procedure for any other purposes.
Configure Access Authority for Folders and Files
- Set the access authority for the folders and files copied to the shared disk.
Use the command prompt to set the access authority for the folders and files on the shared disk.
The folders and files to specify are as given below:
- Folder
Drive_name:\Fujitsu\ROR\SVROR\certificate
Drive_name:\Fujitsu\ROR\SVROR\conf\ssl.key
Drive_name:\Fujitsu\ROR\SVROR\var\log
- Files
Drive_name:\Fujitsu\ROR\SVROR\rcx_secret.key
Execute the following command.
- Folder
>cacls Folder_name /T /P "NT AUTHORITY\SYSTEM:F" "BUILTIN\Administrators:F" <RETURN>
- File
>cacls File_name /P "NT AUTHORITY\SYSTEM:F" "BUILTIN\Administrators:F" <RETURN>
Note
When using the following language versions of Windows, replace the specified local system name (NT AUTHORITY\SYSTEM)
and administrator group name (BUILTIN\Administrators) with those in the following list:
Language / Local system name / Administrator group name
- German: NT-AUTORITÄT\SYSTEM / VORDEFINIERT\Administratoren
- French: AUTORITE NT\SYSTEM / BUILTIN\Administrateurs
- Spanish: NT AUTHORITY\SYSTEM / BUILTIN\Administradores
- Russian: NT AUTHORITY\SYSTEM / BUILTIN\Администраторы
Configure Access Authority for the Resource Orchestrator Database Folder (Primary node)
Set the access authority for the folder for the Resource Orchestrator database copied to the shared disk.
Execute the following command using the command prompt of the primary node:
>cacls Drive_name:\Fujitsu\ROR\SVROR\db\data /T /P "NT AUTHORITY\SYSTEM:F" "BUILTIN\Administrators:F" "rcxdb:C" <RETURN>
Change the Manager Admin LAN IP Address (Primary node)
Change the admin LAN IP address of the manager.
Specify the admin LAN IP address set in step 5. of "Create Cluster Resources".
1. Bring the admin LAN IP address for the manager "service or application" online.
2. Execute the following command using the command prompt of the primary node:
>Installation_folder\Manager\bin\rcxadm mgrctl modify -ip IP_address <RETURN>
3. Allocate the shared disk to the secondary node.
Right-click [Services and Applications]-[RC-manager] on the Failover Cluster Management tree, and select [Move this service or
application to another node]-[1 - Move to node node_name] from the displayed menu.
The name of the secondary node is displayed for node_name.
Configure Folders on the Shared Disk (Secondary node)
- On the secondary node, configure symbolic links to the folders on the shared disk.
a. Use Explorer to change the names of the folders and files below.
- Installation_folder\Manager\etc\customize_data
- Installation_folder\Manager\etc\opt\FJSVssmgr\current\certificate
- Installation_folder\Manager\Rails\config\rcx_secret.key
- Installation_folder\Manager\Rails\config\rcx\rcxdb.pwd
- Installation_folder\Manager\Rails\db
- Installation_folder\Manager\Rails\log
- Installation_folder\Manager\Rails\tmp
- Installation_folder\Manager\sys\apache\conf
- Installation_folder\Manager\sys\apache\logs
- Installation_folder\Manager\var
- Installation_folder\ScwPro\Bin\ipTable.dat (*1)
- Installation_folder\ScwPro\scwdb (*1)
- Installation_folder\ScwPro\tftp\rcbootimg (*1)
*1: Not necessary when ServerView Deployment Manager is used in the same subnet.
Note
When folders or files are in use by another program, attempts to change folder names and file names may fail.
If attempts to change names fail, change the names after rebooting the server.
b. Release the sharing settings of the following folder:
Not necessary when ServerView Deployment Manager is used in the same subnet.
- Installation_folder\ScwPro\scwdb
Execute the following command using the command prompt:
>net share ScwDB$ /DELETE <RETURN>
c. Use the command prompt to configure a symbolic link from the folder on the local disk of the secondary node to the folder on
the shared disk.
Execute the following command.
- Folder
>mklink /d Link_source Link_target <RETURN>
- File
>mklink Link_source Link_target <RETURN>
Specify a file or folder on the local disk of the secondary node for Link_source.
Specify a file or folder on the shared disk for Link_target.
The folders to specify are as given below:
Table B.4 Folders to Specify
Local Disk (Link Source) -> Shared Disk (Link Target)
- Installation_folder\Manager\etc\customize_data -> Drive_name:\Fujitsu\ROR\SVROR\customize_data
- Installation_folder\Manager\etc\opt\FJSVssmgr\current\certificate -> Drive_name:\Fujitsu\ROR\SVROR\certificate
- Installation_folder\Manager\Rails\db -> Drive_name:\Fujitsu\ROR\SVROR\db
- Installation_folder\Manager\Rails\log -> Drive_name:\Fujitsu\ROR\SVROR\log
- Installation_folder\Manager\Rails\tmp -> Drive_name:\Fujitsu\ROR\SVROR\tmp
- Installation_folder\Manager\sys\apache\conf -> Drive_name:\Fujitsu\ROR\SVROR\conf
- Installation_folder\Manager\sys\apache\logs -> Drive_name:\Fujitsu\ROR\SVROR\logs
- Installation_folder\Manager\var -> Drive_name:\Fujitsu\ROR\SVROR\var
- Installation_folder\ScwPro\scwdb (*1) -> Drive_name:\Fujitsu\ROR\SVROR\scwdb
- Installation_folder\ScwPro\tftp\rcbootimg (*1) -> Drive_name:\Fujitsu\ROR\SVROR\rcbootimg
*1: Not necessary when ServerView Deployment Manager is used in the same subnet.
Table B.5 Files to Specify
Local Disk (Link Source) -> Shared Disk (Link Target)
- Installation_folder\Manager\Rails\config\rcx_secret.key -> Drive_name:\Fujitsu\ROR\SVROR\rcx_secret.key
- Installation_folder\Manager\Rails\config\rcx\rcxdb.pwd -> Drive_name:\Fujitsu\ROR\SVROR\rcxdb.pwd
- Installation_folder\ScwPro\Bin\ipTable.dat (*1) -> Drive_name:\Fujitsu\ROR\SVROR\ipTable.dat
*1: Not necessary when ServerView Deployment Manager is used in the same subnet.
Note
Before executing the above command, move to a folder one level higher than the link source folder.
Example
When specifying a link from "Installation_folder\Manager\sys\apache\logs" on the local disk to "Drive_name:\Fujitsu\ROR\SVROR\logs" on the shared disk
>cd Installation_folder\Manager\sys\apache <RETURN>
>mklink /d logs Drive_name:\Fujitsu\ROR\SVROR\logs <RETURN>
Configure Access Authority for the Resource Orchestrator Database Folder (Secondary node)
Set the access authority for the folder for the Resource Orchestrator database copied to the shared disk.
Execute the following command using the command prompt of the secondary node:
>cacls Drive_name:\Fujitsu\ROR\SVROR\db\data /T /G "rcxdb:C" /E <RETURN>
Change the Manager Admin LAN IP Address (Secondary node)
Change the admin LAN IP address of the manager.
Specify the admin LAN IP address set in step 5. of "Create Cluster Resources".
1. Execute the following command using the command prompt of the secondary node:
>Installation_folder\Manager\bin\rcxadm mgrctl modify -ip IP_address <RETURN>
2. Allocate the shared disk to the primary node.
Right-click [Services and Applications]-[RC-manager] on the Failover Cluster Management tree, and select [Move this service or
application to another node]-[1 - Move to node node_name] from the displayed menu.
The name of the primary node is displayed for node_name.
Register Service Resources
1. Add the manager service to the manager "service or application".
Add the following six services.
- Resource Coordinator Manager
- Resource Coordinator Task Manager
- Deployment Service (*1)
- TFTP Service (*1)
- PXE Services (*1)
- Resource Coordinator DB Server (PostgreSQL)
*1: Not necessary when ServerView Deployment Manager is used in the same subnet.
Perform the following procedure for each of the above services:
a. Right-click [Services and Applications]-[RC-manager] on the Failover Cluster Management tree, and select [Add a resource]-[4 - Generic Service] from the displayed menu.
The [New Resource Wizard] window will be displayed.
b. Select the above services on "Select Service" and click <Next>>.
"Confirmation" will be displayed.
c. Check the information displayed for "Confirmation" and click <Next>>.
If configuration is successful, the "Summary" will be displayed.
d. Click <Finish>.
After completing the configuration of all of the services, check that the added services are displayed in "Other Resources" of
the "Summary of RC-manager" displayed in the center of the window.
2. Configure registry replication as a service in the manager "service or application".
Not necessary when ServerView Deployment Manager is used in the same subnet.
Configure the registry replication of resources based on the following table.
- x64
Resource for Configuration / Registry Key
- Deployment Service:
[HKEY_LOCAL_MACHINE\]SOFTWARE\Wow6432Node\Fujitsu\SystemcastWizard\ResourceDepot
[HKEY_LOCAL_MACHINE\]SOFTWARE\Wow6432Node\Fujitsu\SystemcastWizard\DatabaseBroker\Default
[HKEY_LOCAL_MACHINE\]SOFTWARE\Wow6432Node\Fujitsu\SystemcastWizard\DHCP
- PXE Services:
[HKEY_LOCAL_MACHINE\]SOFTWARE\Wow6432Node\Fujitsu\SystemcastWizard\PXE\ClientBoot\
- x86
Resource for Configuration / Registry Key
- Deployment Service:
[HKEY_LOCAL_MACHINE\]SOFTWARE\Fujitsu\SystemcastWizard\ResourceDepot
[HKEY_LOCAL_MACHINE\]SOFTWARE\Fujitsu\SystemcastWizard\DatabaseBroker\Default
[HKEY_LOCAL_MACHINE\]SOFTWARE\Fujitsu\SystemcastWizard\DHCP
- PXE Services:
[HKEY_LOCAL_MACHINE\]SOFTWARE\Fujitsu\SystemcastWizard\PXE\ClientBoot\
During configuration, enter only the portion of each registry key that follows the bracketed section ([ ]).
Perform the following procedure for each of the above resources:
a. Right-click the target resource on "Other Resources" on the "Summary of RC-manager" displayed in the middle of the
[Failover Cluster Management] window, and select [Properties] from the displayed menu.
The [target_resource Properties] window will be displayed.
b. Click <Add> on the [Registry Replication] tab.
The [Registry Key] window will be displayed.
c. Configure the above registry keys on "Root registry key" and click <OK>.
When configuring the second registry key, repeat b. and c.
d. After configuration of the registry keys is complete, click <Apply>.
e. When the settings have been applied, click <OK> and close the dialog.
3. Add the generic scripts to the manager "service or application".
Add the three generic scripts from the script files that were created in step 1. of "Create Cluster Resources". Perform the following
procedure for each generic script.
a. Right-click [Services and Applications]-[RC-manager] on the Failover Cluster Management tree, and select [Add a resource]-[3 - Generic Script] from the displayed menu.
The [New Resource Wizard] window will be displayed.
b. Set the script files created in step 1. of "Create Cluster Resources" in the "Script file path" of the "Generic Script Info", and
click <Next>>.
"Confirmation" will be displayed.
c. Check the information displayed for "Confirmation" and click <Next>>.
If configuration is successful, the "Summary" will be displayed.
d. Click <Finish>.
After completing the configuration of all of the generic scripts, check that the added generic scripts are displayed in "Other
Resources" of the "Summary of RC-manager" displayed in the center of the window.
4. Configure the dependencies of the resources of the manager "service or application".
Configure the dependencies of resources based on the following table.
Table B.6 Configuring Resource Dependencies
Resource for Configuration -> Dependent Resource
- Resource Coordinator Manager -> Shared Disks
- Resource Coordinator Task Manager -> Shared Disks
- Resource Coordinator Sub Web Server (Mongrel) Script -> Resource Coordinator Task Manager
- Resource Coordinator Sub Web Server (Mongrel2) Script -> Resource Coordinator Task Manager
- Resource Coordinator Web Server (Apache) Script -> Shared Disks
- Deployment Service (*1) -> PXE Services
- TFTP Service (*1) -> Deployment Service
- PXE Services (*1) -> Admin LAN IP Address
- Resource Coordinator DB Server (PostgreSQL) -> Shared Disks
*1: Not necessary when ServerView Deployment Manager is used in the same subnet.
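The dependencies in Table B.6 determine bring-online order: the cluster service starts a resource only after the resource it depends on is online. The following sketch is purely illustrative (the cluster software performs this ordering itself); it computes one valid start order from the table.

```python
# Each resource from Table B.6 mapped to the resource it depends on.
DEPENDS_ON = {
    "Resource Coordinator Manager": "Shared Disks",
    "Resource Coordinator Task Manager": "Shared Disks",
    "Resource Coordinator Sub Web Server (Mongrel) Script": "Resource Coordinator Task Manager",
    "Resource Coordinator Sub Web Server (Mongrel2) Script": "Resource Coordinator Task Manager",
    "Resource Coordinator Web Server (Apache) Script": "Shared Disks",
    "Deployment Service": "PXE Services",
    "TFTP Service": "Deployment Service",
    "PXE Services": "Admin LAN IP Address",
    "Resource Coordinator DB Server (PostgreSQL)": "Shared Disks",
}

def start_order():
    """Return a bring-online order in which every resource comes
    after the resource it depends on (a simple topological sort)."""
    order = []

    def visit(resource):
        if resource in order:
            return
        dependency = DEPENDS_ON.get(resource)
        if dependency:
            visit(dependency)  # start the dependency first
        order.append(resource)

    for resource in DEPENDS_ON:
        visit(resource)
    return order
```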
Perform the following procedure for each of the above resources:
a. Right-click the target resource on "Other Resources" on the "Summary of RC-manager" displayed in the middle of the
[Failover Cluster Management] window, and select [Properties] from the displayed menu.
The [target_resource Properties] window will be displayed.
b. From the "Resource" of the [Dependencies] tab select the name of the "Dependent Resource" from "Table B.6 Configuring
Resource Dependencies" and click <Apply>.
c. When the settings have been applied, click <OK>.
Start Cluster Services
1. Right-click [Services and Applications]-[RC-manager] on the Failover Cluster Management tree, and select [Bring this service or
application online] from the displayed menu.
Confirm that all resources have been brought online.
2. Switch the manager "service or application" to the secondary node.
Confirm that all resources of the secondary node have been brought online.
Note
When registering the admin LAN subnet, additional settings are required.
For the setting method, refer to "Settings for Clustered Manager Configurations" in "2.8 Registering Admin LAN Subnets [Windows]"
in the "User's Guide VE".
Set up the HBA address rename Setup Service
When configuring managers and the HBA address rename setup service in cluster systems, perform the following procedure.
Not necessary when ServerView Deployment Manager is used in the same subnet.
Performing the following procedure starts the HBA address rename setup service on the standby node in the cluster.
1. Switch the manager "service or application" to the primary node.
Right-click [Services and Applications]-[RC-manager] on the Failover Cluster Management tree, and select the primary node using
[Move this service or application to another node] from the displayed menu.
The "Current Owner" of the "Summary of RC-manager" displayed in the center of the window switches to the primary node.
2. Configure the startup settings of the HBA address rename setup service of the secondary node.
a. Execute the following command using the command prompt:
>"Installation_folder\WWN Recovery\opt\FJSVrcxrs\bin\rcxhbactl.exe" <RETURN>
The [HBA address rename setup service] dialog is displayed.
b. Configure the following parameters.
IP address of admin server
Specify the admin LAN IP address set in step 5. of "Create Cluster Resources".
Port number
Specify the port number for communication with the admin server. The port number during installation is 23461.
When the port number for "rcxweb" on the admin server has been changed, specify the new port number.
c. Click <Run>.
Confirm that the "Status" becomes "Running".
d. Click <Stop>.
Confirm that the "Status" becomes "Stopping".
e. Click <Cancel> and close the [HBA address rename setup service] dialog.
3. Switch the manager "service or application" to the secondary node.
Right-click [Services and Applications]-[RC-manager] on the Failover Cluster Management tree, and select the resource name of the secondary node from [Move this service or application to another node] in the displayed menu.
The "Current Owner" of the "Summary of RC-manager" switches to the secondary node.
4. Configure the startup settings of the HBA address rename setup service of the primary node.
The procedure is the same as step 2.
5. Take the manager "service or application" offline.
Right-click [Services and Applications]-[RC-manager] on the Failover Cluster Management tree, and select [Take this service or
application offline] from the displayed menu.
6. Configure a "service or application" for the HBA address rename service.
a. Right-click [Services and Applications] on the Failover Cluster Management tree, and select [More Actions]-[Create Empty
Service or Application].
[New service or application] will be created under [Services and Applications].
b. Right-click [New service or application], and select [Properties] from the displayed menu.
The [New service or application properties] dialog is displayed.
c. Change the "Name" on the [General] tab, select the resource name of the primary node from "Preferred owners", and click
<Apply>.
d. When the settings have been applied, click <OK>.
From this point, this explanation assumes that the name of the "service or application" for the HBA address rename setup service
has been configured as "RC-HBAar".
7. Add the generic scripts to the "service or application" for the HBA address rename setup service.
a. Right-click [Services and Applications]-[RC-HBAar] on the Failover Cluster Management tree, and select [Add a resource]-[3 - Generic Script] from the displayed menu.
The [New Resource Wizard] window will be displayed.
b. Set the script files in the "Script file path" of the "Generic Script Info", and click <Next>>.
"Confirmation" will be displayed.
Script File Path
Installation_folder\Manager\cluster\script\HBAarCls.vbs
c. Check the information displayed for "Confirmation" and click <Next>>.
The "Summary" will be displayed.
d. Click <Finish>.
Check that the added "HBAarCls Script" is displayed in "Other Resources" of the "Summary of RC-manager" displayed in
the center of the window.
8. Add the generic scripts for the coordinated starting of the "service or application" for the HBA address rename setup service, to the
"service or application" for the manager.
a. Right-click [Services and Applications]-[RC-manager] on the Failover Cluster Management tree, and select [Add a resource]-[3 - Generic Script] from the displayed menu.
The [New Resource Wizard] window will be displayed.
b. Set the script files in the "Script file path" of the "Generic Script Info", and click <Next>>.
"Confirmation" will be displayed.
Script File Path
Installation_folder\Manager\cluster\script\HBAarClsCtl.vbs
c. Check the information displayed for "Confirmation" and click <Next>>.
The "Summary" will be displayed.
d. Click <Finish>.
Check that the added "HBAarClsCtl Script" is displayed in "Other Resources" of the "Summary of RC-manager" displayed
in the center of the window.
9. Configure the dependencies of the resources of the manager "service or application".
Configure the dependencies of resources based on the following table.
Table B.7 Configuring Resource Dependencies
Resource for Configuration    Dependent Resource
PXE Services (*1)             HBAarClsCtl Script
HBAarClsCtl Script            Admin LAN IP Address
*1: The dependency of the PXE Services was configured in step 4, "Register Service Resources"; change it to the above setting.
Refer to step 4, "Register Service Resources" for how to configure resource dependencies.
10. Configure the property to refer to when executing the "HBAarClsCtl Script".
Execute the following command using the command prompt of the primary node:
>CLUSTER RES "HBAarClsCtl_Script" /PRIV HBAGroupName="RC-HBAar"<RETURN>
Note
Specify the name of the generic script added in step 8 for HBAarClsCtl_Script, and the name of the "service or application" for the
HBA address rename service for RC-HBAar.
11. Bring the manager "service or application" online.
a. Right-click [Services and Applications]-[RC-manager] on the Failover Cluster Management tree, and select [Bring this service
or application online] from the displayed menu.
Confirm that all resources of the "RC-manager" have been brought online.
b. Click [Services and Applications]-[RC-HBAar] on the Failover Cluster Management tree.
Check that the "Status" of the "Summary of RC-HBAar" is online, and the "Current Owner" is the primary node.
12. Configure the startup settings of the HBA address rename setup service of the primary node.
a. Execute the following command using the command prompt:
>"Installation_folder\WWN Recovery\opt\FJSVrcxrs\bin\rcxhbactl.exe" <RETURN>
The [HBA address rename setup service] dialog is displayed.
b. Confirm that the "Status" is "Running".
c. Click <Cancel> and close the [HBA address rename setup service] dialog.
13. Switch the manager "service or application" to the primary node.
a. Right-click [Services and Applications]-[RC-manager] on the Failover Cluster Management tree, and select the primary node
using [Move this service or application to another node] from the displayed menu.
The "Current Owner" of the "Summary of RC-manager" switches to the primary node.
b. Click [Services and Applications]-[RC-HBAar] on the Failover Cluster Management tree.
Check that the "Status" of the "Summary of RC-HBAar" is online, and the "Current Owner" is the secondary node.
14. Confirm the status of the HBA address rename setup service of the secondary node.
The procedure is the same as step 12.
Note
- Set the logging level of Failover Cluster to 3 (the default) or higher.
It is possible to confirm and change the logging level by executing the following command using the command prompt:
Logging level confirmation
>CLUSTER /PROP:ClusterLogLevel <RETURN>
Changing the logging level (Specifying level 3)
>CLUSTER LOG /LEVEL:3 <RETURN>
- The "service or application" for the HBA address rename setup service configured in "Set up the HBA address rename Setup Service"
is controlled so that it is brought online on the other node, and its online processing is performed in coordination with the manager
"service or application".
Do not move the "service or application" for the HBA address rename setup service to another node using operations from the
[Failover Cluster Management] window.
B.3.2 Settings [Linux]
Perform setup on the admin server.
The flow of setup is as shown below.
Figure B.2 Manager Service Setup Flow
Setup of managers as cluster services (cluster applications) is performed using the following procedure.
Perform setup using OS administrator authority.
If the image file storage directory was changed from the default directory (/var/opt/FJSVscw-deploysv/depot) during installation, perform
settings in a. of step 6 so that the image file storage directory is also located on the shared disk.
This is not necessary when ServerView Deployment Manager is used in the same subnet.
1. Stop cluster applications (Primary node)
When adding to existing operations (cluster applications)
When adding a manager to an existing operation (cluster application), use the cluster system's operation management view (Cluster
Admin) and stop operations (cluster applications).
2. Configure the shared disk and takeover logical IP address (Primary node/Secondary node)
a. Shared disk settings
Use PRIMECLUSTER GDS and perform settings for the shared disk.
For details, refer to the PRIMECLUSTER Global Disk Services manual.
b. Configure the takeover logical IP address
Use PRIMECLUSTER GLS and perform settings for the takeover logical IP address.
As it is necessary to activate the takeover logical IP address using the following procedure, do not perform registration of
resources with PRIMECLUSTER (by executing the /opt/FJSVhanet/usr/sbin/hanethvrsc create command) at this point.
When adding to existing operations (cluster applications)
When using an existing takeover logical IP address, delete the PRIMECLUSTER GLS virtual interface information from the
resources for PRIMECLUSTER.
For details, refer to the PRIMECLUSTER Global Link Services manual.
3. Mount the shared disk (Primary node)
Mount the shared disk for managers on the primary node.
4. Activate the takeover logical IP address (Primary node)
On the primary node, activate the takeover logical IP address for the manager.
For details, refer to the PRIMECLUSTER Global Link Services manual.
5. Change manager startup settings (Primary node)
Perform settings so that the startup process of the manager is controlled by the cluster system, not the OS.
Execute the following command on the primary node.
# /opt/FJSVrcvmr/cluster/bin/rcxclchkconfig setup <RETURN>
6. Copy dynamic disk files (Primary node)
Copy the files from the dynamic disk of the manager on the primary node to the shared disk for managers.
a. Create the directory "shared_disk_mount_point/Fujitsu/ROR/SVROR" on the shared disk.
b. Copy the directories and files on the local disk of the primary node to the created directory.
Execute the following command.
# tar cf - copy_target | tar xf - -C shared_disk_mount_point/Fujitsu/ROR/SVROR/ <RETURN>
Note
The following messages may be output when the tar command is executed. They have no effect on operations, so ignore
them.
- tar: Removing leading `/' from member names
- tar: file_name: socket ignored
Directories and Files to Copy
- /opt/FJSVrcvmr/rails/config/rcx/rcxdb.pwd
- /etc/opt/FJSVrcvmr/customize_data
- /etc/opt/FJSVrcvmr/opt/FJSVssmgr/current/certificate
- /etc/opt/FJSVrcvmr/rails/config/rcx_secret.key
- /etc/opt/FJSVrcvmr/sys/apache/conf
- /var/opt/FJSVrcvmr
- /etc/opt/FJSVscw-common (*1)
- /var/opt/FJSVscw-common (*1)
- /etc/opt/FJSVscw-tftpsv (*1)
- /var/opt/FJSVscw-tftpsv (*1)
- /etc/opt/FJSVscw-pxesv (*1)
- /var/opt/FJSVscw-pxesv (*1)
- /etc/opt/FJSVscw-deploysv (*1)
- /var/opt/FJSVscw-deploysv (*1)
- /etc/opt/FJSVscw-utils (*1)
- /var/opt/FJSVscw-utils (*1)
*1: Not necessary when ServerView Deployment Manager is used in the same subnet.
c. Change the names of the copied directories and files listed below.
Execute the following command. Specify a name such as source_file_name(source_directory_name)_old for
target_file_name(target_directory_name).
# mv -i source_file_name(source_directory_name) target_file_name(target_directory_name) <RETURN>
- /opt/FJSVrcvmr/rails/config/rcx/rcxdb.pwd
- /etc/opt/FJSVrcvmr/customize_data
- /etc/opt/FJSVrcvmr/opt/FJSVssmgr/current/certificate
- /etc/opt/FJSVrcvmr/rails/config/rcx_secret.key
- /etc/opt/FJSVrcvmr/sys/apache/conf
- /var/opt/FJSVrcvmr
- /etc/opt/FJSVscw-common (*1)
- /var/opt/FJSVscw-common (*1)
- /etc/opt/FJSVscw-tftpsv (*1)
- /var/opt/FJSVscw-tftpsv (*1)
- /etc/opt/FJSVscw-pxesv (*1)
- /var/opt/FJSVscw-pxesv (*1)
- /etc/opt/FJSVscw-deploysv (*1)
- /var/opt/FJSVscw-deploysv (*1)
- /etc/opt/FJSVscw-utils (*1)
- /var/opt/FJSVscw-utils (*1)
*1: Not necessary when ServerView Deployment Manager is used in the same subnet.
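The copy in b. and the rename in c. follow one pattern: copy each target to the shared disk with a tar pipe, then move the local original aside with an "_old" suffix. The sketch below is an illustration only; it uses a temporary directory as a stand-in for the local disk and the shared-disk mount point, and a single sample directory instead of the full list above. None of its stand-in paths are part of the product.

```shell
#!/bin/sh
# Illustrative sketch of steps 6-b and 6-c (stand-in paths, one sample entry).
set -e
WORK=$(mktemp -d)                       # stand-in for the filesystem root
SHARED="$WORK/shared/Fujitsu/ROR/SVROR" # stand-in for the shared-disk mount

mkdir -p "$SHARED"
mkdir -p "$WORK/etc/opt/FJSVrcvmr/customize_data"
echo sample > "$WORK/etc/opt/FJSVrcvmr/customize_data/conf.txt"

# On a real admin server this list would hold every entry of
# "Directories and Files to Copy", relative to /.
TARGETS="etc/opt/FJSVrcvmr/customize_data"

for t in $TARGETS; do
  # Step 6-b: the tar pipe keeps the same relative layout under the shared disk.
  ( cd "$WORK" && tar cf - "$t" ) | tar xf - -C "$SHARED"
  # Step 6-c: move the local original aside with an "_old" suffix.
  mv -i "$WORK/$t" "$WORK/${t}_old"
done
```

On the real admin server the loop would run over all listed paths, with the tar pipe executed from / exactly as in the command shown above.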
7. Configure symbolic links for the shared disk (Primary node)
a. Configure symbolic links for the copied directories and files.
Configure symbolic links from the directories and files on the local disk of the primary node for the directories and files on
the shared disk.
Execute the following command.
# ln -s shared_disk local_disk <RETURN>
For shared_disk specify the shared disk in "Table B.8 Directories to Link" or "Table B.9 Files to Link".
For local_disk, specify the local disk in "Table B.8 Directories to Link" or "Table B.9 Files to Link".
Table B.8 Directories to Link
Shared Disk -> Local Disk
shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVrcvmr/customize_data -> /etc/opt/FJSVrcvmr/customize_data
shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVrcvmr/opt/FJSVssmgr/current/certificate -> /etc/opt/FJSVrcvmr/opt/FJSVssmgr/current/certificate
shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVrcvmr/sys/apache/conf -> /etc/opt/FJSVrcvmr/sys/apache/conf
shared_disk_mount_point/Fujitsu/ROR/SVROR/var/opt/FJSVrcvmr -> /var/opt/FJSVrcvmr
shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVscw-common (*1) -> /etc/opt/FJSVscw-common
shared_disk_mount_point/Fujitsu/ROR/SVROR/var/opt/FJSVscw-common (*1) -> /var/opt/FJSVscw-common
shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVscw-tftpsv (*1) -> /etc/opt/FJSVscw-tftpsv
shared_disk_mount_point/Fujitsu/ROR/SVROR/var/opt/FJSVscw-tftpsv (*1) -> /var/opt/FJSVscw-tftpsv
shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVscw-pxesv (*1) -> /etc/opt/FJSVscw-pxesv
shared_disk_mount_point/Fujitsu/ROR/SVROR/var/opt/FJSVscw-pxesv (*1) -> /var/opt/FJSVscw-pxesv
shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVscw-deploysv (*1) -> /etc/opt/FJSVscw-deploysv
shared_disk_mount_point/Fujitsu/ROR/SVROR/var/opt/FJSVscw-deploysv (*1) -> /var/opt/FJSVscw-deploysv
shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVscw-utils (*1) -> /etc/opt/FJSVscw-utils
shared_disk_mount_point/Fujitsu/ROR/SVROR/var/opt/FJSVscw-utils (*1) -> /var/opt/FJSVscw-utils
*1: Not necessary when ServerView Deployment Manager is used in the same subnet.
Table B.9 Files to Link
Shared Disk -> Local Disk
shared_disk_mount_point/Fujitsu/ROR/SVROR/opt/FJSVrcvmr/rails/config/rcx/rcxdb.pwd -> /opt/FJSVrcvmr/rails/config/rcx/rcxdb.pwd
shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVrcvmr/rails/config/rcx_secret.key -> /etc/opt/FJSVrcvmr/rails/config/rcx_secret.key
b. When changing the image file storage directory, perform the following.
When changing the image file storage directory, refer to "5.5 rcxadm imagemgr" in the "Command Reference", and change
the path for the image file storage directory.
Also, specify a directory on the shared disk for the new image file storage directory.
Not necessary when ServerView Deployment Manager is used in the same subnet.
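Every link created in a. follows the pattern "ln -s shared_disk local_disk" from Table B.8 and Table B.9. The following sketch is illustrative only: it uses a temporary directory as a stand-in for the real filesystem and a single sample entry, and shows how each local path ends up as a symbolic link pointing at the copy on the shared disk.

```shell
#!/bin/sh
# Illustrative sketch of step 7-a (stand-in paths, one sample entry).
set -e
WORK=$(mktemp -d)
SHARED="$WORK/shared/Fujitsu/ROR/SVROR"

# On a real admin server this would cover every row of Table B.8 and B.9.
REL="etc/opt/FJSVrcvmr/customize_data"

for p in $REL; do
  mkdir -p "$SHARED/$p"               # the copy made on the shared disk in step 6
  mkdir -p "$(dirname "$WORK/$p")"    # parent directory of the local path
  ln -s "$SHARED/$p" "$WORK/$p"       # ln -s shared_disk local_disk
done
```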
8. Change the manager admin LAN IP Address (Primary node)
Change the admin LAN IP address of the manager.
Execute the following command.
# /opt/FJSVrcvmr/bin/rcxadm mgrctl modify -ip IP_address <RETURN>
For IP_address, specify the admin LAN IP address activated in step 4.
9. Deactivate the takeover logical IP address (Primary node)
On the primary node, deactivate the takeover logical IP address for the manager.
For details, refer to the PRIMECLUSTER Global Link Services manual.
10. Unmount the shared disk (Primary node)
Unmount the shared disk for managers from the primary node.
11. Mount the shared disk (Secondary node)
Mount the shared disk for managers on the secondary node.
12. Change manager startup settings (Secondary node)
Perform settings so that the startup process of the manager is controlled by the cluster system, not the OS.
On the secondary node, execute the same command as used in step 5.
13. Configure symbolic links for the shared disk (Secondary node)
a. Change the directory names and file names given in c. of step 6.
b. Configure symbolic links for the shared disk.
Configure symbolic links from the directories and files on the local disk of the secondary node for the directories and files
on the shared disk.
The directories and files to set symbolic links for are the same as those for "Table B.8 Directories to Link" and "Table B.9
Files to Link".
14. Unmount the shared disk (Secondary node)
Unmount the shared disk for managers from the secondary node.
15. Register takeover logical IP address resources (Primary node/Secondary node)
On PRIMECLUSTER GLS, register the takeover logical IP address as a PRIMECLUSTER resource.
Note
When using an existing takeover logical IP address, registration as a resource is necessary because it was deleted from the resources in step 2.
For details, refer to the PRIMECLUSTER Global Link Services manual.
16. Create cluster resources/cluster applications (Primary node)
a. Use the RMS Wizard of the cluster system to create the necessary PRIMECLUSTER resources on the cluster service (cluster
application).
When creating a new cluster service (cluster application), select Application-Create and create the settings for primary node
as Machines[0] and the secondary node as Machines[1]. Then create the following resources on the created cluster service
(cluster application).
Perform the RMS Wizard settings for any of the nodes comprising the cluster.
For details, refer to the PRIMECLUSTER manual.
- Cmdline resources
Create the Cmdline resources for Resource Orchestrator.
On RMS Wizard, select "CommandLines" and perform the following settings.
- Start script: /opt/FJSVrcvmr/cluster/cmd/rcxclstartcmd
- Stop script: /opt/FJSVrcvmr/cluster/cmd/rcxclstopcmd
- Check script: /opt/FJSVrcvmr/cluster/cmd/rcxclcheckcmd
Note
When specifying a value other than "nothing" for the StandbyTransitions attribute of a cluster service (cluster
application), enable the Flag of ALLEXITCODES(E) and STANDBYCAPABLE(O).
When adding to existing operations (cluster applications)
When adding Cmdline resources to existing operations (cluster applications), decide the startup priority order considering
the restrictions of the other components that will be used in combination with the operation (cluster application).
- Gls resources
Configure the takeover logical IP address to use for the cluster system.
On the RMS Wizard, select "Gls:Global-Link-Services", and set the takeover logical IP address.
When using an existing takeover logical IP address this operation is not necessary.
- Fsystem resources
Set the mount point of the shared disk.
On the RMS Wizard, select "LocalFileSystems", and set the file system. When no mount point has been defined, refer
to the PRIMECLUSTER manual and perform definition.
- Gds resources
Specify the settings created for the shared disk.
On the RMS Wizard, select "Gds:Global-Disk-Services", and set the shared disk.
b. Set the attributes of the cluster application.
When you have created a new cluster service (cluster application), use the cluster system's RMS Wizard to set the attributes.
- In the Machines+Basics settings, set "yes" for AutoStartUp.
- In the Machines+Basics settings, set "HostFailure|ResourceFailure|ShutDown" for AutoSwitchOver.
- In the Machines+Basics settings, set "yes" for HaltFlag.
- When using hot standby for operations, in the Machines+Basics settings, set "ClearFaultRequest|StartUp|SwitchRequest"
for StandbyTransitions.
When configuring the HBA address rename setup service in cluster systems, ensure that hot standby operation is
configured.
c. After settings are complete, save the changes and perform Configuration-Generate and Configuration-Activate.
17. Set up the HBA address rename setup service (Primary node/Secondary node)
Configuring the HBA address rename setup service for cluster systems
When configuring managers and the HBA address rename setup service in cluster systems, perform the following procedure.
Not necessary when ServerView Deployment Manager is used in the same subnet.
Performing the following procedure starts the HBA address rename setup service on the standby node in the cluster.
a. HBA address rename setup service startup settings (Primary node)
Configure the startup settings of the HBA address rename setup service.
Execute the following command.
# /opt/FJSVrcvhb/cluster/bin/rcvhbclsetup <RETURN>
b. Configuring the HBA address rename setup service (Primary node)
Configure the settings of the HBA address rename setup service.
Execute the following command on the primary node.
# /opt/FJSVrcvhb/bin/rcxhbactl modify -ip IP_address <RETURN>
# /opt/FJSVrcvhb/bin/rcxhbactl modify -port port_number <RETURN>
IP Address
Specify the takeover logical IP address for the manager.
Port number
Specify the port number for communication with the manager. The port number during installation is 23461.
c. HBA address rename setup service Startup Settings (Secondary node)
Configure the startup settings of the HBA address rename setup service.
On the secondary node, execute the same command as used in step a.
d. Configuring the HBA address rename setup service (Secondary node)
Configure the settings of the HBA address rename setup service.
On the secondary node, execute the same command as used in step b.
18. Start cluster applications (Primary node)
Use the cluster system's operation management view (Cluster Admin) and start the manager cluster service (cluster application).
19. Set up the HBA address rename setup service startup information (Secondary node)
a. Execute the following command.
# nohup /opt/FJSVrcvhb/bin/rcxhbactl start& <RETURN>
The [HBA address rename setup service] dialog is displayed.
b. Click <Stop>.
Confirm that the "Status" becomes "Stopping".
c. Click <Run>.
Confirm that the "Status" becomes "Running".
d. Click <Stop>.
Confirm that the "Status" becomes "Stopping".
e. Click <Cancel> and close the [HBA address rename setup service] dialog.
20. Switch over cluster applications (Secondary node)
Use the cluster system's operation management view (Cluster Admin) and switch the manager cluster service (cluster application)
to the secondary node.
21. Set up the HBA address rename setup service startup information (Primary node)
The procedure is the same as step 19.
22. Switch over cluster applications (Primary node)
Use the cluster system's operation management view (Cluster Admin) and switch the manager cluster service (cluster application)
to the primary node.
B.4 Releasing Configuration
This section explains how to delete the cluster services of managers being operated on cluster systems.
B.4.1 Releasing Configuration [Windows]
The flow of deleting cluster services of managers is as indicated below.
Figure B.3 Flow of Deleting Manager Service Settings
Delete settings for manager cluster services using the following procedure.
This explanation assumes that the manager is operating on the primary node.
Delete the HBA address rename Setup Service
When the HBA address rename setup service and managers in cluster systems have been configured, perform the following procedure.
Not necessary when ServerView Deployment Manager is used in the same subnet.
1. Take the "service or application" for the HBA address rename setup service offline.
a. Open the [Failover Cluster Management] window and connect to the cluster system.
b. Right-click [Services and Applications]-[RC-HBAar] on the Failover Cluster Management tree, and select [Take this service
or application offline] from the displayed menu.
2. Take the manager "service or application" offline.
Right-click [Services and Applications]-[RC-manager] on the Failover Cluster Management tree, and select [Take this service or
application offline] from the displayed menu.
3. Delete the scripts of the "service or application" for the HBA address rename service.
a. Click [Services and Applications]-[RC-HBAar] on the Failover Cluster Management tree.
b. Right-click the "HBAarCls Script" of the "Other Resources" on the "Summary of RC-HBAar" displayed in the middle of the
[Failover Cluster Management] window, and select [Delete] from the displayed menu.
4. Delete the "service or application" for the HBA address rename setup service.
Right-click [Services and Applications]-[RC-HBAar] on the Failover Cluster Management tree, and select [Delete] from the
displayed menu.
5. Configure the dependencies of the resources in the "service or application" for the manager back to the status they were in before
setting up the HBA address rename setup service.
a. Right-click the "PXE Services" on "Other Resources" on the "Summary of RC-manager" displayed in the middle of the
[Failover Cluster Management] window, and select [Properties] from the displayed menu.
The [PXE Services Properties] window will be displayed.
b. In the "Resource" of the [Dependencies] tab, select the name of the Admin LAN IP Address and click <Apply>.
c. When the settings have been applied, click <OK>.
6. Delete the generic scripts for the coordinated boot of the "service or application" for the HBA address rename service from the
"service or application" for the manager.
a. Click [Services and Applications]-[RC-manager] on the Failover Cluster Management tree.
b. Right-click the "HBAarCls Script" of the "Other Resources" on the "Summary of RC-manager" displayed in the middle of
the [Failover Cluster Management] window, and select [Delete] from the displayed menu.
Stop Cluster Services
When the procedure in "Delete the HBA address rename Setup Service" has been performed, start from step 3.
1. Open the [Failover Cluster Management] window and connect to the cluster system.
2. Take the manager "service or application" offline.
Right-click [Services and Applications]-[RC-manager] on the Failover Cluster Management tree, and select [Take this service or
application offline] from the displayed menu.
3. Bring the shared disk for the manager "service or application" online.
Right-click the shared disk on "Disk Drives" on the "Summary of RC-manager" displayed in the middle of the [Failover Cluster
Management] window, and select [Bring this resource online] from the displayed menu.
Delete Service Resources
Delete the services, scripts, and IP address of the manager "service or application".
Using the following procedure, delete all the "Other Resources".
1. Click [Services and Applications]-[RC-manager] on the Failover Cluster Management tree.
2. Right-click the resources on "Other Resources" on the "Summary of RC-manager" displayed in the middle of the [Failover Cluster
Management] window, and select [Delete] from the displayed menu.
3. Using the following procedure, delete the resources displayed in the "Server Name" of the "Summary of RC-manager" in the middle
of the [Failover Cluster Management] window.
a. Right-click "IP Address: IP_address", and select [Delete] from the displayed menu.
b. Right-click "Name: Network_name", and select [Delete] from the displayed menu.
When registering the admin LAN subnet, delete the "DHCP Service" using the following procedure.
4. Set the path of the "DHCP Service".
a. Right-click the resources of the "DHCP Service" on the "Summary of RC-manager" displayed in the middle of the [Failover
Cluster Management] window, and select [Properties] from the displayed menu.
The [New DHCP Service Properties] window will be displayed.
b. Configure the path on the [General] tab based on the following table.
Items              Value to Specify
Database path      %SystemRoot%\System32\dhcp\
Audit file path    %SystemRoot%\System32\dhcp\
Backup path        %SystemRoot%\System32\dhcp\backup\
5. Right-click the resources of the "DHCP Service" on the "Summary of RC-manager" displayed in the middle of the [Failover Cluster
Management] window, and select [Delete] from the displayed menu.
Uninstall the Manager
1. Refer to "3.1.2 Uninstallation [Windows]", and uninstall the manager on the primary node.
2. Allocate the shared disk to the secondary node.
Right-click [Services and Applications]-[RC-manager] on the Failover Cluster Management tree, and select [Move this service or
application to another node]-[1 - Move to node node_name] from the displayed menu.
The name of the secondary node is displayed for node_name.
3. Uninstall the manager on the secondary node.
Delete Shared Disk Files
Use Explorer to delete the "Drive_name:\Fujitsu\ROR\SVROR\" folder on the shared disk.
Delete Cluster Resources
Delete the manager "service or application".
Right-click [Services and Applications]-[RC-manager] on the Failover Cluster Management tree, and select [Delete] from the displayed
menu.
B.4.2 Releasing Configuration [Linux]
The flow of deleting cluster services (cluster applications) of managers is as indicated below.
Figure B.4 Flow of Deleting Manager Service Settings
Releasing of manager cluster services (cluster applications) is performed using the following procedure.
Perform releasing of configuration using OS administrator authority.
1. Stop cluster applications (Primary node)
Use the cluster system's operation management view (Cluster Admin) and stop the cluster service (cluster application) of manager
operations.
2. Delete cluster resources (Primary node)
Use the RMS Wizard of the cluster system, and delete manager operation resources registered on the target cluster service (cluster
application).
When a cluster service (cluster application) is in a configuration that only uses resources of Resource Orchestrator, also delete the
cluster service (cluster application).
On the RMS Wizard, if only deleting resources, delete the following:
- Cmdline resources (Only script definitions for Resource Orchestrator)
- Gls resources (When they are no longer used)
- Gds resources (When they are no longer used)
- Fsystem resources (The mount point for the shared disk for managers)
Release the RMS Wizard settings for any of the nodes comprising the cluster.
For details, refer to the PRIMECLUSTER manual.
3. Delete the HBA address rename setup service (Primary node/Secondary node)
When the HBA address rename setup service has been configured for a cluster system
When the HBA address rename setup service and managers in cluster systems have been configured, perform the following
procedure.
Not necessary when ServerView Deployment Manager is used in the same subnet.
a. Stopping the HBA address rename setup service (Secondary node)
Stop the HBA address rename setup service.
Execute the following command, and check if the process of the HBA address rename setup service is indicated.
# ps -ef | grep rcvhb | grep -v grep <RETURN>
When processes are output after the command above is executed, execute the following command and stop the HBA address
rename setup service. If no processes were output, this procedure is unnecessary.
# /etc/init.d/rcvhb stop <RETURN>
b. Releasing HBA address rename setup service Startup Settings (Secondary node)
Release the startup settings of the HBA address rename setup service.
Execute the following command.
# /opt/FJSVrcvhb/cluster/bin/rcvhbclunsetup <RETURN>
c. Deleting links (Secondary node)
If processes of the HBA address rename setup service were not indicated in step a., this procedure is unnecessary.
Execute the following command and delete symbolic links.
# rm symbolic_link <RETURN>
- Symbolic Links to Delete
- /var/opt/FJSVscw-common
- /var/opt/FJSVscw-tftpsv
- /etc/opt/FJSVscw-common
- /etc/opt/FJSVscw-tftpsv
d. Reconfiguring symbolic links
If processes of the HBA address rename setup service were not indicated in step a., this procedure is unnecessary.
Execute the following command, and reconfigure the symbolic links from the directory on the local disk for the directory on
the shared disk.
# ln -s shared_disk local_disk <RETURN>
For shared_disk, specify the shared disk in "Table B.10 Directories to Relink".
For local_disk, specify the local disk in "Table B.10 Directories to Relink".
Table B.10 Directories to Relink
Shared Disk -> Local Disk
Shared_disk_mount_point/Fujitsu/ROR/SVROR/var/opt/FJSVscw-common -> /var/opt/FJSVscw-common
Shared_disk_mount_point/Fujitsu/ROR/SVROR/var/opt/FJSVscw-tftpsv -> /var/opt/FJSVscw-tftpsv
Shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVscw-common -> /etc/opt/FJSVscw-common
Shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVscw-tftpsv -> /etc/opt/FJSVscw-tftpsv
e. Stopping the HBA address rename setup service (Primary node)
Stop the HBA address rename setup service.
On the primary node, execute the same command as used in step a.
f. Releasing the HBA address rename setup service Startup Settings (Primary node)
Release the startup settings of the HBA address rename setup service.
On the primary node, execute the same command as used in step b.
g. Deleting links (Primary node)
If processes of the HBA address rename setup service were not indicated in step e., this procedure is unnecessary.
On the primary node, execute the same command as used in step c.
h. Reconfiguring symbolic links
If processes of the HBA address rename setup service were not indicated in step e., this procedure is unnecessary.
On the primary node, execute the same command as used in step d.
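The check-then-stop pattern used in a. and e. (list the service's processes, and stop the service only when some are found) can be sketched generically. The example below is illustrative only: a background "sleep" stands in for the rcvhb processes, and a plain "kill" stands in for the /etc/init.d/rcvhb stop init script.

```shell
#!/bin/sh
# Illustrative sketch of the check-then-stop pattern in step 3-a.
sleep 60 &                 # stand-in for the HBA address rename processes
PID=$!

# Bracketing the first letter keeps grep from matching its own command
# line, an alternative to the "| grep -v grep" filter used above.
if ps -ef | grep "[s]leep 60" > /dev/null; then
  kill "$PID"              # stand-in for: /etc/init.d/rcvhb stop
  STOPPED=yes
else
  STOPPED=no               # no processes found, so no stop is needed
fi
echo "$STOPPED"
```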
4. Mount the shared disk (Secondary node)
When it can be confirmed that the shared disk for managers has been unmounted from the primary node and the secondary node,
mount the shared disk for managers on the secondary node.
5. Delete links to the shared disk (Secondary node)
Delete the symbolic links specified for the directories and files on the shared disk from the directories and files on the local disk of
the secondary node.
Execute the following command.
# rm symbolic_link <RETURN>
- Symbolic links to directories to delete
- /etc/opt/FJSVrcvmr/customize_data
- /etc/opt/FJSVrcvmr/opt/FJSVssmgr/current/certificate
- /etc/opt/FJSVrcvmr/sys/apache/conf
- /var/opt/FJSVrcvmr
- /etc/opt/FJSVscw-common (*1)
- /var/opt/FJSVscw-common (*1)
- /etc/opt/FJSVscw-tftpsv (*1)
- /var/opt/FJSVscw-tftpsv (*1)
- /etc/opt/FJSVscw-pxesv (*1)
- /var/opt/FJSVscw-pxesv (*1)
- /etc/opt/FJSVscw-deploysv (*1)
- /var/opt/FJSVscw-deploysv (*1)
- /etc/opt/FJSVscw-utils (*1)
- /var/opt/FJSVscw-utils (*1)
*1: Not necessary when ServerView Deployment Manager is used in the same subnet.
- Symbolic links to files to delete
- /opt/FJSVrcvmr/rails/config/rcx/rcxdb.pwd
- /etc/opt/FJSVrcvmr/rails/config/rcx_secret.key
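The deletions in this step can be sketched as a small shell loop. The sketch below is illustration only, demonstrated on a throwaway directory created with mktemp rather than on the real paths listed above; the -h test is an added safeguard (an assumption, not part of this manual) so that only symbolic links are removed, never real directories.

```shell
# Sketch of step 5 on a throwaway directory; substitute the real
# paths listed above when running on the secondary node.
demo=$(mktemp -d)
mkdir "$demo/shared_data"                         # stand-in for data on the shared disk
ln -s "$demo/shared_data" "$demo/customize_data"  # stand-in local-disk symbolic link

for link in "$demo/customize_data"; do
    # -h guards against deleting a real directory by mistake (added
    # safeguard); rm on a symbolic link removes only the link itself,
    # never the target it points to.
    [ -h "$link" ] && rm "$link"
done

[ -d "$demo/shared_data" ] && echo "target preserved"
```

Because rm follows this symbolic-link semantics, the data on the shared disk is untouched by this step.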
6. Restore backed up resources (Secondary node)
Restore the directories and files that were backed up when configuring the cluster environment.
Execute the following command. For source_restoration_file_name(source_restoration_directory_name), specify the names of the files and directories that were backed up when configuring the cluster environment, using names such as source_file_name(source_directory_name)_old. For restoration_target_file_name(restoration_target_directory_name), specify the file names or directory names corresponding to source_restoration_file_name(source_restoration_directory_name).
# mv -i source_restoration_file_name(source_restoration_directory_name) restoration_target_file_name(restoration_target_directory_name) <RETURN>
Restore the following directory names and file names.
- /opt/FJSVrcvmr/rails/config/rcx/rcxdb.pwd
- /etc/opt/FJSVrcvmr/opt/FJSVssmgr/current/certificate
- /etc/opt/FJSVrcvmr/rails/config/rcx_secret.key
- /etc/opt/FJSVrcvmr/sys/apache/conf
- /var/opt/FJSVrcvmr
- /etc/opt/FJSVscw-common (*1)
- /var/opt/FJSVscw-common (*1)
- /etc/opt/FJSVscw-tftpsv (*1)
- /var/opt/FJSVscw-tftpsv (*1)
- /etc/opt/FJSVscw-pxesv (*1)
- /var/opt/FJSVscw-pxesv (*1)
- /etc/opt/FJSVscw-deploysv (*1)
- /var/opt/FJSVscw-deploysv (*1)
- /etc/opt/FJSVscw-utils (*1)
- /var/opt/FJSVscw-utils (*1)
*1: Not necessary when ServerView Deployment Manager is used in the same subnet.
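The restore in this step is a rename-back of each backup to its original name. The sketch below demonstrates the pattern on throwaway names in a temporary directory; the _old suffix follows the backup naming convention described above, and rcxdb.pwd stands in for the real files listed in this step.

```shell
# Sketch of the step 6 restore on throwaway names.
demo=$(mktemp -d)
echo "backed-up contents" > "$demo/rcxdb.pwd_old"  # stand-in backup made during cluster setup

# -i prompts before overwriting an existing target; here the original
# name is free, so the backup is simply moved back into place.
mv -i "$demo/rcxdb.pwd_old" "$demo/rcxdb.pwd"
cat "$demo/rcxdb.pwd"
```

With -i, an accidental second run cannot silently overwrite an already-restored file.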
7. Change manager startup settings (Secondary node)
Perform settings so that the startup process of the manager is controlled by the OS, not the cluster system.
Execute the following command on the secondary node.
# /opt/FJSVrcvmr/cluster/bin/rcxclchkconfig unsetup <RETURN>
8. Unmount the shared disk (Secondary node)
Unmount the shared disk for managers from the secondary node.
9. Mount the shared disk (Primary node)
Mount the shared disk for managers on the primary node.
10. Delete links to the shared disk (Primary node)
On the local disk of the primary node, delete the symbolic links that point to the directories and files on the shared disk.
The directories and files to delete are the same as those for "Symbolic links to directories to delete" and "Symbolic links to files to
delete" in step 5.
11. Restore backed up resources (Primary node)
Restore the directories and files that were backed up when configuring the cluster environment.
Refer to step 6. for the procedure.
12. Delete directories on the shared disk (Primary node)
Delete the created directory "Shared_disk_mount_point/Fujitsu/ROR/SVROR" on the shared disk.
Execute the following command.
# rm -r shared_disk_mount_point/Fujitsu/ROR/SVROR <RETURN>
If confirmation prompts from the rm command are not needed, add the -f option. For details on the rm command, refer to the manual for the OS.
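The deletion in step 12 can be sketched as follows. The mount point is simulated with mktemp for illustration only; substitute the real shared_disk_mount_point when running on the primary node.

```shell
# Sketch of step 12 on a simulated mount point.
mnt=$(mktemp -d)                        # stand-in for shared_disk_mount_point
mkdir -p "$mnt/Fujitsu/ROR/SVROR/etc"

# -r removes the directory tree recursively; -f additionally
# suppresses the confirmation prompts mentioned above.
rm -rf "$mnt/Fujitsu/ROR/SVROR"

[ ! -e "$mnt/Fujitsu/ROR/SVROR" ] && echo "deleted"
```

Only the SVROR directory is removed; the rest of the shared disk contents are left in place.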
13. Change manager startup settings (Primary node)
Perform settings so that the startup process of the manager is controlled by the OS, not the cluster system.
Refer to step 7. for the command to execute on the primary node.
14. Unmount the shared disk (Primary node)
Unmount the shared disk for managers from the primary node.
15. Uninstall the manager (Primary node/Secondary node)
Refer to "3.1.3 Uninstallation [Linux]", and uninstall the managers on the primary node and the secondary node.
When releasing the cluster configuration and returning to a single configuration, uninstall the manager from only one of the nodes.
If the admin server settings were modified while operating the manager in the cluster environment, change the admin server settings
before using the manager in a single configuration.
For the method for changing the admin server settings, refer to "3.1 Changing Admin Server Settings" of the "User's Guide VE".
16. Start cluster applications (Primary node)
When there are other cluster services (cluster applications), use the cluster system's operation management view (Cluster Admin)
and start the cluster services (cluster applications).
B.5 Advisory Notes
This section explains advisory notes regarding the settings for managers in cluster operations, and their deletion.
Switching Cluster Services (Cluster Applications)
Events that occur when switching cluster services (cluster applications) cannot be displayed.
Also, "Message number 65529" may be displayed on the ROR console.
In that case, log in again.
For message details, refer to "Message number 65529" of the "Messages VE".
Troubleshooting Information
Collect troubleshooting information referring to the instructions in "15.1 Types of Troubleshooting Data" of the "Operation Guide VE".
At that time, execute the commands from the primary node of the admin server, and collect information from the primary node and managed
servers.
Commands
Do not use the start or stop subcommand of rcxadm mgrctl to start or stop the manager.
Right-click the manager "service or application" on the Failover Cluster Management tree, and select [Bring this service or application
online] or [Take this service or application offline] from the displayed menu.
Other commands can be executed as they are in normal cluster operation.
ROR Console
The registration state of server resources on the resource tree when an admin server is included in a chassis is as indicated below.
- The server resources being operated by the manager on the cluster node are displayed as "[Admin Server]".
- The server resources not being operated by the manager on the cluster node are displayed as "[Unregistered]".
Do not register the server resources of the cluster node that are displayed as "[Unregistered]".
Services (Daemons) Managed with PRIMECLUSTER [Linux]
The following services (daemons) are managed with the Check script provided by Resource Orchestrator.
- /etc/init.d/scwdepsvd
- /etc/init.d/scwpxesvd
- /etc/init.d/scwtftpd
- /opt/FJSVrcvmr/opt/FJSVssmgr/bin/cimserver
- /opt/FJSVrcvmr/sys/rcxtaskmgr
- /opt/FJSVrcvmr/sys/rcxmongrel1
- /opt/FJSVrcvmr/sys/rcxmongrel2
- /opt/FJSVrcvmr/sys/rcxhttpd
Glossary
access path
A logical path configured to enable access to storage volumes from servers.
active mode
The state where a managed server is performing operations.
Managed servers must be in active mode in order to use Auto-Recovery.
Move managed servers to maintenance mode in order to perform backup or restoration of system images, or collection or deployment
of cloning images.
active server
A physical server that is currently operating.
admin client
A terminal (PC) connected to an admin server, which is used to operate the GUI.
admin LAN
A LAN used to manage resources from admin servers.
It connects managed servers, storage, and network devices.
admin server
A server used to operate the manager software of Resource Orchestrator.
affinity group
A grouping of the storage volumes allocated to servers. A function of ETERNUS.
Equivalent to the LUN mapping of EMC.
agent
The section (program) of Resource Orchestrator that operates on managed servers.
Auto-Recovery
A function which continues operations by automatically switching over the system image of a failed server to a spare server and
restarting it in the event of server failure.
This function can be used when managed servers are in a local boot configuration, SAN boot configuration, or a configuration such
as iSCSI boot where booting is performed from a disk on a network.
- When using a local boot configuration
The system is recovered by restoring a backup of the system image of the failed server onto a spare server.
- When booting from a SAN or a disk on a LAN
The system is restored by having the spare server inherit the system image on the storage.
Also, when a VLAN is set for the public LAN of a managed server, the VLAN settings of adjacent LAN switches are automatically
switched to those of the spare server.
BACS (Broadcom Advanced Control Suite)
An integrated GUI application (comprising applications such as BASP) that creates teams from multiple NICs, and provides
functions such as load balancing.
BASP (Broadcom Advanced Server Program)
LAN redundancy software that creates teams of multiple NICs, and provides functions such as load balancing and failover.
blade server
A compact server device with a thin chassis that can contain multiple server blades, and has low power consumption.
As well as server blades, LAN switch blades, management blades, and other components used by multiple server blades can be mounted
inside the chassis.
blade type
A server blade type.
Used to distinguish the number of server slots used and servers located in different positions.
BladeViewer
A GUI that displays the status of blade servers in a style similar to a physical view and enables intuitive operation.
BladeViewer can also be used for state monitoring and operation of resources.
BMC (Baseboard Management Controller)
A Remote Management Controller used for remote operation of servers.
boot agent
An OS for disk access that is distributed from the manager to managed servers in order to boot them when the network is started during
image operations.
CA (Channel Adapter)
An adapter card that is used as the interface for server HBAs and fibre channel switches, and is mounted on storage devices.
chassis
A chassis used to house server blades and partitions.
Sometimes referred to as an enclosure.
cloning
Creation of a copy of a system disk.
cloning image
A backup of a system disk, which does not contain server-specific information (system node name, IP address, etc.), made during
cloning.
When deploying a cloning image to the system disk of another server, Resource Orchestrator automatically changes server-specific
information to that of the target server.
Cloud Edition
The edition which can be used to provide private cloud environments.
Domain
A system that is divided into individual systems using partitioning. Also used to indicate a partition.
DR Option
The option that provides the function for remote switchover of servers or storage in order to perform disaster recovery.
end host mode
A mode in which each downlink port can communicate with only one fixed uplink port, and communication between uplink
ports is blocked.
environmental data
Measured data regarding the external environments of servers managed using Resource Orchestrator.
Measured data includes power data collected from power monitoring targets.
ESC (ETERNUS SF Storage Cruiser)
Software that supports stable operation of multi-vendor storage system environments involving SAN, DAS, or NAS. Provides
configuration management, relation management, trouble management, and performance management functions to integrate storage
related resources such as ETERNUS.
Express
The edition which provides server registration, monitoring, and visualization.
FC switch (Fibre Channel Switch)
A switch that connects Fibre Channel interfaces and storage devices.
fibre channel switch blade
A fibre channel switch mounted in the chassis of a blade server.
GLS (Global Link Services)
Fujitsu network control software that enables high availability networks through the redundancy of network transmission channels.
GSPB (Giga-LAN SAS and PCI_Box Interface Board)
A board which mounts onboard I/O for two partitions and a PCIe (PCI Express) interface for a PCI box.
GUI (Graphical User Interface)
A user interface that displays pictures and icons (pictographic characters), enabling intuitive and easily understandable operation.
HA (High Availability)
The concept of using redundant resources to prevent suspension of system operations due to single problems.
hardware initiator
A controller which issues SCSI commands to request processes.
In iSCSI configurations, NICs fit into this category.
hardware maintenance mode
In the maintenance mode of PRIMEQUEST servers, a state other than Hot System Maintenance.
HBA (Host Bus Adapter)
An adapter for connecting servers and peripheral devices.
Mainly used to refer to the FC HBAs used for connecting storage devices using Fibre Channel technology.
HBA address rename setup service
The service that starts managed servers that use HBA address rename in the event of failure of the admin server.
HBAAR (HBA address rename)
I/O virtualization technology that enables changing of the actual WWN possessed by an HBA.
host affinity
A definition of the server HBA that is set for the CA port of the storage device and the accessible area of storage.
This function associates the logical volumes inside the storage device with the hosts (HBAs) permitted to access them, and also
acts as a security mechanism internal to the storage device.
Hyper-V
Virtualization software from Microsoft Corporation.
Provides a virtualized infrastructure on PC servers, enabling flexible management of operations.
I/O virtualization option
An optional product that is necessary to provide I/O virtualization.
The WWNN and MAC addresses provided are guaranteed by Fujitsu Limited to be unique.
Necessary when using HBA address rename.
IBP (Intelligent Blade Panel)
One of operation modes used for PRIMERGY switch blades.
This operation mode can be used for coordination with ServerView Virtual I/O Manager (VIOM), and relations between server blades
and switch blades can be easily and safely configured.
ILOM (Integrated Lights Out Manager)
The name of the Remote Management Controller for SPARC Enterprise T series servers.
image file
A collective term for system images and cloning images.
IPMI (Intelligent Platform Management Interface)
IPMI is a set of common interfaces for the hardware that is used to monitor the physical conditions of servers, such as temperature,
power voltage, cooling fans, power supply, and chassis.
These functions provide information that enables system management, recovery, and asset management, which in turn leads to reduction
of overall TCO.
IQN (iSCSI Qualified Name)
Unique names used for identifying iSCSI initiators and iSCSI targets.
iRMC (integrated Remote Management Controller)
The name of the Remote Management Controller for Fujitsu's PRIMERGY servers.
iSCSI
A standard for using the SCSI protocol over TCP/IP networks.
LAN switch blades
A LAN switch that is mounted in the chassis of a blade server.
license
The rights to use specific functions.
Users can use specific functions by purchasing a license for the function and registering it on the manager.
link aggregation
Function used to multiplex multiple ports and use them as a single virtual port.
With this function, if one of the multiplexed ports fails its load can be divided among the other ports, and the overall redundancy of
ports improved.
logical volume
A logical disk that has been divided into multiple partitions.
LSB (Logical System Board)
A system board that is allocated a logical number (LSB number) so that it can be recognized from the domain, during domain
configuration.
maintenance mode
The state where operations on managed servers are stopped in order to perform maintenance work.
In this state, the backup and restoration of system images and the collection and deployment of cloning images can be performed.
However, when using Auto-Recovery it is necessary to change from this mode to active mode. When in maintenance mode it is not
possible to switch over to a spare server if a server fails.
managed server
A collective term referring to a server that is managed as a component of a system.
management blade
A server management unit that has a dedicated CPU and LAN interface, and manages blade servers.
Used for gathering server blade data, failure notification, power control, etc.
Management Board
The PRIMEQUEST system management unit.
Used for gathering information such as failure notification, power control, etc. from chassis.
manager
The section (program) of Resource Orchestrator that operates on admin servers.
It manages and controls resources registered with Resource Orchestrator.
master slot
A slot that is recognized as a server when a server that occupies multiple slots is mounted.
multi-slot server
A server that occupies multiple slots.
NAS (Network Attached Storage)
A collective term for storage that is directly connected to a LAN.
network device
The unit used for registration of network devices.
L2 switches and firewalls fit into this category.
network map
A GUI function for graphically displaying the connection relationships of the servers and LAN switches that compose a network.
network view
A window that displays the connection relationships and status of the wiring of a network map.
NFS (Network File System)
A system that enables the sharing of files over a network in Linux environments.
NIC (Network Interface Card)
An interface used to connect a server to a network.
OS
The OS used by an operating server (a physical OS or VM guest).
PDU (Power Distribution Unit)
A device for distributing power (such as a power strip).
Resource Orchestrator uses PDUs with current value display functions as Power monitoring devices.
physical LAN segment
A physical LAN that servers are connected to.
Servers are connected to multiple physical LAN segments that are divided based on their purpose (public LANs, backup LANs, etc.).
Physical LAN segments can be divided into multiple network segments using VLAN technology.
physical OS
An OS that operates directly on a physical server without the use of server virtualization software.
physical server
The same as a "server". Used when it is necessary to distinguish actual servers from virtual servers.
pin-group
This is a group, set with the end host mode, that has at least one uplink port and at least one downlink port.
Pool Master
On Citrix XenServer, it indicates one VM host belonging to a Resource Pool.
It handles setting changes and information collection for the Resource Pool, and also performs operation of the Resource Pool.
For details, refer to the Citrix XenServer manual.
port backup
A function for LAN switches which is also referred to as backup port.
port VLAN
A VLAN in which the ports of a LAN switch are grouped, and each LAN group is treated as a separate LAN.
port zoning
The division of ports of fibre channel switches into zones, and setting of access restrictions between different zones.
power monitoring devices
Devices used by Resource Orchestrator to monitor the amount of power consumed.
PDUs and UPSs with current value display functions fit into this category.
power monitoring targets
Devices from which Resource Orchestrator can collect power consumption data.
pre-configuration
Performing environment configuration for Resource Orchestrator on another separate system.
primary server
The physical server that is switched from when performing server switchover.
public LAN
A LAN used for operations by managed servers.
Public LANs are established separately from admin LANs.
rack
A case designed to accommodate equipment such as servers.
rack mount server
A server designed to be mounted in a rack.
RAID (Redundant Arrays of Inexpensive Disks)
Technology that realizes high-speed and highly-reliable storage systems using multiple hard disks.
RAID management tool
Software that monitors disk arrays mounted on PRIMERGY servers.
The RAID management tool differs depending on the model or the OS of PRIMERGY servers.
Remote Management Controller
A unit used for managing servers.
Used for gathering server data, failure notification, power control, etc.
- For Fujitsu PRIMERGY servers
iRMC2
- For SPARC Enterprise
ILOM (T series servers)
XSCF (M series servers)
- For HP servers
iLO2 (integrated Lights-Out)
- For Dell/IBM servers
BMC (Baseboard Management Controller)
Remote Server Management
A PRIMEQUEST feature for managing partitions.
Reserved SB
Indicates the new system board that will be embedded to replace a failed system board if the hardware of a system board embedded
in a partition fails and it is necessary to disconnect the failed system board.
resource
Collective term or concept that refers to the physical resources (hardware) and logical resources (software) from which a system is
composed.
resource pool
On Citrix XenServer, it indicates a group of VM hosts.
For details, refer to the Citrix XenServer manual.
resource tree
A tree that displays the relationships between the hardware of a server and the OS operating on it using hierarchies.
ROR console
The GUI that enables operation of all functions of Resource Orchestrator.
SAN (Storage Area Network)
A specialized network for connecting servers and storage.
server
A computer (operated with one operating system).
server blade
A server blade has the functions of a server integrated into one board.
They are mounted in blade servers.
server management unit
A unit used for managing servers.
A management blade is used for blade servers, and a Remote Management Controller is used for other servers.
server name
The name allocated to a server.
server virtualization software
Basic software which is operated on a server to enable use of virtual machines. Used to indicate the basic software that operates on a
PC server.
ServerView Deployment Manager
Software used to collect and deploy server resources over a network.
ServerView Operations Manager
Software that monitors a server's (PRIMERGY) hardware state, and notifies of errors by way of the network.
ServerView Operations Manager was previously known as ServerView Console.
ServerView RAID
One of the RAID management tools for PRIMERGY.
ServerView Update Manager
This is software that performs jobs such as remote updates of BIOS, firmware, drivers, and hardware monitoring software on servers
being managed by ServerView Operations Manager.
ServerView Update Manager Express
Software that performs batch updates of BIOS, firmware, drivers, and hardware monitoring software.
Insert the ServerView Suite DVD1 or ServerView Suite Update DVD into the server requiring updating and start it.
Single Sign-On
A mechanism that allows external software to be used without further login operations once authentication has been performed.
slave slot
A slot that is not recognized as a server when a server that occupies multiple slots is mounted.
SMB (Server Message Block)
A protocol that enables the sharing of files and printers over a network.
SNMP (Simple Network Management Protocol)
A communications protocol to manage (monitor and control) the equipment that is attached to a network.
software initiator
An initiator processed by software using OS functions.
Solaris Container
Solaris server virtualization software.
On Solaris servers, it is possible to configure multiple virtual Solaris servers that are referred to as a Solaris zone.
Solaris zone
A software partition that virtually divides a Solaris OS space.
SPARC Enterprise Partition Model
A SPARC Enterprise model which has a partitioning function to enable multiple system configurations, separating a server into multiple
areas with operating OS's and applications in each area.
spare server
A server which is used to replace a failed server when server switchover is performed.
storage blade
A blade-style storage device that can be mounted in the chassis of a blade server.
storage unit
Used to indicate the entire secondary storage as one product.
switchover state
The state in which switchover has been performed on a managed server, but neither failback nor continuation has been performed.
System Board
A board which can mount up to 2 Xeon CPUs and 32 DIMMs.
system disk
The disk on which the programs (such as the OS) and files necessary for the basic functions of servers (including booting) are installed.
system image
A copy of the contents of a system disk made as a backup.
Different from a cloning image as changes are not made to the server-specific information contained on system disks.
tower server
A standalone server with a vertical chassis.
UNC (Universal Naming Convention)
Notational system for Windows networks (Microsoft networks) that enables specification of shared resources (folders, files, shared
printers, shared directories, etc.).
Example
\\hostname\dir_name
UPS (Uninterruptible Power Supply)
A device containing rechargeable batteries that temporarily provides power to computers and peripheral devices in the event of power
failures.
Resource Orchestrator uses UPSs with current value display functions as power monitoring devices.
URL (Uniform Resource Locator)
The notational method used for indicating the location of information on the Internet.
VIOM (ServerView Virtual-IO Manager)
The name of both the I/O virtualization technology used to change the MAC addresses of NICs and the software that performs the
virtualization.
Changes to values of WWNs and MAC addresses can be performed by creating a logical definition of a server, called a server profile,
and assigning it to a server.
Virtual Edition
The edition that can use the server switchover function.
Virtual I/O
Technology that virtualizes the relationship of servers and I/O devices (mainly storage and network) thereby simplifying the allocation
of and modifications to I/O resources to servers, and server maintenance.
For Resource Orchestrator it is used to indicate HBA address rename and ServerView Virtual-IO Manager (VIOM).
virtual server
A server that operates on a VM host as a virtual machine.
virtual switch
A function provided by server virtualization software to manage networks of VM guests as virtual LAN switches.
The relationships between the virtual NICs of VM guests and the NICs of the physical servers used to operate VM hosts can be managed
using operations similar to those of the wiring of normal LAN switches.
VLAN (Virtual LAN)
A splitting function, which enables the creation of virtual LANs (seen as differing logically by software) by grouping ports on a LAN
switch.
Using a Virtual LAN, network configuration can be performed freely without the need for modification of the physical network
configuration.
VLAN ID
A number (between 1 and 4,094) used to identify VLANs.
VLAN ID 0 is reserved for priority-tagged frames, and 4,095 (FFF in hexadecimal) is reserved for implementation use.
VM (Virtual Machine)
A virtual computer that operates on a VM host.
VM guest
A virtual server that operates on a VM host, or an OS that is operated on a virtual machine.
VM Home Position
The VM host that is home to VM guests.
VM host
A server on which server virtualization software is operated, or the server virtualization software itself.
VM maintenance mode
One of the settings of server virtualization software, that enables maintenance of VM hosts.
For example, when using high availability functions (such as VMware HA) of server virtualization software, by setting VM maintenance
mode it is possible to prevent the moving of VM guests on VM hosts undergoing maintenance.
For details, refer to the manuals of the server virtualization software being used.
VM management software
Software for managing multiple VM hosts and the VM guests that operate on them.
Provides value adding functions such as movement between the servers of VM guests (migration).
VMware
Virtualization software from VMware Inc.
Provides a virtualized infrastructure on PC servers, enabling flexible management of operations.
Web browser
A software application that is used to view Web pages.
WWN (World Wide Name)
A 64-bit address allocated to an HBA.
Refers to a WWNN or a WWPN.
WWNN (World Wide Node Name)
The WWN set for a node.
The Resource Orchestrator HBA address rename sets the same WWNN for the fibre channel port of the HBA.
WWPN (World Wide Port Name)
The WWN set for a port.
The Resource Orchestrator HBA address rename sets a WWPN for each fibre channel port of the HBA.
WWPN zoning
The division of ports into zones based on their WWPN, and setting of access restrictions between different zones.
Xen
A type of server virtualization software.
XSB (eXtended System Board)
Unit for domain creation and display, composed of physical components.
XSCF (eXtended System Control Facility)
The name of the Remote Management Controller for SPARC Enterprise M series servers.
zoning
A function that provides security for Fibre Channels by grouping the Fibre Channel ports of a Fibre Channel switch into zones, and
only allowing access to ports inside the same zone.