MasterScope Virtual DataCenter
Automation v4.0
First Step Guide
1st Edition
April, 2017
NEC Corporation
Disclaimer
The copyrighted information noted in this document shall belong to NEC Corporation.
Copying or revising this document, in whole or in part, is strictly prohibited without the
permission of NEC Corporation.
This document may be changed without prior notice.
NEC Corporation shall not be liable for any technical or editing errors or omissions in this
document.
NEC Corporation shall not be liable for the accuracy, usability, or certainty of information noted
in this document.
Copyright Information
• SigmaSystemCenter, MasterScope, Network Manager, NEC Storage, ESMPRO,
EXPRESSBUILDER, EXPRESSSCOPE, SIGMABLADE, UNIVERGE, and
ProgrammableFlow are registered trademarks of NEC Corporation.
• VMware is a trademark or registered trademark of VMware, Inc. in the United States and other
countries.
• Microsoft, Windows, Windows Server, Windows Vista, Internet Explorer, SQL Server, and
Hyper-V are trademarks or registered trademarks of Microsoft Corporation in the United States
of America and other countries.
• Linux is a trademark or registered trademark of Linus Torvalds in the United States of America
and other countries.
• Red Hat is a trademark or registered trademark of Red Hat, Inc. in the United States and other
countries.
• Intel and Itanium are trademarks or registered trademarks of Intel Corporation in the United
States of America and other countries.
• Apache, Apache Tomcat, and Tomcat are trademarks or registered trademarks of Apache
Software Foundation.
• Oracle, Solaris, Java, and WebLogic are registered trademarks of Oracle Corporation and its
subsidiaries and affiliates in the United States of America and other countries.
• SAP is a trademark or registered trademark of SAP AG in Germany and other countries.
• Fortinet, FortiGate, FortiClient, and FortiGuard are registered trademarks of Fortinet, Inc. Other
Fortinet products contained in this guide are trademarks of Fortinet, Inc.
• Thunder Series and AX Series of A10 Networks are registered trademarks of A10 Networks, Inc.
• Catalyst, IOS, Cisco IOS, Cisco, Cisco Systems, and Cisco logo are trademarks or registered
trademarks of Cisco Systems, Inc. in the United States of America and other countries.
• F5, F5 Networks, F5 logo, and product names in the text are trademarks or registered
trademarks of F5 Networks, Inc. in the United States of America and other countries.
Other system names, company names, and product names in this document are trademarks or
registered trademarks of their respective companies.
The ® and ™ marks are not included in this document.
Notes on exporting this product
If this product (including its software) is subject to regulation under the Foreign Exchange and
Foreign Trade Law, it will be necessary to follow the procedures required by this law when exporting
this product, such as obtaining an export license from the Japanese Government. If you require
documents from NEC in order to obtain an export license, please contact the dealer where you
purchased your MasterScope product, or your local NEC sales office.
Preface
Target Readers and Objective
This document is intended for users who are using this product for the first time. It describes the product
overview, system design method, and latest operating environment of Virtual DataCenter
Automation.
Overview of the Document
The chapters in this document mainly describe Virtual DataCenter Automation, with Network
Automation described in a supplementary fashion. If Network Automation is not explicitly described,
assume that the explanation is the same as that of Virtual DataCenter Automation.
Notation Rules of This Document
This document describes precautions, important items, and related information as follows.
Note
Indicates precautions, warnings, and supplementary notes for the function, operation, and setting
Tip
Indicates the location of reference destination information
Notation rules
In addition, the following notation rules are applied in this document.
Notation: XXXXX
How to use: Used before and after the items (text boxes, check boxes, tabs, etc.) displayed in a dialog box, or used for screen names (dialog boxes, windows, and others).
Example: Enter the machine name in the Machine name text box. / All check box / Setting window

Notation: "XXXXX"
How to use: Used before and after the names of other manuals.
Example: "Installation Guide"

Notation: [ ] in a command line
How to use: Indicates that the specification of the value in [ ] can be omitted.
Example: add [/a] Gr1

Notation: Monospace font (courier new)
How to use: Indicates the outputs (messages, prompts, and others) from the command line or system.
Example: Perform the following command. / replace Gr1

Notation: Italicized monospace font (courier new)
How to use: Indicates the items to be replaced with a valid value entered by the user. When a space is included in the value, place " " before and after the value.
Example: add GroupName / InstallPath="Install Path"

Notation: < >
Example: <Install DVD>
Contents
Chapter 1. Virtual DataCenter Automation.............................................................................. 1
1.1 What is Virtual DataCenter Automation ? .............................................................................2
1.2 What is Virtual DataCenter Automation Standard Edition ? ..................................................3
1.3 What is Network Automation ? .............................................................................................3
1.4 Virtual DataCenter Automation Capabilities .........................................................................3
1.4.1 Resource management.................................................................................................4
1.4.2 Provisioning and orchestration ....................................................................................5
1.4.3 Monitoring ..................................................................................................................7
1.4.4 Asset Management ....................................................................................................14
1.4.5 Custom Monitoring for Tenants.................................................................................16
1.4.6 Asset Management for Tenants .................................................................................22
1.4.7 Software Repository..................................................................................................24
1.4.8 Integrated ID Management........................................................................................26
1.4.9 Provisioning of Physical Machines............................................................................26
1.4.10 Visualization of Tenant Networks............................................................................26
1.5 Network Automation Capabilities .......................................................................................27
Chapter 2. Virtual DataCenter Automation Configuration .................................................. 28
2.1 Management Target of Virtual DataCenter Automation.......................................................29
2.1.1 Network ....................................................................................................................29
2.1.2 Storage ......................................................................................................................30
2.1.3 Server........................................................................................................................30
2.2 System Management Domain..............................................................................................31
2.2.1 Overview of Management Domain............................................................................31
2.2.2 Pod............................................................................................................................31
2.2.3 Zone..........................................................................................................................31
2.2.4 P-Flow Domain .........................................................................................................31
2.2.5 Site............................................................................................................................32
2.3 Virtual DataCenter Automation Basic Configuration ..........................................................32
2.3.1 Consisting Components.............................................................................................32
2.3.2 Installed Functions ....................................................................................................33
2.4 Server configuration of Virtual DataCenter Automation......................................................34
2.4.1 For Single Pod...........................................................................................................34
2.4.2 For Single Pod (VM Monitoring Server Configuration) ............................................35
2.4.3 For Multiple Pods......................................................................................................36
2.4.4 For Multiple Pods (Zone Configuration) ...................................................................37
2.4.5 For Multiple Sites......................................................................................................38
2.5 Virtual DataCenter Automation License..............................................................................39
Chapter 3. System Design.......................................................................................................... 40
3.1 Studying Network Configuration (Standard Configuration) ................................................41
3.1.1 Public Cloud .............................................................................................................41
3.1.2 Private Cloud ............................................................43
3.1.3 On-premises Cloud....................................................45
3.1.4 Utilization of the P-Flow Network ............................................................47
3.1.5 Multiple Pods ............................................................49
3.1.6 Multiple Sites ............................................................50
3.1.7 IP Address Design.....................................................52
3.1.8 User Authentication for Network Devices .................................................53
3.2 Customization of Network Configuration............................................................................53
3.2.1 Public Cloud .............................................................................................................53
3.3 Studying Storage Configuration ..........................................................................................54
3.3.1 Storage Configuration ...............................................................................................54
3.3.2 Point of View of Storage Pool ...................................................................................56
3.3.3 Study point for storage configuration ........................................................................56
3.3.4 Storage device...........................................................................................................57
3.3.5 Storage capacity ........................................................................................................58
3.3.6 Extendibility..............................................................................................................59
3.3.7 Availability................................................................................................................60
3.3.8 Functionality .............................................................................................................60
3.3.9 Backing up ................................................................................................................61
3.4 Studying Configuration of Virtualization Base ....................................................................63
3.4.1 Configuration Examples of Virtualization Base in VMware vCenter Server
Management Environment ...........................................................................................63
3.4.2 Configuration Examples of Virtualization Base in Hyper-V Environment.................64
3.4.3 Configuration Examples of Virtualization Base in KVM Environment......................65
3.5 Studying VM Template .......................................................................................................66
3.5.1 Linkage between VM Template and Resource Pool...................................................69
3.5.2 VM Template Creation Policy ...................................................................................72
3.5.3 Using VM Template ..................................................................................................73
3.5.4 Sharing of the VM Template .....................................................................................74
3.6 Studying DC Resource Group Configuration ......................................................................76
3.6.1 DC Resource Group ..................................................................................................76
3.7 Studying Resource Pool Configuration ...............................................................................77
3.7.1 Resource Pool ...........................................................................................................77
3.7.2 Resource Pool and Sub-pool......................................................................................78
3.7.3 Configuration Examples of Sub-pool ........................................................................79
3.8 Studying Resource Pool for Each Cloud..............................................................................81
3.8.1 Public Cloud .............................................................................................................81
3.8.2 Private Cloud ............................................................................................................83
3.8.3 On-premises Cloud....................................................................................................84
Chapter 4. Design of Operation Management Server Configuration ................................... 85
4.1 Studying ID Management ...................................................................................................86
4.1.1 Users Handled in ID Management.............................................................................86
4.1.2 Precautions for ID Management ................................................................................86
4.1.3 ID Management Configuration..................................................................................86
4.2 Studying DB Configuration.................................................................................................87
4.2.1 Point of View for DB Configuration..........................................................................87
4.2.2 DB Configuration......................................................................................................88
4.3 Studying Management of 100000 Virtual Machines............................................................89
4.3.1 Point of View for Management of 100000 Virtual Machines .....................................89
4.3.2 Configuration of Management of 100000 Virtual Machines ......................................89
Chapter 5. Design of Optional Function .................................................................................. 91
5.1 Studying Distribution Package Configuration .....................................................................92
5.2 Studying Physical Machine Configuration ..........................................................................93
5.2.1 Physical Machine Configuration ...............................................................................93
5.2.2 Physical machines .....................................................................................................93
5.2.3 Network ....................................................................................................................94
5.2.4 Storage ......................................................................................................................94
5.2.5 OS Image ..................................................................................................................94
Chapter 6. Operating Environments/System Requirements ................................................. 95
6.1 Virtual DataCenter Automation Version Information ..........................................................96
6.2 Global Management Server.................................................................................................96
6.3 Management Server ............................................................................................................97
6.4 VM Monitoring Server........................................................................................................98
6.5 Managed Machine (Virtual Base) ........................................................................................99
6.5.1 System Requirements................................................................................................99
6.5.2 Virtual Machine Base ................................................................................................99
6.5.3 Managed Guest OS .................................................................................................100
6.6 Managed Machine (Physical Machine) .............................................................................101
6.7 Management Agent ...........................................................................................................102
6.8 Console .............................................................................................................................103
6.9 ID Management Server .....................................................................................................103
6.10 DB Server .......................................................................................................................104
6.11 Service Governor ............................................................................................................105
6.12 Network Devices.............................................................................................................106
6.13 Storage ............................................................................................................................107
6.14 Distributed Middleware...................................................................................................107
6.15 Monitored Middleware....................................................................................................108
Appendix A. Revision History................................................................................................. 110
Appendix B. Manual System....................................................................................................111
Appendix C. Managed Guest OS require packages.............................................................. 113
Appendix D. License Information .......................................................................................... 115
Glossary...................................................................................................................................... 116
List of Figures
Figure 3-1 Gold level ...............................................................................................................59
Figure 3-2 Silver/Bronze level .................................................................................................59
Figure 4-1 Configuration example (local allocation of DB on servers).....................................88
Figure 4-2 Configuration example (allocation of the DB server) ..............................................89
List of Tables
Table 3-1 [Storage SAN configuration] ..................................................................................55
Table 3-2 [Storage NAS configuration] ..................................................................................55
Table 6-2 Platform compatible with the remote host for each Oracle Database version ..............109
Table 6-3 Platform compatible with the remote host for each application version.................109
Chapter 1. Virtual DataCenter Automation
This section gives a product overview of Virtual DataCenter Automation.
Contents
1.1 What is Virtual DataCenter Automation ? ....................................................................................2
1.2 What is Virtual DataCenter Automation Standard Edition ? .........................................................3
1.3 What is Network Automation ? ....................................................................................................3
1.4 Virtual DataCenter Automation Capabilities ................................................................................3
1.5 Network Automation Capabilities ..............................................................................................27
1.1 What is Virtual DataCenter Automation ?
Virtual DataCenter Automation is software that makes it possible to manage the infrastructure of a data
center in the cloud.
Data centers were once managed without virtualization. However, virtualization has become
widely used to ensure system flexibility and improve work efficiency, and more recently the cloud has been
attracting attention as a means of centrally managing the IT infrastructure of the data center.
The IT system managing the data center handles two types of requirements: requests for IT resources and
the provision of those resources.
Before virtualization was introduced, the cycle of requesting and providing IT resources generally took a
few months. Its introduction has cut the required time to a few weeks.
Even so, virtualization only makes the phase in which a server is prepared to provide the IT resource
more efficient. To automate the overall cycle of IT resource request and provision, the cloud is crucial.
A cloud requires a service portal as the means of requesting IT resources and infrastructure management as
the means of providing them. Virtual DataCenter Automation is the software that makes
it possible to manage this infrastructure.
1.2 What is Virtual DataCenter Automation
Standard Edition ?
Virtual DataCenter Automation provides the functions required for cloud operations as an all-in-one package.
Virtual DataCenter Automation Standard Edition, in contrast, allows users to select the desired functions
according to their use cases.
The current version of Virtual DataCenter Automation Standard Edition can be used in cooperation
with the following components related to Virtual DataCenter Automation.
• Resource management function (SigmaSystemCenter)
• Monitoring function (SystemManager G)
Operations can be automated by controlling these components through the Virtual DataCenter Automation
Standard Edition portal.
1.3 What is Network Automation ?
Network Automation is a product that includes only the network orchestration functions of Virtual
DataCenter Automation, and it can be incorporated even into systems in which IT equipment from multiple vendors
is already in operation. This allows networks that utilize SDN to be automatically constructed and
operated even if the system combines OpenFlow and existing networks.
1.4 Virtual DataCenter Automation Capabilities
Virtual DataCenter Automation automates systemwide operations in addition to providing advanced
infrastructure management. It achieves a cloud system with superior usability and operability by
linking with a service portal that provides a self-service infrastructure.
The infrastructure management standardizes and automates operations through the centralized
management of ICT resources. Virtual DataCenter Automation achieves the best infrastructure
management for cloud systems with its advanced resource pool management and operations
automation functions.
Note
The service portal shown in the figure is an example; it differs from a portal built with the bundled Virtual DataCenter
Automation portal.
1.4.1 Resource management
Diverse ICT resources (servers, networks, and storage units) and multiple virtual infrastructures are
centrally managed with the cloud, and these resources must be grouped into a resource pool. Through
this centralization, the operation of diverse ICT resources can be standardized. Virtual DataCenter
Automation integrates a wide range of ICT resources, including existing assets. It enables you to
divide resource pools for each division and department and also provides a sub-pool function in
addition to standardizing operations. Flexible, practical cloud environment operations that maintain
ICT resources for each department can be achieved by limiting management privileges to a fixed
resource pool range.
1. Resource pool management
The resource pool is a function that collects and centrally manages resources including CPUs,
memory devices, storage units, and networks, and dynamically distributes these resources to
each job. In Virtual DataCenter Automation, the resource pool is used to operate resources
efficiently, so that costs can be reduced. You can visually check the usage status of each
resource from the resource pool list screen of the web console.
2. Controlling resources
Start and stop, snapshot creation, console connection, and other operations can be controlled
for a virtual machine in Virtual DataCenter Automation.
3. Optimized Placement
Virtual DataCenter Automation maintains the proper load of a virtual machine server by
monitoring the load status of the virtual machine server.
If the load is high, it is adjusted by live migration of virtual machines from the heavily loaded virtual
machine server to other virtual machine servers with lower loads.
If the high load is not alleviated by moving virtual machines, SigmaSystemCenter can start and use
new virtual machine servers.
1.4.2 Provisioning and orchestration
Another important element of infrastructure management is the flexible extraction of
ICT resources and the automated configuration of the extracted resources. MasterScope Virtual DataCenter
Automation not only configures virtual machines and assigns physical machines, but also automates
a series of settings needed before the machines can actually be used for work, such as storage allocation,
network settings, application installation, and monitoring settings. This frees ICT resource
administrators from setting-related tasks, and users can start using the resources at the time they are
extracted. In addition, provisioning scenarios (automation procedures) verified by NEC are provided
with the standard product, and these practical configuration scenarios can be used immediately.
These scenarios can be easily customized to specific jobs by using the editing function from the GUI.
1. Provisioning/activation
Virtual DataCenter Automation can configure the machine to be managed, manage
configuration information, change the configuration, and execute autonomous restoration from
a machine failure. Users obtain permission to use resources through the activation process that
follows provisioning.
2. Request management
Request management receives orchestration requests through the service governor and
executes the scenario corresponding to the request. This function manages the progress and
results of the received request and provides a means for cancelling the request depending on
the situation. The orchestration results are asynchronously returned to the requester.
3. Schedule management
Schedule management controls the starting and stopping of monitoring for each category of the
server monitoring and message monitoring functions. Schedule management also manages the schedules
of scenarios that must be executed regularly, and controls scenarios that are executed not on a
regular basis but at a specified time.
Schedule management can also automatically stop monitoring when servers are down, such as
at night and during maintenance, and automatically start monitoring when the servers start.
4. Controlling scenario
Scenarios required by the orchestration function (such as virtual machine creation, deletion,
and reconfiguration) are executed. Workflows are executed from request management and
schedule management. Work is automated by defining process flows for each job.
Functions specific to scenario control, such as job dates and non-flow execution, enable
advanced operational flows to be controlled and monitored.
Support for non-flow execution in particular makes it possible to monitor and control
workflows whose start and stop can be instructed at any time, in addition to supporting traditional
workflows that follow a fixed procedure.
5. Controlling network devices
Streamlines the settings of the network devices (firewall, load balancer, SSL-VPN device, L2
switch, etc.) in the data center so that they can be configured as a virtual network for each tenant.
• Creation of tenant firewall and load balancer
• Setting of SSL-VPN
• Creation of VLANs (production VLAN, tenant administration VLAN (Note: To be
described later), etc.)
• etc.
6. Controlling storage devices
Virtual DataCenter Automation is a product that automates virtual data centers. The storage
settings scenario improves the efficiency of installing new storage units and setting up
additional storage units, thereby reducing setup costs.
The following settings can be specified as the basic functions of the vDCA storage settings
scenario:
• Settings when the storage unit is first installed
• Settings when storage units are added
• Settings of a storage unit when a VM server is added
7. Automatically discovering managers
By setting monitoring definitions in advance for the management target machines to be provisioned,
provisioned agents automatically apply those definitions when they connect to their managers,
enabling monitoring to start.
8. Service governor
The service governor provides an integrated interface for linking with external products (such
as a service portal).
The service governor of Virtual DataCenter Automation allows you to orchestrate the
management target machines from the service portal.
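As a rough illustration of how an external portal might drive the service governor and receive orchestration results asynchronously (see the request management description above), the following Python sketch submits a request and polls for its completion. The base URL, endpoints, payload fields, and status values are hypothetical examples for illustration only, not the actual service governor API.

```python
# Conceptual sketch only: the REST endpoints, payload fields, and status values
# below are hypothetical and do NOT reflect the actual service governor API.
import time
import requests

GOVERNOR = "https://governor.example.com/api"  # hypothetical base URL

def request_vm_creation(template_name, pool_name):
    """Submit an orchestration request and poll for its asynchronous result."""
    resp = requests.post(
        f"{GOVERNOR}/requests",
        json={"operation": "create_vm", "template": template_name, "pool": pool_name},
        timeout=30,
    )
    resp.raise_for_status()
    request_id = resp.json()["request_id"]  # hypothetical response field

    # The orchestration result is returned asynchronously, so poll until it finishes.
    while True:
        status = requests.get(f"{GOVERNOR}/requests/{request_id}", timeout=30).json()
        if status["state"] in ("completed", "failed", "cancelled"):
            return status
        time.sleep(10)

if __name__ == "__main__":
    print(request_vm_creation("RHEL7-template", "tenant-A-pool"))
```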
1.4.3 Monitoring
Virtual DataCenter Automation provides a function for the integrated monitoring of an entire cloud
system, including work systems configured on virtual machines. Diverse monitoring functions such
as detecting silent failures, identifying the range of impact, and automating the troubleshooting
process reduce downtime and support the realization of the high availability required for mission-critical system use.
1. Machine Status and Failure Monitoring
SigmaSystemCenter can monitor the status of the machines. This function monitors the usage and
operating status of each machine resource, including terminal equipment, in real time. In
addition, the function periodically monitors errors and thresholds of the CPU, memory, and disks,
and notifies you immediately if any failure occurs. The system can recover
from a failure when an event is detected by the machine status monitoring function.
2. Message monitoring
Centrally manages the messages generated by performance monitoring, application log
monitoring, and so on. Messages from linked products are also integrated. Messages are classified
by operation item and displayed in a tree view so that users can immediately
identify the business impact of any system failure. Large volumes of messages
can be filtered so that only the necessary information is displayed.
Stopping and resuming monitoring can be automatically controlled using the schedule
function. This makes it possible to discard messages and avoid unnecessary notifications
during regular system maintenance, for instance.
• Registers a schedule for a selected category from the monitoring view
• Batch schedule registration is available by defining a schedule for a category group
3. Service process monitoring
In addition to alive monitoring of important services and processes, this function monitors them
from an operational point of view. When an incident occurs, it reports which business operations will be affected.
4. Application log, syslog, and event log monitoring
Monitors the application log, event log, and syslog, and displays and reports only the necessary
information.
5. Network Operation Management
It allows you to manage the configuration, failure, and performance of the network utilizing
Network Manager. Network Manager provides a function that displays configuration maps for
the visual management of IPv4/IPv6 networks, a topology management function that displays
the wiring conditions of network devices, and a function that graphically displays the path
between any two points in the network. Network Manager can detect failures and quickly notify
the administrator of them by monitoring alive status using ICMP (Internet Control Message Protocol)
and by monitoring SNMP traps and MIBs.
Also, in Network Manager, performance information is accumulated through its periodic
collection from a management information base (MIB) by equipment that supports SNMP. The
collected information can be used for real time analysis and the creation of performance
reports. The collected data is automatically saved in the CSV (Comma Separated Value)
format.
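The alive monitoring idea (ICMP reachability checks) can be sketched as follows. This is only a conceptual illustration, not Network Manager's implementation; the device inventory and addresses are made-up examples.

```python
# Illustrative sketch of ICMP alive monitoring and failure notification; this is
# not Network Manager's implementation, and the device list is an assumption.
import platform
import subprocess

DEVICES = {"core-l2sw": "192.0.2.10", "tenant-fw": "192.0.2.20"}  # sample inventory

def is_alive(address: str) -> bool:
    """Send one ICMP echo request and report whether the device answered."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(
        ["ping", count_flag, "1", address],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

for name, address in DEVICES.items():
    if not is_alive(address):
        # In the product this would raise an alert to the administrator instead.
        print(f"ALERT: {name} ({address}) did not respond to ICMP echo")
```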
6. Current Alerts
Only the unresolved alert information that must be checked and investigated is displayed, and
once resolved, the alert information is automatically deleted. Since only the current alerts are
notified on-screen, you are protected against overlooking important failures.
7. Performance monitoring
Graphically displays the performance status (CPU, memory, etc.) of each server. It monitors CPU
and memory usage, and generates a message when a threshold is exceeded.
Performance data can be accumulated and charted as statistics.
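A minimal sketch of threshold-based performance monitoring follows. The sample data, threshold value, and message format are assumptions for illustration, not the product's actual settings.

```python
# Minimal illustration of threshold-based performance monitoring; the sample data,
# threshold value, and message format are assumptions, not product settings.
from statistics import mean

CPU_THRESHOLD = 80.0                       # percent; assumed warning threshold
samples = [42.0, 55.5, 91.2, 88.7, 60.1]   # CPU usage collected at fixed intervals

for value in samples:
    if value > CPU_THRESHOLD:
        print(f"WARNING: CPU usage {value:.1f}% exceeded threshold {CPU_THRESHOLD:.1f}%")

# Accumulated samples can also be summarized as statistics for charting.
print(f"average over period: {mean(samples):.1f}%")
```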
8. Audit log management
Operations and automatically executed processing on the terminal and Manager are monitored,
and each operation and its result can be recorded as audit logs for future tracing.
It is also possible to send a report when a specific audit log is generated.
9. Operation control
This feature issues a predetermined action triggered by the time, the generation of a specific message,
or an operator's operation.
Commands can be submitted to the Manager or an Agent in a simple step. Defining a complex
command execution as an "operation" prevents commands from being forgotten. The defined
operation can be executed manually by the operator or automatically by using a specific
internal event, scheduler, or timer as a trigger. The commands included in the operation can be
executed sequentially, in parallel, or on multiple Agents in parallel.
10. Manager linkage
The whole system can be centrally monitored by linking multiple managers in a hierarchy.
Manager of Manager (MOM) collects the messages from Region Manager (RM).
* You can specify whether to use linkage for each category in the business view.
11. Application linkage
Messages collected by SystemManager G are written out to a text file and passed to
external applications. The record format of the write-out file can be selected, and the user can
specify various character encodings (UTF-8, Shift-JIS, etc.) for the log files. Log rotation is
also available.
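As a sketch of this general idea, the following code appends message records to a text file in a selectable encoding and performs simple size-based rotation. The file name, record format, and rotation policy are assumptions, not the product's actual behavior.

```python
# Sketch of the general idea: writing collected messages to a text file in a chosen
# encoding with simple size-based rotation. File names, the record format, and the
# rotation policy are assumptions, not the product's actual settings.
import os

LOG_FILE = "portal_link.log"
MAX_BYTES = 1_000_000          # rotate when the file exceeds ~1 MB
ENCODING = "utf-8"             # could also be "shift_jis", etc.

def write_message(message: str) -> None:
    """Append one message record, rotating the file when it grows too large."""
    if os.path.exists(LOG_FILE) and os.path.getsize(LOG_FILE) >= MAX_BYTES:
        os.replace(LOG_FILE, LOG_FILE + ".1")   # keep a single rotated generation
    with open(LOG_FILE, "a", encoding=ENCODING) as f:
        f.write(message + "\n")

write_message("2017-04-01 10:00:00,host01,WARNING,CPU usage exceeded threshold")
```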
12. System performance analysis (Option)
Performance information is automatically acquired by SystemManager G and analyzed in real time.
Automatic analysis is achieved simply by inputting performance data, without any specialized
knowledge or complex setup. A simplified sketch of the invariant idea behind this analysis is shown
after the following list.
• Automatic detection of silent failures
The performance information correlations that do not change during normal operation
(invariant) are automatically learned and modeled. An unusual behavior that does not
match that model is detected as a silent failure. Real-time automatic analysis makes it
possible to issue a warning message when a failure is detected.
• Automation and visualization of analysis and cause determination
You can significantly reduce the time necessary to determine the cause, because the core
element of the detected unusual behavior and its extent of impact are intuitively shown in
a pie chart or map.
• Recording and viewing responses
Responses that have been made can be recorded. Recorded responses are accumulated
and searched for based on similarities with other failures. This helps you reduce the time
necessary to respond to any subsequent unusual behavior that is similar.
• Description of the basic screen
The simple and easy-to-understand basic screen enables you to see the analysis results at
a glance.
• Automatic analysis of persistent relationships (invariant)
The system is automatically analyzed using performance information to detect failure
occurrence.
- Automatic analysis does not require special knowledge or a complex setup.
- Because the persistent relationships between performance information sets are
focused on, it is not necessary to change settings even if the load temporarily
fluctuates, such as due to a special advertising campaign.
- Additional analysis can be automatically controlled using an external command.
- Real-time analysis can be performed in SystemManager G.
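The following toy sketch illustrates the invariant idea referred to in the list above: a stable linear relationship between two metrics is learned from normal operation, and samples that deviate from it are flagged as a possible silent failure. The data, metric names, and tolerance are invented, and the product's actual modeling is more sophisticated.

```python
# Toy illustration of the invariant idea behind silent failure detection: learn a
# stable linear relationship between two metrics during normal operation, then flag
# samples that deviate from it. The data and tolerance are made up; the product's
# actual modeling is more sophisticated.

def fit_line(xs, ys):
    """Least-squares fit y = a*x + b from training samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    return a, mean_y - a * mean_x

# Normal operation: web requests/sec and DB queries/sec move together (the invariant).
requests_per_sec = [100, 150, 200, 250, 300]
db_queries_per_sec = [210, 310, 405, 500, 610]
a, b = fit_line(requests_per_sec, db_queries_per_sec)

TOLERANCE = 0.2  # allow 20% deviation from the learned relationship

def check(req, db):
    expected = a * req + b
    if abs(db - expected) > TOLERANCE * expected:
        print(f"possible silent failure: expected ~{expected:.0f} queries/s, saw {db}")

check(220, 440)   # consistent with the invariant -> no warning
check(220, 90)    # DB throughput collapsed while requests stayed high -> warning
```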
13. ServiceManager linkage (Option)
By linking with MasterScope ServiceManager, which is used for IT service management, this
feature ensures that all important incidents are addressed and realizes total system IT service
management.
* The Manager does not support Windows Server 2008 when the ServiceManager linkage
feature is used.
14. Linkage function for service portal messages
This function uses HTTP communication to link specific messages detected by the operation
monitoring server (SystemManager G) with an external product (such as a service portal). The
messages can be simultaneously transmitted to multiple destinations.
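A minimal sketch of forwarding a detected message to several portal endpoints over HTTP is shown below; the destination URLs and JSON payload shape are assumptions for illustration, not the product's actual message format.

```python
# Hedged sketch of forwarding a detected message to multiple portal endpoints over
# HTTP. The URLs and JSON payload shape are assumptions, not the product's format.
import requests

DESTINATIONS = [
    "https://portal-a.example.com/api/messages",
    "https://portal-b.example.com/api/messages",
]

def forward_message(host: str, severity: str, text: str) -> None:
    payload = {"host": host, "severity": severity, "text": text}
    for url in DESTINATIONS:
        try:
            requests.post(url, json=payload, timeout=10).raise_for_status()
        except requests.RequestException as exc:
            # A failure to reach one destination should not block the others.
            print(f"failed to deliver to {url}: {exc}")

forward_message("host01", "WARNING", "CPU usage exceeded threshold")
```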
15. Service port monitoring function
This function monitors the opening and closing of the service port and outputs the
corresponding message if the state changes. Recovery can also be performed by extracting this
message and making recovery settings for it. In addition, the state change of the
monitored port can be reported to the operator by using the report function.
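The port open/close monitoring concept can be sketched as follows; the target list, polling interval, and message wording are examples only, not the product's implementation.

```python
# Simple sketch of service port monitoring: try a TCP connection, and emit a message
# only when the open/closed state changes. The target list and interval are examples.
import socket
import time

TARGETS = [("192.0.2.30", 443), ("192.0.2.31", 8080)]  # (address, port) samples
previous_state = {}

def port_open(address: str, port: int) -> bool:
    try:
        with socket.create_connection((address, port), timeout=3):
            return True
    except OSError:
        return False

while True:
    for address, port in TARGETS:
        state = port_open(address, port)
        if previous_state.get((address, port)) not in (None, state):
            change = "opened" if state else "closed"
            print(f"message: service port {address}:{port} has {change}")
        previous_state[(address, port)] = state
    time.sleep(60)
```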
1.4.4 Asset Management
1. Displaying asset information
Collected asset information is displayed on a simple screen. You can divide and view
information in groups, organized by, for example, division or floor, based on usage. A
hierarchical view that can flexibly handle the organization of large companies can be used. In
addition, data center operators can separate and manage management information on the
company level.
Asset information for each machine can be displayed in lists organized by category. Customer-specific management items can be added and viewed.
2. Searching asset information
Assets matching specific criteria can be displayed in a list through a filtered search. By using
search criteria, an administrator can limit the configuration information viewed to only that
which is desired.
You can use flexible searches in which multiple criteria, such as asset names, users, and
device names, are specified. Frequently used searches can be registered to the tree view so
that you can quickly view a list of assets that match the search criteria. Arithmetic operations
or comparison between items can be set in the search criteria, and required information can be
displayed in a batch.
3. Alert function
The alert function can send notifications linked with the search function and monitoring
conditions by e-mail. The function sends e-mail notifications concerning the movement of
assets, approaching lease/rental end dates, an insufficient number of software licenses,
unauthorized software usage, and other status alerts. The alert function can be set to link with
the search function and monitoring criteria settings to send violation notifications.
4. Contract management
Contract management associates lease, rental, and maintenance contract periods, rates,
statuses, and other basic contract information with asset information, enabling the integrated
management and confirmation of this information.
5. Distributing and executing software
Batch distribution is available for commercial packaged software, proprietary applications, and
security patches. Specification of a distribution destination is flexible, enabling you to specify
specific groups or a list of terminals extracted from search results. You can also select
mandatory distribution or optional distribution according to usage. Distribution progress can
be viewed in a list.
6. License key management
The following logical assets can be managed in Virtual DataCenter Automation.
• OS product key
• License key for software/middleware
The license key registered according to each cloud configuration can be used as necessary, such
as when creating a new virtual machine or installing new software on a virtual machine.
7. Software installation image management
Virtual DataCenter Automation allows you to register the installation images for software,
security patches, etc. By utilizing the registered installation images, you can install software or
security patches on newly created virtual machines or on virtual machines extracted from search
results. By working with the license key management, you can associate
the virtual machine in which the software was installed with the license key of the installed
software.
8. Management information output
Asset information, software license information, and contract information managed in the asset
management database can be output in CSV format. This information can be utilized as
configuration information in other operations management software or when analyzing
information based on customer-specific needs.
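As an example of how this CSV export might be consumed by another tool, the following sketch counts assets per department. The file name and column headers are assumptions, since the actual export layout is defined by the product.

```python
# Example of consuming the CSV export in another tool; the file name and column
# headers are assumptions, since the actual export layout is product-defined.
import csv
from collections import Counter

with open("asset_export.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# e.g. count managed machines per department to feed a customer-specific report
per_department = Counter(row.get("Department", "unknown") for row in rows)
for department, count in per_department.most_common():
    print(f"{department}: {count} assets")
```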
1.4.5 Custom Monitoring for Tenants
For resources assigned to the tenant administrator by the IaaS provider, Operation Management
Appliance *1 can provide functions that allow the tenant administrator to customize what to monitor
for integrated monitoring. It minimizes downtime with various monitoring functions such as sensing
of silent failure, identification of affected areas, and automation of responses, to support the
realization of higher availability.
*1 The function that provides custom monitoring, asset management, and software patch distribution and application for the tenant
administrator, and the machine image on which the products necessary for the tenant's custom monitoring function or software
repository function are set up in advance.
The following functions are available in custom monitoring for tenants.
1. Monitoring of machine status and failure
The tenant administrator can monitor the usage and operation states of the machine resources
assigned by the IaaS provider in real time. It also monitors errors and threshold values of the
CPU, memory, and disks periodically, so it can report to the administrator by email
immediately in the event of a failure.
In addition, it can execute commands on the virtual machine for automatic recovery in the
event of a failure triggered by an event detected via machine state monitoring.
2. Message monitoring function
This function manages messages occurring in performance monitoring and application log
monitoring as well as messages received from the modules of this product in an integrated
fashion.
It displays messages occurring in the system in trees grouped by host or by task, so the
affected area can be identified immediately in the event of a failure. Only the messages
required for monitoring are selectively displayed out of a large volume of messages, so current
messages regarding the failure won't be scrolled down and off the screen.
In monitoring by task, stopping and resuming of monitoring can be controlled automatically
with schedules in units of category groups. Therefore, messages occurring in time periods that
are not to be monitored, such as during periodic maintenance, are discarded, preventing
unnecessary notifications. Schedules can be registered for any category from the console.
Schedules can be set for a category group to register them to the categories under the group in
a batch.
3. Service process monitoring function
This function performs alive monitoring of processes in units of operation systems, as well as monitoring
of important processes in the operation system in units of nodes. In the event of a failure, it can
report which tasks are affected.
4. Application log monitoring, syslog monitoring, and event log monitoring functions
This function monitors logs output by application programs, event logs, and syslogs,
extracts necessary information, and reports it as messages.
5. Report function
This function reports by email in the event of an alert such as a server stop or process stop.
Simultaneous reporting to multiple destinations is available.
6. Performance monitoring function
This function displays the operation state (CPU/memory usage) of the server graphically. It
can also display the operation state of the database and application server.
It monitors the CPU usage and memory usage, and displays a message when the threshold
value is exceeded.
It accumulates performance data as statistical information and displays them in graphs.
7. Internal control management (trail management) function
This function supports internal control by managing operations and result history as logs (audit
logs) for operations and automatically executed processing on the console/manager.
It can also report when an audit log of specific importance occurs.
8. Middleware monitoring function
This function monitors the operation state (CPU/memory usage) of the middleware installed in
the virtual machine and the process states. The operation state can be displayed in a graph or
output in CSV format with the performance monitoring function.
9. Integration of login accounts with the ID management server
You can log into the console of Operation Management Appliance with the tenant
administrator account stored on the ID management server. Appropriate rights can be assigned
to each user.
1.4.6 Asset Management for Tenants
Operation Management Appliance can be used to provide the tenant administrator with functions for
managing, as information assets, the resources assigned by the IaaS provider. A wide variety of asset
management functions, such as managing and searching asset information and distributing software,
streamline the tenant administrator's asset management operations and promote effective utilization
of assets.
The following functions are available in asset management for tenants.
1. Asset information display
It displays collected asset information on a simple screen. It can be browsed in groups
according to usage, such as by department or by building floor. It allows hierarchical display
that flexibly supports organizations of large corporations. For data center providers, it can also
separate management information into units of companies and allows you to manage the
information.
It lists asset information of each machine by category. Management items specific to a
customer can be added and displayed.
2. Asset information search
You can narrow down to assets that match specific conditions to display them in a list. Search
conditions can be used in combination to display just the configuration information that the
administrator wants to check. Flexible searches with multiple conditions such as asset name,
user, and device name, etc. can be performed. Search conditions that are used frequently can be
registered in the tree view that allows you to obtain a list of assets that match the search
conditions. "Four arithmetic operations" and "Comparison among items" can be specified as
search conditions for batch display of required information.
3. Asset management alert
It allows email notification (reporting) in conjunction with the search function and monitoring
conditions. Asset movement, forthcoming end of lease/rental, insufficient software licenses,
and use of unauthorized software can be notified by email. Transmission settings for emails in
the event of a violation can be specified in conjunction with the search function and
monitoring condition settings.
4. Contract management
It can associate basic information regarding contracts such as lease, rental, and maintenance
contractual periods, fees, and statuses with asset information so that it can be managed and
checked in an integrated fashion.
5. Software distribution and execution
Batch distribution of commercially available package software, proprietary application
software, and security patches can be performed. Distribution destinations can be specified
flexibly such as specific groups and a list of terminals extracted from search conditions. Forced
distribution and voluntary distribution can be selected according to the usage. The progress of
distribution can be checked in the list.
6. License key and package distribution
The following license keys can be managed.
• OS product keys
• Software/middleware license keys
• Global IP addresses (IPv4 addresses, IPv6 addresses)
License keys registered based on the cloud configuration can be used as needed when a new
virtual machine is created or when new software is installed in a virtual machine.
7. Management information output
Asset information, software license information, and contract information managed in the asset
management database can be output in CSV format. It can be used as configuration
information of other operation management software or for analysis based on the customer's
specific needs.
1.4.7 Software Repository
Managing VM templates and other such items within a data center or across multiple data centers increases
operation costs. The software repository allows you to manage management targets such
as VM templates in an integrated fashion. Because operation does not require awareness of where each
management target resides, operation costs are reduced.
Tenants may want to install the application in a virtual machine, make it a template, and create
multiple virtual machines, or they may want to migrate the existing virtual machine into their own
environment for Virtual DataCenter Automation. The software repository provides functions for
tenants to deal with these cases.
1. Function for providers and resellers
By registering the following management targets to the software repository only once, they can
be shared among multiple data centers and in a data center. This leads to a decrease in the
operation cost.
• VM template
• Software (including middleware)/patch
• OS image
The shared management targets can be provisioned from each data center.
• Registration and distribution of software/patches, assignment of licenses along with
distribution
• Registration of OS images and provisioning of physical servers
• Registration of VM templates and provisioning of virtual machines
2. Functions for tenants
The following functions are provided.
• Creation of VM templates and provisioning of virtual machines
Note
The software repository needs a dedicated volume for storage to store the management target. For storage, a
shared disk or local disk can be used. If a shared disk is used, CIFS and NFS are supported as protocols.
They have the following characteristics.
• CIFS: No additional component is required. A setting allowing Guest access is required for the volume.
• NFS: The NFS service must be installed on each server.
For the settings of the volume, see 5.7.1 Setting up the Software Repository Environment in Virtual
DataCenter Automation Configuration Guide.
Note
If the server configuration spans multiple sites, a shared disk must be placed in each site and a dedicated
volume must be created on each of them for distributing and sharing software and patches. In addition, on one
Management Server of the site that has the Global Management Server, you must install the components of
Managed Machines. For multiple sites, see "2.4.5 For Multiple Sites (page 38)". For the settings of the
management server, see 2.5.1 Components Installed on the Management Server in the Virtual DataCenter
Automation Installation Guide. For the settings for multiple sites, see 5.7.1 Setting up the Software Repository
Environment and 5.7.6 Managing Packages (for Multiple Sites) in the Virtual DataCenter Automation
Configuration Guide.
Note
Software images and VM images managed by the software repository are stored in a volume on the file
server. Consider backing up the images. For consideration of backups, see "3.3.9 Backing up (page 61)".
1.4.8 Integrated ID Management
To utilize various ICT resources in a data center within an appropriate function range, individual
authentication and an approval framework are required. In Virtual DataCenter Automation, users
registered in the service portal are managed in the ID management server in an integrated fashion.
Because the network devices work with the ID management server, individual user registration and user
right settings are not required in the operation management function or on the network devices. This
reduces management and operation costs.
1.4.9 Provisioning of Physical Machines
As a resource to assign to tenants, Virtual DataCenter Automation is compatible not only with virtual
machines on the virtualization base, but also with physical machines. Existing physical machines can
be used effectively.
1. Provisioning and automation
Similarly to virtual machines, it allows you to provision and automate physical machines. On
request, physical servers can be assigned, a network can be set (VLAN setting), and storage
can be set (assignment of FC and iSCSI).
2. Operation of physical machines
It allows you to turn the power ON/OFF, back up, and restore the physical machine.
3. Management of gold images
By utilizing the software repository, physical machine gold images can be managed.
1.4.10 Visualization of Tenant Networks
Logical configuration diagrams covering the tenant networks assigned by Virtual DataCenter Automation
(VLANs, tenant firewalls, load balancers, etc.) and the servers allocated to those networks can
be automatically created and checked by using Virtual DataCenter Automation. The created logical
configuration diagrams can also be freely edited and saved by system operators. Network
Automation also provides functions that enable physical information to be verified immediately from
logical configuration diagrams, making it easy to monitor the networks of a large number of tenants.
1.5 Network Automation Capabilities
The functions that can be implemented in Network Automation, whose role is limited to orchestrating
network resources, are described below.
Resource Management: Virtual appliances such as virtual load balancers and virtual firewalls, as well as
network resources such as IP addresses and VLAN IDs, can be pooled. Resource
management, such as managing the total amount of resources, the amount of used and
unused resources, and the amount of reserved resources, can be performed in pool
units.

Orchestration, Provisioning: All the network settings required to use services, such as allocation of L2 switches,
firewalls, and load balancers to VLANs, filter policy settings, and user authentication
settings, are automated. Network provisioning across multiple data centers can be
automated by linking with the UNIVERGE PF series, which uses OpenFlow
technology.

Integrated ID Management: Supported in the same way as Virtual DataCenter Automation.

Tenant network visualization: Supported in the same way as Virtual DataCenter Automation.
Chapter 2. Virtual DataCenter Automation Configuration
This section describes the configuration of the system to which Virtual DataCenter Automation is
installed.
Contents
2.1 Management Target of Virtual DataCenter Automation..............................................................29
2.2 System Management Domain.....................................................................................................31
2.3 Virtual DataCenter Automation Basic Configuration .................................................................32
2.4 Server configuration of Virtual DataCenter Automation.............................................................34
2.5 Virtual DataCenter Automation License.....................................................................................39
2.1 Management Target of Virtual DataCenter Automation
To configure data centers that utilize Virtual DataCenter Automation, it is necessary to understand
the management targets (network, storage, and server) of Virtual DataCenter Automation and which
management domain is used for the management target. For the management domain, see
"2.2 System Management Domain (page 31)".
2.1.1 Network
The network used in the Virtual DataCenter Automation system is described below. For details of
network design, see "3.1 Studying Network Configuration (Standard Configuration) (page 41)".
For each device type, the sharing scope is shown in parentheses (shared in a pod, in a zone, in a site, or among sites).

L2SW (shared in a pod)
Layer 2 switch for the operation management network used for device control from the Virtual DataCenter Automation management server, and layer 2 switch accommodating the tenant network. In the Virtual DataCenter Automation system, a VLAN tag is used to separate the networks of different tenants. Accordingly, an L2 switch that can be used in the virtualization base and supports IEEE 802.1Q is required. Use an L2 switch whose maximum number of active VLANs is 1000 or more; note that some device types support only 500 or fewer. Also consider the network redundancy scheme when selecting the device.

Tenant FW (shared in a zone)
Firewall device used for the tenant network. Use a firewall device that supports the multi-tenant function, and consider the maximum number of tenants when selecting the device.

Back-end FW (shared in a zone)
The multi-tenant function is not required for the back-end firewall.

SSL-VPN device (shared in a zone)
Use a device that supports LDAP linkage, VLANs, and the group access control function. In addition, some firewall devices that support the multi-tenant function combine the tenant firewall and the SSL-VPN device in a single device.

Router (shared in a zone)
Router connected to the Internet and used for the operation management network on the provider side.

Load Balancer (shared in a zone)
Load balancer used for the tenant network. Use a load balancer that supports the multi-tenant function, and consider the maximum number of tenants when selecting the device.

UNC (shared among sites)
Device used for the programmable flow (P-Flow) implementing the OpenFlow technology. Manages multiple PFCs (programmable flow controllers) in an integrated fashion. Enables central management of VTNs and the setting of VTNs spanning different PFCs.

PFC (shared in a zone)
Device used for the programmable flow (P-Flow) implementing the OpenFlow technology. The PFC (programmable flow controller) provides central control of multiple PFSs.

PFS (shared in a zone)
Device used for the programmable flow (P-Flow) implementing the OpenFlow technology. By connecting the different blade housings to the PFS (programmable flow switch), the different pods are connected at the L2 level.
2.1.2 Storage
The storage used in the Virtual DataCenter Automation system is described below. For details of
storage design, see "3.3 Studying Storage Configuration (page 54)".
Storage for tenants (shared in a pod)
Storage connected to a hypervisor and directly used by virtual machines to be provided to tenants.

Storage for the software repository (shared in a zone)
Storage to share and manage VM templates, etc. Use a NAS device.
2.1.3 Server
The server used in the Virtual DataCenter Automation system is described below.
Hypervisor (shared in a pod)
Server accommodating virtual machines of tenants. Use VMware ESXi or Hyper-V.

VM template (shared among sites)
Template from which virtual machines are created.

Virtual machine (shared in a pod)
Virtual machine operating on a hypervisor, assigned during the provisioning process.

Gold image of the physical machine (shared among sites)
Disk image including the OS and profile for provisioning of physical machines.

Physical machine (shared in a pod)
Physical machine assigned during the provisioning process.
2.2 System Management Domain
2.2.1 Overview of Management Domain
Virtual DataCenter Automation defines four management domains (pod, zone, P-Flow domain, and site) as the cloud system configuration.
2.2.2 Pod
"Pod" is defined as a management domain to manage 1000VLAN and 1000 virtual machines within a
network range connected to the L2 switch. The maximum number of active VLAN IDs that can be
managed in a single network device is 1024. The element 1000VLAN is based on this feature. The
network assigned to each tenant is basically created in one pod.
2.2.3 Zone
"Zone" is defined as a management domain consisting of four pods at maximum. The upper limit of
VLAN IDs is 4096 as a specification. Therefore, the range where a VLAN ID is uniquely identified
is defined as "zone". Communication between pods within a zone is realized as communication at the
L2 level.
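As a rough, hedged illustration of these domain limits, the following minimal Python sketch (the constants and the function name are illustrative, not part of the product) checks whether a planned number of pods and VLANs stays within the pod and zone boundaries described above.

    # Illustrative constants taken from the design rules above (not product settings).
    MAX_VLANS_PER_POD = 1000   # derived from the 1024 active-VLAN limit of a network device
    MAX_PODS_PER_ZONE = 4      # a zone consists of four pods at maximum
    VLAN_ID_SPACE = 4096       # upper limit of VLAN IDs by specification

    def fits_in_zone(pods: int, vlans_per_pod: int) -> bool:
        """Return True if the planned pods and VLANs stay within the zone limits."""
        if pods > MAX_PODS_PER_ZONE or vlans_per_pod > MAX_VLANS_PER_POD:
            return False
        # All VLAN IDs used within a zone must be unique in the 4096 ID space.
        return pods * vlans_per_pod <= VLAN_ID_SPACE

    print(fits_in_zone(pods=4, vlans_per_pod=1000))   # True: 4000 IDs fit in the 4096 space
    print(fits_in_zone(pods=5, vlans_per_pod=1000))   # False: more than four pods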
2.2.4 P-Flow Domain
A "P-Flow domain" is defined as a group of pods or zones to which the PFSs managed by a single PFC belong. By using programmable flow (P-Flow) equipment, L2-level connections between pods and between zones are possible beyond the upper limit of 4096 VLAN IDs. In addition, a tenant network can be built across sites by using an IX router or other equipment that extends L2-level communication between sites. When installing a PFC at each site, also install a UNC to enable control of the multiple PFCs.
2.2.5 Site
"Site" is defined as each pod group connected via a router. Communication between sites is realized
as communication at the L3 level using a dedicated line or IP-VPN. For example, a data center
connected via a router and geographically separated, or the connection via a router even in the same
data center, is defined as "site". Site consists of one or multiple zones.
2.3 Virtual DataCenter Automation Basic Configuration
2.3.1 Constituent Components
Virtual DataCenter Automation consists of the following products. The components required for the manager function are listed below. All of the following components of the manager function are installed using the Virtual DataCenter Automation Integrated Installer (database excluded). Because Network Automation functionality is limited to network orchestration, it does not include all the components required for the manager function; the components used by Network Automation are indicated by √ in the NWA column in the table below. In addition, Virtual DataCenter Automation Portal is bundled as the service portal.
Component list (NWA column: √ = included in Network Automation)
(1) SigmaSystemCenter component*1      NWA: -
(2) DeploymentManager component        NWA: -
(3) SystemManager G component*2        NWA: √
(4) Network Manager component*3        NWA: √
(5) AssetSuite component               NWA: -
(6) Cloud provider API component       NWA: √
(7) Database                           NWA: √
(8) Identity Manager component         NWA: √
*1 For details of the SigmaSystemCenter component, see 2.1.4. Component and Product Configuration in
the SigmaSystemCenter First Step Guide.
*2 In Network Automation, SystemManager G components are configured, but the functions are limited.
This means that virtual machine failure monitoring cannot be implemented.
*3 In Network Automation, Netvisor Pro components are configured, but the functions are limited. This
means that performance analysis cannot be implemented.
Each component corresponds to a function of Virtual DataCenter Automation as shown in the figure below.
2.3.2 Installed Functions
The functions installed to operate Virtual DataCenter Automation are listed below. Install them with
the Virtual DataCenter Automation Integrated Installer. For the installation order, see the Virtual
DataCenter Automation Installation Guide, and Network Automation Installation Guide.
For each role, the NWA mark (√) indicates that the function is also installed in Network Automation.

Global Management Server (NWA: √)
Manages and monitors all resources in Virtual DataCenter Automation in an integrated fashion. Provides a single-point gateway function working with an external function such as a service portal (Virtual DataCenter Automation Portal, etc.). Uses a local or remote DB.

Global Management Server console (NWA: √)
Provides functions to monitor and control the resources managed by the global management server.

Management server (NWA: √)
Provides functions to control (create and delete) and monitor the storage, network, virtualization base, and virtual machines (VM). Uses a local or remote DB.

Management server console (NWA: √)
Provides functions to browse and control the resources managed by the management server.

VM monitoring server
Provides functions to monitor and control virtual machines. Plays the role of reducing the management server load when monitoring virtual machines. The VM monitoring server is installed when IaaS providers monitor virtual machines in detail. Uses a local or remote DB.

VM monitoring server console
Provides functions to browse and control the resources managed by the VM monitoring server.

ID management server
Manages, in an integrated fashion, the authentication IDs of the IaaS providers who use Virtual DataCenter Automation and of the tenant administrators who access resources in a tenant.

Managed machine
Provides functions to monitor and control management targets, such as virtual machines, physical machines, ESX, or Hyper-V, by being installed on the virtualization base.

Management agent (NWA: √)
Provides a function to monitor the management target ESX, Hyper-V, and storage. Normally, it is installed on the same machine as the management server.
For details about the system requirements, installation procedure, and operation method of Virtual DataCenter Automation Portal, refer to the Virtual DataCenter Automation Portal User's Manual (Installation Guide) and the Virtual DataCenter Automation Portal User's Manual (Operation Guide).
2.4 Server configuration of Virtual DataCenter Automation
This section describes an assumed configuration of Virtual DataCenter Automation. When using
Network Automation, replace this configuration with one in which only the network devices are
controlled.
2.4.1 For Single Pod
This is the basic configuration of Virtual DataCenter Automation. A single pod of a certain data center is managed. The management server is configured so as to control and monitor the NW devices, storage devices, hypervisors, and provisioned virtual machines. The global management server is also configured to process requests from the portal server and their replies and to bring together the management servers, realizing integrated monitoring and management.
Number of tenants: 200 per pod at maximum
Number of virtual machines (number of machines managed under a global management server): 1000 per pod at maximum
Number of VLANs: 1000 per pod at maximum
• 1 to 6 blade housings are allocated per pod.
• 8 blade servers are allocated per blade housing.
• 24 virtual machines are allocated per blade server.
24 x 7 x 6 ≈ 1000
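The per-pod figure above can be reproduced with a minimal sketch. Note that the guide's formula counts 7 blade servers per housing even though 8 are allocated; the interpretation that one blade server per housing is kept in reserve is an assumption made for this illustration, not a statement from the guide.

    # Illustrative per-pod virtual machine capacity estimate based on the figures above.
    VMS_PER_BLADE = 24
    ACTIVE_BLADES_PER_HOUSING = 7   # 8 allocated; assumed that 1 is kept in reserve
    MAX_HOUSINGS_PER_POD = 6

    def vms_per_pod(housings: int = MAX_HOUSINGS_PER_POD) -> int:
        """Estimated number of virtual machines a single pod can accommodate."""
        return VMS_PER_BLADE * ACTIVE_BLADES_PER_HOUSING * housings

    print(vms_per_pod())   # 1008, i.e. roughly the 1000-VM-per-pod limit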
2.4.2 For Single Pod (VM Monitoring Server Configuration)
Even for a single pod, the VM monitoring function can be separated as shown in the figure below, and a hierarchical configuration can be employed. The VM monitoring server monitors the VMs, and the management server controls and monitors everything else. This configuration enables the VM monitoring items and frequency to be increased without increasing the load on the management server. In Network Automation, the VM monitoring server cannot be installed, so this configuration is not supported.
Number of VM monitoring servers: 1 to 4 per management server
Number of virtual machines (number of machines managed under a VM monitoring server): 256 per VM monitoring server
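From these figures, the number of VM monitoring servers required for a given number of virtual machines can be estimated with a minimal sketch (the function name is illustrative, not a product API).

    import math

    VMS_PER_MONITORING_SERVER = 256
    MAX_MONITORING_SERVERS_PER_MGMT = 4

    def monitoring_servers_needed(vm_count: int) -> int:
        """Minimum number of VM monitoring servers for vm_count virtual machines."""
        needed = math.ceil(vm_count / VMS_PER_MONITORING_SERVER)
        if needed > MAX_MONITORING_SERVERS_PER_MGMT:
            raise ValueError("more than 4 VM monitoring servers per management server "
                             "are not supported; split the VMs across management servers")
        return needed

    print(monitoring_servers_needed(1000))   # 4 (four monitoring servers cover up to 1024 VMs)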
2.4.3 For Multiple Pods
This configuration consists of multiple pods in a certain data center. A management server is configured for each pod.
Number of management servers: 100 at maximum (number managed under a global management server)
2.4.4 For Multiple Pods (Zone Configuration)
This configuration manages multiple zones of a certain data center. A management server is configured for each pod, as in the previous section.
2.4.5 For Multiple Sites
This configuration is used when managing multiple sites across multiple data centers. A management server is configured for each pod, as in the previous section. Only one global management server is configured, in one of the sites. For multiple sites, an ID management server is configured per site.
2.5 Virtual DataCenter Automation License
Prepare the appropriate license in the required quantity according to the configuration.
Tip
Price information can be obtained from the following website:
http://www.nec.com/en/global/prod/masterscope/vdcautomation/
Chapter 3. System Design
This section describes considerations for the system design of Virtual DataCenter Automation.
Contents
3.1 Studying Network Configuration (Standard Configuration) .......................................................41
3.2 Customization of Network Configuration...................................................................................53
3.3 Studying Storage Configuration .................................................................................................54
3.4 Studying Configuration of Virtualization Base ...........................................................................63
3.5 Studying VM Template ..............................................................................................................66
3.6 Studying DC Resource Group Configuration .............................................................................76
3.7 Studying Resource Pool Configuration ......................................................................................77
3.8 Studying Resource Pool for Each Cloud.....................................................................................81
3.1 Studying Network Configuration (Standard Configuration)
The standard network model assumed in Virtual DataCenter Automation is described for each cloud model (public cloud, private cloud, and on-premises cloud). In addition, considerations for IP address design, user authentication, and network devices are described. First, the network differences among the cloud models and related precautions are listed below.
IP address of the business VLAN
- Public cloud: Unique per tenant. However, it can be changed freely after assigning a VM. One-to-one NAT between global and local IP addresses.
- Private cloud: Part of the IP address system of the user Intranet. Unique per tenant.
- On-premises cloud: Part of the user Intranet.

IP address of the tenant VLAN
- Public cloud: Unique within the system. Cannot be changed.
- Private cloud: Part of the IP address system of the user Intranet. Unique within the system.
- On-premises cloud: Part of the user Intranet.

IP address of the management VLAN
- Public cloud: Unique within the system. Cannot be changed.
- Private cloud: Unique within the system. Cannot be changed.
- On-premises cloud: Unique within the system. Part of the user Intranet.

Access to the business VLAN
- Public cloud: Internet access.
- Private cloud: Intranet access via a WAN service line.
- On-premises cloud: Intranet access.

Access to the tenant VLAN
- Public cloud: Internet access or access via an SSL-VPN device.
- Private cloud: Intranet access via a WAN service line.
- On-premises cloud: Intranet access.

Communication between the VM and the operation management server
- Public cloud: Communication with the operation management server via the provider administration VLAN and back-end Firewall.
- Private cloud: Same as for the public cloud.
- On-premises cloud: Same as for the public cloud.
3.1.1 Public Cloud
The public cloud is a cloud model in which multiple tenants use the Virtual DataCenter Automation
system configured in the data center of the IaaS provider. The path is limited to the Internet when
tenant administrators or service users access VM on the Virtual DataCenter Automation system. The
configuration elements and usage of the public cloud are described using the following table and
figure:
In addition, IEEE 802.1Q tag VLAN is used in the standard network model to separate networks of
different tenants while sharing network devices and cables.
For each network constituent element, the usage form (used solo by a tenant or shared by tenants) and the necessity (Necessary, Recommended, or Option) are shown in parentheses, followed by its usage.

1. Tenant Firewall (used solo by tenant / Necessary)
Prepare one per tenant. The tenant Firewall is connected with the public VLAN, Business VLAN, and Tenant VLAN to provide a routing function between VLANs, a Firewall function, and NAT between global and local IP addresses.

2. Public VLAN (used solo by tenant / Necessary)
Prepare one or more per tenant. The public VLAN connects the tenant Firewall with an Internet router. Global IP addresses are allocated to the IP address space of the public VLAN.

3. Business VLAN (used solo by tenant / Necessary)
Prepare one or more per tenant. The Business VLAN connects the tenant Firewall with the VMs. Service users access the applications on the VMs via the Internet router, public VLAN, tenant Firewall, and Business VLAN.

4. Tenant VLAN (used solo by tenant / Necessary)
Prepare one per tenant. The Tenant VLAN connects the SSL-VPN device with the tenant Firewall or VMs. Tenant administrators set up the tenant Firewall, maintain the VMs (such as setting up applications on them), and access the tenant Firewall and VMs securely via the Internet, the SSL-VPN device, and the Tenant VLAN.

5. Management VLAN (used solo by tenant / Necessary)
Prepare one per tenant. The Management VLAN connects the back-end Firewall with the VMs and is used for Agent communication between the operation management server and the VMs.

6. Internet router (shared by tenants / Necessary)
Prepare one per data center. The Internet router connects the Internet with the public VLANs.

7. SSL-VPN device (shared by tenants / Necessary)
Prepare one per data center. The SSL-VPN device connects the Internet with the Tenant VLANs. A VPN function is provided to facilitate secure access from the tenant administrator to the tenant Firewall and VMs via the Internet. For each Tenant VLAN, the security policy is set on the SSL-VPN device so that only the related tenant administrator can access it.

8. Back-end Firewall (shared by tenants / Necessary)
Prepare one per data center. The back-end Firewall connects the Management VLANs with the operation management LAN, which connects the operation management server. To separate the operation management LAN of the different tenants, a Firewall function is provided.

9. Portal server Firewall (shared by tenants / Necessary)
Prepare one per data center. The portal server Firewall connects the Internet with the portal server. A Firewall function is provided for secure separation.

10. Tenant LB (used solo by tenant / Option)
Prepare one per tenant. The tenant LB is connected to the Business VLAN and Tenant VLAN and provides an LB function for the tenant. To perform authentication with the ID management server, the tenant LB is set to access the ID management server.

11. Operation management LAN (shared by tenants / Necessary)
The operation management LAN is used to connect the portal server, back-end Firewall, servers accommodating the VMs, and the operation management server of the Virtual DataCenter Automation system.

12. Live migration LAN (shared by tenants / Recommended)
The live migration LAN is used for communication during live migration by connecting the servers accommodating the VMs.

13. ID management server (shared by tenants / Necessary)
The ID management server provides the function to integrate and manage the login accounts of IaaS providers and the login accounts for resources assigned to tenants. It is connected to the operation management LAN so that it can be accessed from the devices and NW devices that use authentication functions.

14. NAS (shared by tenants / Necessary)
A NAS is prepared to use the software repository functions to share and manage VM templates, software, patches, and OS images among pods.
3.1.2 Private Cloud
The private cloud is a cloud model in which multiple tenants use the Virtual DataCenter Automation system configured in the data center of the IaaS provider. An IP-VPN or a closed-network WAN service is used as the path when tenant administrators or service users access VMs on the Virtual DataCenter Automation system. The configuration elements and usage of the private cloud are described using the following table and figure:
1. Tenant Firewall (used solo by tenant / Necessary)
Prepare one per tenant. The tenant Firewall is connected with the WAN service VLAN, public VLAN, Business VLAN, and Tenant VLAN to provide a routing function between VLANs, a Firewall function, and NAT between global and local IP addresses. To publicize a job via the Internet, the tenant Firewall must be set carefully to separate the Internet from the network within the user company. To perform authentication with the ID management server, the tenant LB is set to access the ID management server.

2. WAN service VLAN (used solo by tenant / Necessary)
Prepare one per tenant. The WAN service VLAN connects the tenant Firewall with the WAN service router.

3. Public VLAN (used solo by tenant / Option)
Prepare one or more per tenant when a job is publicized via the Internet. The public VLAN connects the tenant Firewall with an Internet router. A global IP address is allocated to the address space of the VLAN.

4. Business VLAN (used solo by tenant / Necessary)
Prepare one or more per tenant. The Business VLAN connects the tenant Firewall with the VMs. A job is publicized within the user company by accessing the applications on the VMs via the WAN service line, WAN service router, WAN service VLAN, tenant Firewall, and Business VLAN. A job is publicized via the Internet by accessing the applications on the VMs via the Internet, Internet router, public VLAN, tenant Firewall, and Business VLAN.

5. Tenant VLAN (used solo by tenant / Recommended)
Prepare one per tenant. The Tenant VLAN connects the tenant Firewall with the VMs. Tenant administrators maintain the VMs (such as setting up applications on them) by accessing the VMs via the WAN service line, WAN service router, WAN service VLAN, and Tenant VLAN. In a private cloud where jobs are not publicized via the Internet, the Tenant VLAN can be replaced with the Business VLAN.

6. Management VLAN (used solo by tenant / Necessary)
The Management VLAN connects the back-end Firewall with the VMs and is used for Agent communication between the operation management server of the IaaS provider and the VMs.

7. WAN service router (used solo by tenant / Necessary)
Prepare one per tenant. The WAN service router connects the WAN service line with the WAN service VLAN.

8. Internet router (shared by tenants / Option)
Prepare one per data center. The Internet router connects the Internet with the public VLANs.

9. SSL-VPN device (shared by tenants / Option)
Prepare one per data center when secure VM access via the Internet is provided. The SSL-VPN device connects the Internet with the Tenant VLANs. A VPN function is provided to facilitate secure access from the tenant administrator to the tenant Firewall and VMs via the Internet. For each Tenant VLAN, the security policy is set on the SSL-VPN device so that only the related tenant administrator can access it.

10. Back-end Firewall (shared by tenants / Necessary)
Prepare one per data center. The back-end Firewall connects the Management VLANs with the operation management LAN, which connects the operation management server. To separate the operation management LAN of the different tenants, a Firewall function is provided.

11. Portal server Firewall (shared by tenants / Necessary)
Prepare one per data center. The portal server Firewall connects the Internet with the portal server. A Firewall function is provided for secure separation.

12. Tenant LB (used solo by tenant / Option)
Prepare one per tenant. The tenant LB is connected to the Business VLAN and Tenant VLAN and provides an LB function for the tenant. To perform authentication with the ID management server, the tenant LB is set to access the ID management server.

13. Operation management LAN (shared by tenants / Necessary)
The operation management LAN is used to connect the portal server, back-end Firewall, servers accommodating the VMs, and the operation management server of the Virtual DataCenter Automation system.

14. Live migration LAN (shared by tenants / Recommended)
The live migration LAN is used for communication during live migration by connecting the servers accommodating the VMs.

15. ID management server (shared by tenants / Necessary)
The ID management server provides the function to integrate and manage the login accounts of IaaS providers and the login accounts for resources assigned to tenants. It is connected to the operation management LAN so that it can be accessed from the devices and NW devices that use authentication functions.

16. NAS (shared by tenants / Necessary)
A NAS is prepared to use the software repository functions to share and manage VM templates, software, patches, and OS images among pods.
3.1.3 On-premises Cloud
The on-premises cloud is a cloud model in which single or multiple tenants use the Virtual
DataCenter Automation system configured in the data center of a user company. The path is limited
to the user Intranet when tenant administrators or service users access VM on a Virtual DataCenter
Automation system. The configuration elements and usage of the on-premises cloud are described
using the following table and figure:
1. Front-end L3 switch (shared by tenants / Necessary)
Prepare one in the Virtual DataCenter Automation system. The front-end L3 switch is connected with the user Intranet, Business VLAN, and Tenant VLAN to provide a routing function between VLANs and a Firewall function.

2. Business VLAN (shared by tenants / Necessary)
Prepare one or more in the Virtual DataCenter Automation system. The Business VLAN connects the front-end L3 switch with the VMs. A job is publicized within the user company by accessing the applications on the VMs via the user Intranet, L3 switch, and Business VLAN. In the on-premises cloud, the Business VLAN can be replaced with a no-tag LAN.

3. Tenant VLAN (used solo by tenant / Recommended)
Prepare one per tenant. The Tenant VLAN connects the front-end L3 switch with the VMs. Tenant administrators maintain the VMs (such as setting up applications on them) by accessing the VMs via the user Intranet, L3 switch, and Tenant VLAN. In the on-premises cloud, the Tenant VLAN can be replaced with the Business VLAN.

4. Management VLAN (used solo by tenant / Necessary)
Prepare one per tenant. The Management VLAN connects to the back-end L3 switch and is used for Agent communication between the operation management server and the VMs.

5. Back-end L3 switch (shared by tenants / Necessary)
Prepare one in the Virtual DataCenter Automation system. The back-end L3 switch connects the Management VLANs with the operation management LAN, which connects the operation management server. To separate the operation management LAN of the different tenants, a Firewall function is provided.

6. Tenant LB (used solo by tenant / Option)
Prepare one per tenant. The tenant LB is connected to the Business VLAN and Tenant VLAN and provides an LB function for the tenant. To perform authentication with the ID management server, the tenant LB is set to access the ID management server.

7. Operation management LAN (shared by tenants / Necessary)
The operation management LAN is used to connect the portal server, back-end Firewall, servers accommodating the VMs, and the operation management server of the Virtual DataCenter Automation system.

8. Live migration LAN (shared by tenants / Option)
The live migration LAN is used for communication during live migration by connecting the servers accommodating the VMs.

9. ID management server (shared by tenants / Necessary)
The ID management server provides the function to integrate and manage the login accounts of IaaS providers and the login accounts for resources assigned to tenants. It is connected to the operation management LAN so that it can be accessed from the devices and NW devices that use authentication functions.

10. NAS (shared by tenants / Necessary)
A NAS is prepared to use the software repository functions to share and manage VM templates, software, patches, and OS images among pods.
3.1.4 Utilization of the P-Flow Network
The programmable flow (P-Flow) that implements OpenFlow technology consists of a programmable
flow controller (PFC) and programmable flow switch (PFS). In the programmable flow, the PFC
controls paths and the PFS transfers packets. A path information transaction between the PFC and
PFS with the OpenFlow protocol realizes packet transfer by central control. In the programmable
flow, objects such as virtual routers (vRouter) or virtual bridges (vBridge) are used to define the
virtual network (Virtual Tenant Network, VTN).
Based on the centrally controlled path information, the P-Flow network realizes network visualization and a flexible network configuration that is free from layer-2 switch limitations such as the VLAN ID upper limit and loop countermeasures.
1. Network visualization
The GUI of the PFC allows you to check data communication paths on the physical network configuration and on the logical/physical configuration of each tenant.
2. Flexible network configuration (VLAN expansion)
In a legacy network consisting of layer 2 switches, the network assigned to tenants comes from a single pod whose connectivity is assured. The P-Flow network allows you to connect multiple pods at the layer 2 level, and Virtual DataCenter Automation provides a VLAN expansion function to assign a network across multiple pods. With the VLAN expansion function, the system is configured using a network that spans two pods and virtual machines assigned from both pods. When there is a shortage of server resources in the pod where the tenant is stored, the tenant network can be extended to another pod to supplement the insufficient server resources.
3. Designing the P-Flow domain
When designing P-Flow domains, place the pods between which you want resource accommodation to be possible in the same P-Flow domain. When pods are divided into separate P-Flow domains, resource accommodation cannot be performed between pods that belong to different P-Flow domains. For example, for pods and P-Flow domains configured as shown in the following figure, whether resource accommodation is possible between the pods is shown in the following table.
(√: resource accommodation is possible between the pods)

         Pod1   Pod2   Pod3   Pod4   Pod5   Pod6
Pod1     -      √      √      √
Pod2     √      -      √      √
Pod3     √      √      -      √
Pod4     √      √      √      -
Pod5                                 -      √
Pod6                                 √      -

4. Resource accommodation between P-Flow domains using UNC
Resource accommodation can be performed between P-Flow domains by using a UNC in a configuration with multiple P-Flow domains. For example, resource accommodation is possible between all pods in the configuration shown in the figure below. Note that it is necessary to set up a communication route that supports L2 communication between the P-Flow domains.
3.1.5 Multiple Pods
1. PFS (shared by tenants / Necessary)
By connecting the different blade housings to the PFS (programmable flow switch), the different pods are connected at the L2 level.

2. PFC (shared by tenants / Necessary)
The PFC (programmable flow controller) provides central control of multiple PFSs.
3.1.6 Multiple Sites
This is a model in which the previously described public cloud, private cloud, and on-premises cloud configurations are configured with a management domain of multiple sites. The following figure shows the network configuration in a multiple-site environment with the private cloud configuration.
The figure below is an image of a tenant network that spans multiple sites.
A network within the same tenant can be created across multiple sites by connecting the network for tenant communication at the L2 level between sites. Business continuity (BC) and disaster recovery (DR) can also be supported with this configuration.
1. PFS (shared by tenants / Necessary)
Connects different pods at the L2 level by connecting different blade servers using the PFS (programmable flow switch).

2. PFC (shared by tenants / Necessary)
The PFC (programmable flow controller) provides central control of multiple PFSs.

3. UNC (shared by tenants / Necessary)
The UNC (unified network coordinator) provides central control of multiple PFCs.
The following sections describe the automation of resource interchange *1 and disaster recovery in a multiple-site configuration, taking the following logical network configuration as an example.
The diagram below shows the state where Tenant A has configured an operation system at Site 1.
Assume that the operation system cannot be expanded at Site 1 due to insufficient resources. When this state occurs, Virtual DataCenter Automation checks the vacancy of the resources held by the sites and automatically extends the network to another site at the L2 level to make network resources available. This allows the tenant to configure one operation system across sites without any regard to site boundaries.
The diagram below shows the state where the network is extended from Site 1 to Site 2 and one
operation system is configured across the sites.
*1
Resource interchange is to use vacant network resources with no regard to location when network resources in a certain
range become depleted.
The tenant can also configure other operation systems at different sites with sites in mind.
Virtual DataCenter Automation can extend at the L2 level as required, so a disaster recovery
environment can be configured by configuring active and standby operation systems at each site.
3.1.7 IP Address Design
Based on the standard network model described above, design an IP address considering the number,
size, and allocation method of the IP address space.
1. Number of IP address spaces
The number of IP address spaces (the number of VLANs) is determined according to the number of tenants provided by the Virtual DataCenter Automation system and the average number of VLANs per tenant. (See "2.2 System Management Domain (page 31)".)
2. VLAN types
Review the types of VLAN assigned to tenants. Types are classified into Business VLAN, Tenant VLAN, Management VLAN, Public VLAN, and WAN service VLAN. (See "3.1 Studying Network Configuration (Standard Configuration) (page 41)".)
3. Size of the IP address space
The size of the necessary IP address space is designed based on the following index for each VLAN type.
4. Allocation of IP addresses
Examples of the public cloud IP address design are listed in the table below.
VLAN type          Number of VLANs   Allocated VLAN IDs   IP address space
Public VLAN        240               11-250               1.1.x.x/28
Tenant VLAN        240               261-500              172.18.x.x/24
Management VLAN    240               511-750              172.17.x.x/24
Business VLAN      3000              1011-4011            172.16.x.x/28
In case of multiple sites or multiple zones, the IP address space might be designed in each site
or zone. The global IP address assigned to a public VLAN in a multi-site environment is
assigned from one pool in which the range of IP addresses held by the provider is registered.
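As a hedged illustration of how such an allocation plan can be checked mechanically, the following minimal Python sketch (the helper name and the assertion messages are illustrative, not part of the product) verifies that the VLAN ID ranges in the example table above do not overlap and are large enough for the planned number of VLANs.

    # Example plan from the table above: (VLAN type, number of VLANs, first ID, last ID).
    PLAN = [
        ("Public VLAN",     240,   11,  250),
        ("Tenant VLAN",     240,  261,  500),
        ("Management VLAN", 240,  511,  750),
        ("Business VLAN",  3000, 1011, 4011),
    ]

    def check_plan(plan) -> None:
        used = set()
        for name, count, first, last in plan:
            ids = set(range(first, last + 1))
            # Each range must hold the planned number of VLANs and must not
            # collide with the ID ranges already allocated to other types.
            assert len(ids) >= count, f"{name}: VLAN ID range is too small"
            assert not (ids & used), f"{name}: VLAN ID range overlaps another type"
            used |= ids

    check_plan(PLAN)
    print("VLAN ID plan is consistent")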
3.1.8 User Authentication for Network Devices
In the Virtual DataCenter Automation system, the guest OS on the VM, the tenant Firewall, the tenant LB, the SSL-VPN device, and the service portal must be considered for user authentication of tenant administrators.
Assign an initial ID and password for the guest OS on the VM; subsequently, the tenant administrator uses the OS functions to manage users. For the tenant Firewall, tenant LB, SSL-VPN device, and service portal, use LDAP linkage for authentication. By linking the user authentication function of the tenant Firewall, tenant LB, and SSL-VPN device with the user information of the service portal via LDAP, individual user information need not be registered on each of these devices, and synchronization with the user information of the service portal is available.
3.2 Customization of Network Configuration
The previous section described the standard network model assumed in Virtual DataCenter Automation for each cloud model (public cloud, private cloud, and on-premises cloud). This section describes the configuration when optional functions are used and the network configuration is customized.
3.2.1 Public Cloud
Customized network configuration in the public cloud configuration is described below.
1. Data transfer VLAN (used solo by tenant / Option)
Prepare one per tenant. The data transfer VLAN connects the operation management appliance and the back-end firewall. Use it to transfer data when using the VM import function.

2. Back-end firewall (data transfer VLAN) (shared by tenants / Option)
Prepare one per tenant. The back-end firewall connects the data transfer VLAN and the operation management LAN to provide a routing function between the LANs.

3. Physical server (used solo by tenant / Option)
Connect the physical server with devices capable of VLAN control (such as an L2 switch) so that it is connected to the virtual systems in the tenant when assigning physical machines (instead of virtual machines) to tenants as resources.

4. Operation Management Appliance (used solo by tenant / Option)
Assigned to each tenant; provides a custom monitoring function for tenant administrators, an asset management function, and a distribution/application function for software patches.
3.3 Studying Storage Configuration
The perspectives and considerations for studying the storage configuration are listed below.
3.3.1 Storage Configuration
• Entire storage configuration
This section describes configuration examples of the storage assumed in Virtual DataCenter Automation. In this configuration, the storage is used per SIGMABLADE server unit. Resources can be added per SIGMABLADE unit, and backup can be performed per storage unit.
• Storage configuration
For the above entire storage configuration, the detailed storage configuration is described.
Table 3-1 [Storage SAN configuration]
(1) Storage main unit: Housing part of the main storage unit
(2) Additional storage housing: Housing part for the disk drive connection. Add an additional storage housing when there is no free capacity in the disk drive slots.
(3) Disk: Disk drive (SAS, SATA, and others)
(4) Array controller: Redundant configuration of the storage device controller
(5) Communication cable: 8 Gigabit Fibre Channel (FC) connection
(6) SAS controller: Redundancy and multiplexing of communication control by the 8 Gigabit Fibre Channel configuration

Table 3-2 [Storage NAS configuration]
(1) Storage main unit: Housing part of the main storage unit
(2) Additional storage housing: Housing part for the disk drive connection. Add an additional storage housing when there is no free capacity in the disk drive slots.
(3) Disk: Disk drive (SAS, SATA, and others)
(4) Array controller: Redundant configuration of the storage device controller
(5) Communication cable: 10 Gigabit Ethernet connection
(6) LAN controller: Redundancy and multiplexing of communication control by the 10 Gigabit Ethernet configuration
• Usage of storage
Usage of storage assumed in Virtual DataCenter Automation is described.
Storage for tenants
This is the storage resource provided directly to tenants.

Storage for providers and resellers
This is the storage resource for providers and resellers. It is used for the software repository. Since multiple Virtual DataCenter Automation systems use and share it, the use of NAS-configuration storage is recommended.
3.3.2 Point of View of Storage Pool
The storage pool is the storage capacity available when allocating the virtual disk of the system area
(OS boot) or data area of the virtual machine. The points of view of the storage pool are described
below.
• The storage pool manages the logical disk that was extracted from storage and the virtual disk
stored in the logical disk.
• Virtual disks are carved out of a logical disk and provided. However, if the virtual disk size is allowed to vary freely, usage efficiency of the logical disk may decline, and if disks of arbitrary sizes are carved out and returned repeatedly, continuity within the disk can be lost and access performance can decline. Therefore, it is recommended to carve out virtual disks in fixed-size units. Assuming the smallest virtual disk size to be a single unit, manage the number of virtual disks that can be stored in each logical disk (see the sketch after this list).
• Considering the attributes of the division based on the service level, division in tenant units,
division due to the load balancing and availability requirement, and others, multiple storage
pools can exist.
• If provisioning is requested for the storage pool, a virtual disk of size equivalent to the specified
virtual disk unit is extracted from the corresponding logical disk.
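The fixed-unit management described in this list can be illustrated with a minimal sketch (the class and names are illustrative, not a Virtual DataCenter Automation API): each logical disk tracks how many fixed-size units it holds and how many are used, and a provisioning request takes whole units from a logical disk that has enough free units.

    # Minimal sketch of fixed-unit virtual disk management in a storage pool (illustrative).
    UNIT_GB = 50   # assumed smallest virtual disk size, treated as a single unit

    class LogicalDisk:
        def __init__(self, name: str, capacity_gb: int):
            self.name = name
            self.total_units = capacity_gb // UNIT_GB   # capacity expressed in whole units
            self.used_units = 0

        def free_units(self) -> int:
            return self.total_units - self.used_units

    class StoragePool:
        def __init__(self, logical_disks):
            self.logical_disks = list(logical_disks)

        def provision(self, requested_gb: int) -> str:
            """Allocate a virtual disk as a whole number of fixed-size units."""
            units = -(-requested_gb // UNIT_GB)          # round up to whole units
            for ld in self.logical_disks:
                if ld.free_units() >= units:
                    ld.used_units += units
                    return f"{units * UNIT_GB} GB allocated from {ld.name}"
            raise RuntimeError("no logical disk has enough free units")

    pool = StoragePool([LogicalDisk("LD-01", 1000), LogicalDisk("LD-02", 1000)])
    print(pool.provision(120))   # "150 GB allocated from LD-01" (3 units of 50 GB)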
3.3.3 Study points for storage configuration
The items and perspectives to study for the Virtual DataCenter Automation storage configuration are listed below.
1. Storage device (including the connecting configuration between server and storage)
2. Storage capacity
3. Extendibility
4. Availability
5. Functionality
6. Backing up
The storage available in Virtual DataCenter Automation and the above storage requirements are listed below.
Support is shown in the order NEC Storage (M Series) / EMC (VNX Series) / NetApp (FAS Series); √ indicates support.

1. Storage device
- Storage device with standard model configuration: NEC Storage M Series / VNX Series / FAS Series
- Connecting configuration:
  FC connection: √ / √ / -
  iSCSI connection: √ / - / -
  NAS connection: - / - / √

2. Storage capacity (number of virtual machines)
- From the initial number of VMs to the maximum number of VMs: √ / √ / √ (remarks: escalating to the maximum number of virtual machines)
Storage capacity (service model)
- Virtual disk area: √ / √ / √
- RDM area: √ / √ / X (remarks: in a NAS configuration, RDM is not supported)

3. Extendibility
- Adding a disk unit: √ / √ / √
- Adding an additional storage housing: √ / √ / √

4. Availability
- Redundant configuration of disk, controller, power, and others: √ / √ / √
- Non-stop 24-hour operation: √ / √ / √ (remarks: planned maintenance stoppages are excluded)

5. Functionality
- Using the ThinProvisioning function (virtualizing the storage resource): √ / √ / √

6. Backing up
- Replication in housing + replication in snapshot housing: √ / - / - (remarks: the number of generations to manage must be studied)
- Replication in external tape device housing + snapshot replication: √ / √ / √
- D2D2T: √ / √ / √ (remarks: the external tape device and the number of generations to manage must be studied)
3.3.4 Storage device
A storage device supported by Virtual DataCenter Automation is required. See "6.13 Storage (page 107)" to determine the storage device. Also, to determine the connecting configuration between the server and the storage device, select the connecting configuration and transfer speed that correspond to the storage device or that are recommended. Selection examples are listed below.
Storage device     FC    iSCSI   NAS
NEC Storage        √     √       -
EMC                √     -       -
NetApp             -     -       √

Connecting configuration   Transfer speed
FC                          8 Gbps
iSCSI                       10 Gbps
NAS                         10 Gbps / 1 Gbps
3.3.5 Storage capacity
Studying the following items for each storage usage will allow you to calculate the storage configuration and capacity. Based on that, you can plan the addition of disks or of additional storage housings.
• Storage for tenants
- Number of VMs
Studying the resource provision plan, ranging from the number of VMs at initial introduction to the maximum number of VMs, will allow you to calculate the storage capacity necessary for the VMs (a minimal calculation sketch is given at the end of this section).
- Service model
For the disk areas provided as service menu items, it is also recommended to study the service model from a performance and operation perspective. Study the storage configuration, such as the HDD type or RAID level, according to the service menu for the provided disk area, for example when prioritizing performance or when prioritizing capacity efficiency (cost performance). Studying the service menu and what it provides will allow you to calculate the storage configuration and capacity.
* Reference examples of service model
Reference examples for studying the service model are described below.
Prepare three levels of service model (categorized into Gold, Silver, and Bronze).
+ Gold:
You can select both the virtual disk and RDM areas for the data area.
Use SAS for HDD to prioritize the data transfer capability.
Replication backing up is available.
+ Silver:
You can select only the virtual disk for the data area.
Use SATA for HDD to prioritize the cost performance of the service model.
Replication backing up is available.
+ Bronze:
You can select only the virtual disk for the data area.
Use SATA for HDD to prioritize the cost performance of the service model.
Replication backing up is not available.
Figure 3-1 Gold level
Figure 3-2 Silver/Bronze level
* Virtual disk area and RDM area
The following figure is a storage image of the virtual disk and RDM areas.
- Storage for providers and resellers
The storage for providers and resellers is used in the software repository. For how to
calculate storage capacity, see "3.5.4 Sharing of the VM Template (page 74)".
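The capacity estimate described under "Number of VMs" can be illustrated with a minimal sketch; the per-VM disk sizes and the growth margin are assumptions chosen for the illustration, not values given in this guide.

    # Illustrative tenant-storage capacity estimate (per-VM sizes are assumptions).
    SYSTEM_DISK_GB = 50          # assumed system area (OS boot) per VM
    AVERAGE_DATA_DISK_GB = 100   # assumed average data area per VM

    def tenant_storage_gb(max_vms: int, growth_margin: float = 0.2) -> float:
        """Rough storage capacity needed for the maximum planned number of VMs."""
        per_vm = SYSTEM_DISK_GB + AVERAGE_DATA_DISK_GB
        return max_vms * per_vm * (1.0 + growth_margin)

    # Example: a pod planned for up to 1000 virtual machines.
    print(f"{tenant_storage_gb(1000) / 1024:.1f} TB")   # about 175.8 TB including margin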
3.3.6 Extendibility
Study flexible disk capacity extension for cases where capacity must be extended due to a lack of disk capacity or the addition of server resources.
3.3.7 Availability
The study points for the storage reliability configuration are listed below. Study the policy for the
availability. If non-stop 24-hour operation (excluding planned maintenance stoppages) is a general
rule as a service level, the redundant configuration is recommended for the components and path as
far as possible to ensure availability.
Control board: Redundant mechanism in the storage main unit (cache memory/shared memory, controller, and others).
Disk: Redundant configuration by RAID configuration and hot spares.
Communication board: Duplicate the HBA and LAN boards to be connected to the server.
Communication path: Duplicate or multiplex the paths used to communicate with the server.
Power supply: The redundant configuration enables the power unit to operate continuously in case of a single failure.
FAN: The redundant configuration enables the FAN to operate continuously in case of a single failure.
3.3.8 Functionality
Storage devices provide functions that streamline Virtual DataCenter Automation operation. These functions are not mandatory; use them as necessary.
• Thin Provisioning
The Thin Provisioning function virtualizes and allocates the storage resource to reduce and effectively use the physical storage capacity. Variations in disk usage between disks can be absorbed by setting the size of the disks to be created to a value sufficiently larger than the actual usage; using the Thin Provisioning function for this purpose is valid. If the Thin Provisioning function is used, capacity must be monitored as follows (a minimal monitoring sketch follows this item):
- Monitor so that the total actual usage of the LDs is smaller than the pool capacity.
- Monitor so that the actual usage of each LD is smaller than the created LD size.
- In consideration of the lead time needed to add a disk, set thresholds to avoid a physical capacity shortage.
The Thin Provisioning function is supported on NEC Storage, EMC, and NetApp storage devices. For details about the Thin Provisioning function, see the specification of the storage device to be used.
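A minimal sketch of the capacity checks described above (the warning ratio, pool capacity, and names are illustrative assumptions, not product settings):

    # Illustrative thin-provisioning capacity checks (thresholds are assumptions).
    POOL_CAPACITY_GB = 10000
    WARNING_RATIO = 0.8          # warn early to allow lead time for adding disks

    def check_pool(actual_usage_per_ld_gb, created_ld_size_gb):
        """Return warning messages for the monitoring items described above."""
        warnings = []
        total_used = sum(actual_usage_per_ld_gb.values())
        if total_used > POOL_CAPACITY_GB * WARNING_RATIO:
            warnings.append("total LD usage is approaching the pool capacity")
        for ld, used in actual_usage_per_ld_gb.items():
            if used > created_ld_size_gb[ld] * WARNING_RATIO:
                warnings.append(f"{ld}: usage is approaching the created LD size")
        return warnings

    print(check_pool({"LD-01": 7000, "LD-02": 1500},
                     {"LD-01": 8000, "LD-02": 4000}))
    # ['total LD usage is approaching the pool capacity',
    #  'LD-01: usage is approaching the created LD size']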
• Exclusion of overlapped data
The overlapped data exclusion (deduplication) function allows you to eliminate duplicated data in the storage. This reduces the amount of data stored and enhances capacity efficiency. When storing data that includes the same information, such as a template, the VMs created from the template, or daily backup data, storage usage is reduced and capacity efficiency is enhanced.
The overlapped data exclusion function is supported on NEC Storage (HS series), EMC, and NetApp storage devices. For details of this function, see the specification of the storage device to be used.
3.3.9 Backing up
For backup, you can use the functions provided by the storage device. See the specification of the storage to be used and study the backup method.
The standard perspective on backup is described below.
For the backup generation management policy, the number of generations to be stored in the storage, the number to be stored in the secondary backup (external device), and the backup timing must be studied (a minimal illustration follows). Use a tape device, a virtual tape device (HYDRAstor), or other devices as the external device.
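The generation management policy can be illustrated with a minimal sketch; the numbers of generations kept in the storage and on the external device are assumptions for the illustration, not recommended values.

    import datetime

    # Illustrative generation-management policy (values are assumptions).
    GENERATIONS_IN_STORAGE = 1   # e.g. one generation kept by replication in the housing
    GENERATIONS_ON_TAPE = 7      # older generations moved to the external device

    def place_generations(backup_dates):
        """Split backup generations between the primary storage and the external device."""
        ordered = sorted(backup_dates, reverse=True)   # newest first
        in_storage = ordered[:GENERATIONS_IN_STORAGE]
        on_tape = ordered[GENERATIONS_IN_STORAGE:GENERATIONS_IN_STORAGE + GENERATIONS_ON_TAPE]
        expired = ordered[GENERATIONS_IN_STORAGE + GENERATIONS_ON_TAPE:]
        return in_storage, on_tape, expired

    today = datetime.date(2017, 4, 10)
    dates = [today - datetime.timedelta(days=d) for d in range(10)]
    in_storage, on_tape, expired = place_generations(dates)
    print(len(in_storage), len(on_tape), len(expired))   # 1 7 2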
The following areas must be backed up:
• ESX area
• Virtual server area
• Template area
Configuration examples for backup are described below.
• [Backing up the virtual disk area in SAN configuration (NEC Storage)]
- The virtual server area is backed up by Replication in LD units periodically (for example, at 3:00 every day).
- The system and data areas are also backed up simultaneously.
- Backing up by Replication is valid for only one generation. To retain multiple generations, store the backed-up area on tape.
• [Backing up with the snapshot function in the NAS configuration (NetApp)]
- Use the VMware snapshot function for backing up to enable immediate backing up by
tenant administrators.
- Snapshots are obtained in VM units, and multiple generations can be obtained and stored.
• [Recovery in SAN configuration (NEC Storage)]
- The files can be restored from the RV area at once in case of a disk failure. ((1) in the
above figure)
- In case of a logical failure (such as file deletion), the tenant administrator manually recognizes the LD in the RV area, mounts the virtual disk area on the virtual server as another area, and restores the files. ((2) and (3) in the above figure)
• [Recovery in NAS configuration (NetApp)]
- The files can be restored from Snapshot area at once in case of a disk failure. (1 in the
above figure)
- In case of a logical failure (such as file deletion), the tenant administrator manually recognizes the LD in the Snapshot area, mounts the virtual disk area on the virtual server as another area, and restores the files. (2 and 3 in the above figure)
3.4 Studying Configuration of Virtualization Base
This section describes the configuration examples of the virtualization base.
3.4.1 Configuration Examples of Virtualization Base in
VMware vCenter Server Management Environment
The configuration examples of the virtualization base in the VMware vCenter Server management
environment are as follows:
The points of the configuration examples are described below.
1. Install VMware vCenter Server on the management server and register the VM servers.
2. Connect the LAN with the BMC of the VM server to manage the VM server out-of-band.
3. NIC#1 of the VM server is designated as the NIC for management of the VM server.
4. NIC#2 of the VM server is used for connection of VLANs for tenants.
5. NIC#3 of the VM server is used only for live migration.
6. NIC#4 of the VM server is used only for NAS to ensure access performance for the NAS data store.
7. The SAN data store and RDM disk volumes are shared by all VM servers.
8. The NAS data store is shared by all VM servers.
3.4.2 Configuration Examples of Virtualization Base in Hyper-V Environment
Configuration examples of the virtualization base in the Hyper-V environment are as follows:
The points of the configuration examples are described below.
1. Installation of the virtualization base software is not required on the management server.
2. Prepare a domain controller to manage the Hyper-V cluster. Also enable the DHCP server function.
3. Connect the LAN with the BMC of the VM server to manage the VM server out-of-band.
4. NIC#1 of the VM server is designated as the NIC for management of the VM server.
5. NIC#2 of the VM server is used for connection of VLANs for tenants.
6. NIC#3 of the VM server is used for live migration and heartbeat.
7. The SAN data store and RDM/quorum disk volumes are shared by all VM servers.
3.4.3 Configuration Examples of Virtualization Base in KVM
Environment
Configuration examples of the virtualization base in the KVM environment are as follows:
The points of the configuration examples are described below.
1. Installation of the virtualization base software is not required on the management server.
2. Enable the DHCP server function.
3. Connect the LAN with the BMC of the VM server to manage the VM server out-of-band.
4. NIC#1 of the VM server is designated as the NIC for management of the VM server.
5. NIC#2 of the VM server is used for connection of VLANs for tenants.
6. NIC#3 of the VM server is used for live migration and heartbeat.
7. The NFS data store and RDM/quorum disk volumes are shared by all VM servers.
3.5 Studying VM Template
The VM template consists of the hardware settings of the virtual machine and information such as an OS image, and it is used as the mold from which virtual machines are created. Using VM templates can greatly reduce the workload involved in virtual machine installation. Virtual DataCenter Automation provides three template methods, which can be classified into the complete copying type and the differential information retaining type.
• Full Clone [Complete copying type]
The Full Clone method uses the standard template of the virtualization base products. The virtual machine created by Full Clone is an image directly copied from the standard template image. Guest OS information such as the host name or IP address is set using the function of each virtualization base product. The diagram below describes the procedure for virtual machine creation using the Full Clone method.
The master VM is the virtual machine that is the source of template creation. The template for the Full Clone method is created from the master VM, and virtual machines are created from the template.
- Advantages
1. Facilitates template configuration work by the IaaS provider.
2. The created virtual machine is independent from the master VM.
- Disadvantage
1. Virtual machine creation requires time due to the complete copy.
• Differential Clone [Differential information retaining type]
The Differential Clone method creates only the differential information from a base image. The required capacity is small, and the creation time can be reduced. However, management costs are incurred because the master VM snapshot must be managed. The diagram below describes the procedure for virtual machine creation using the Differential Clone method.
The master VM status is stored as a snapshot. An image of the master VM with the snapshot applied is created as a replica VM. For the virtual machine, only the differential information is created based on the replica VM. In Virtual DataCenter Automation, the Differential Clone method is recommended because a large-scale VM environment can be created swiftly and resource consumption can be greatly reduced.
- Advantages
1. Responding to requests from IaaS and tenant users, the virtual machine can be swiftly provided.
2. Updates such as patches for the operating virtual machine can be swiftly applied.
3. IaaS providers can manage the image generation.
4. IaaS providers, IaaS resellers, and tenant administrators can configure a large-scale virtual machine environment within the scope of the limited storage resource.
- Disadvantage
1. The replica VM and the virtual machine that refers to its image must be allocated in the same datastore. Moving between storages is constrained.
The Differential Clone method can also be used to reconstruct virtual machines.
During reconstruction processing, another replica VM is created from a snapshot taken at the time of a system change (such as patch application to the master VM) and is used as the new master image of the virtual machine. Reconstruction processing allows the system to be upgraded efficiently, because jobs common to all virtual machines that share the same template settings, such as snapshot creation or operation settings during the reconstruction, can be performed in a single process. The concept of reconstruction is described in the following figure.
• Disk Clone [Complete copying type]
In Disk Clone, a virtual machine is created by copying the image created from the master VM. Images created from the same master VM can be easily managed with the image management function. Unlike the Differential Clone, no snapshot of the master VM is required, which facilitates management. The diagram below describes the procedure for virtual machine creation using the Disk Clone method.
- Advantages
1. IaaS providers can manage the image generation.
- Disadvantage
1. Virtual machine creation requires time due to the complete copy.
For the availability of each template in each virtual environment, see the table below. Products
requiring specific information settings for guest OS are described in brackets ( ). The
recommended patterns are described in bold face. Not-recommended patterns are described in
italics.
Environment to be managed          | Full Clone                 | Differential Clone         | Disk Clone
VMware (vCenter Server management) | Available (vCenter Server) | Available (vCenter Server) | Available *1 (vCenter Server)
Hyper-V cluster                    | Not available              | Available (DPM)            | Available (DPM)
KVM                                | Not available              | Available (DPM)            | Available (DPM)
*1 The Disk Clone for VMware (vCenter Server management) is not recommended due to the
following disadvantages: templates cannot be used in the vCenter Server and the performance
during the virtual machine creation declines compared with Full Clone.
In Hyper-V, use a VM template separately according to usage.
3.5.1 Linkage between VM Template and Resource Pool
First, the valid range of VM template for each virtualization base is described. The valid range of
VM template differs depending on the virtualization base. The valid ranges of VM template are listed
below. The VM template cannot be shared by different virtualization bases.
Virtualization base type           | Template type      | Target range of virtual machine server
VMware (vCenter Server management) | Full Clone         | The virtual machine server must be managed by the same vCenter Server management server as the template being used. The templates cannot be shared by VMware vCenter Servers.
VMware (vCenter Server management) | Differential Clone | The virtual machine server must be managed by the same vCenter Server management server as the template being used. The templates cannot be shared by VMware vCenter Servers.
VMware (vCenter Server management) | Disk Clone         | The virtual machine server must be managed by the same vCenter Server management server as the template being used. The templates cannot be shared by VMware vCenter Servers.
Hyper-V cluster                    | Differential Clone | The virtual machine server must be connected with the datastore in the storage destination of the image linked with the template. The templates cannot be shared by Hyper-V clusters.
Hyper-V cluster                    | Disk Clone         | The virtual machine server must be connected with the datastore in the storage destination of the image linked with the template. The templates cannot be shared by Hyper-V clusters.
KVM                                | Differential Clone | The virtual machine server must be connected with the datastore in the storage destination of the image linked with the template.
KVM                                | Disk Clone         | The virtual machine server must be connected with the datastore in the storage destination of the image linked with the template.
IaaS providers must prepare VM templates for each OS for multiple IaaS resellers and tenant administrators. The valid template range differs depending on the virtualization base, so the number of templates required for the system configuration also differs.
The following description assumes that IaaS providers have extracted three resource pools, and that IaaS resellers and tenant administrators operate using the VM template for each OS.
• In VMware
Data centers 01 and 02 in the figure are data centers in the VMware vCenter Server. The VMware vCenter Server manages two data centers. Two resource pools are extracted in SSC for the ESX servers managed by data center 01, while one resource pool is extracted for the ESX servers managed by data center 02. The VM template for the OS is managed by the ESX server extracted as resource pool 01 of data center 01.
According to the configuration in the above figure, only one VM template for the OS must be created. Only one VM template is required because, in VMware, the VM template can be shared within the same resource pool, across resource pools, and across different data centers.
• In the Hyper-V cluster
In Hyper-V, an SSC resource pool is extracted for each Hyper-V cluster. The VM template for the OS is managed by the Hyper-V cluster extracted as resource pool 01.
In the configuration in the above figure, three VM templates must be created because a template must be prepared for each resource pool. This is because, in the case of Hyper-V clusters, templates cannot be shared across clusters and cannot be shared across resource pools.
• In KVM
In KVM, an SSC resource pool is extracted for each group that shares the data store. The VM template for the OS is managed by the KVM host extracted as resource pool 01.
According to the configuration in the above figure, three VM templates must be created because a template must be prepared for each resource pool. The reason why three templates are required is that templates cannot be shared across resource pools in KVM (a small counting sketch follows this list).
Next, the linkage between VM templates and the resource pool is described. The VM template is
linked with the resource pool within the valid VM template range described above. The virtual
machine created from the VM template is allocated to the virtualization server extracted as the
resource pool and to the datastore. The following figure describes the extraction of resources for IaaS
resellers or tenant administrators by IaaS providers.
IaaS providers extract a resource pool for each combination of virtualization base and storage. They extract the resource pool considering the machine specification of the virtualization base, the storage performance, and the capacity. In the figure, the "Gold" tag is assigned to VMware resource pool 01 because it is extracted from a virtualization base and storage equipped with high processing performance.
A sub-pool is extracted from the resource pool for an IaaS reseller or tenant administrator. Sub-pools extracted from the "Gold" and "Silver" resource pools can be allocated to the same IaaS reseller or tenant administrator at this point. In this case, when a VM is created with a template linked with the "Gold" resource pool, the actual VM is allocated to the resources from which the "Gold" resource pool was extracted. In the figure, it is allocated to the most appropriate virtualization base or datastore area among the virtualization bases (VMware ESX#01, 02, and 03) and the EMC storage datastore areas (datastore#01, 02, and 03).
IaaS providers and tenant administrators must use the template considering that it is linked with the
resource pool. IaaS providers must design the system considering the available template range.
3.5.2 VM Template Creation Policy
The creation policy for the VM template is described below.
IaaS providers create VM templates for all of the public cloud, private cloud, and on-premises cloud systems. IaaS resellers and tenant administrators each manage their systems differently. Therefore, it is recommended to create a template with the security measures provided by the OS applied, instead of installing application-specific software in the template. Update the service pack (for Windows servers) or kernel (for Red Hat Enterprise Linux servers), and then create the template.
The figure below shows how the template is applied to a Windows server according to the above policy.
In the Differential Clone method, image generations can be managed using the image management function. In this case, IaaS providers must appropriately manage the default images to be used for the template.
3.5.3 Using VM Template
Fully understand the VM template features of Virtual DataCenter Automation and the linkage with
the resource pool, and then operate it according to the public cloud, private cloud, and on-premises
cloud systems. The figure below describes an example of using the template by tenant administrators
in the public cloud system.
Responding to requests from IaaS resellers and tenant administrators, the IaaS provider allocates sub-pools and creates a VM template.
Each tenant administrator uses the template to create a virtual machine. Tenant administrator A
creates a VM using the template (Full Clone method) linked with the resource pool "Gold" so that
users can access it securely at any time.
Tenant administrator B creates a VM using the template (Disk Clone method) linked with the
resource pool "Silver" considering the cost and performance.
Tenant administrator C creates a VM using the template (Differential Clone method) linked with the
resource pool "Gold" due to requests to provide many users with a quick-access environment and
secure access at any time.
Tenant administrators must select the VM template considering cost performance, time required for
configuration, access performance during operation, and others.
3.5.4 Sharing of the VM Template
When allocating multiple Virtual DataCenter Automation management servers, study the VM template sharing function.
Virtual DataCenter Automation replicates a VM template among the management servers when the VM template is required and automatically registers it so that it can be used.
Environment of managed machine     | VM template sharing function
VMware (vCenter Server management) | Available (vCenter Server)
Hyper-V cluster                    | Not available
KVM                                | Not available
Note
When using the VM template sharing function, it is only possible to specify VM template sharing for one
virtualization platform in the same management server, and that is the VMware vCenter Server management
environment.
The diagram below provides an overview of VM template sharing.
• Entire configuration
NAS devices are installed in each site to store the VM template as shown in the diagram below.
For the network configuration, see "3.1 Studying Network Configuration (Standard
Configuration) (page 41)".
• Flow of VM template replication
The flow of VM template replication is described below.
Create the data store for each management server on the NAS, and register it in the hypervisor. Virtual DataCenter Automation controls the hypervisor so that the VM template is created on this data store when creating the VM template. (1)
If the VM template is required by a management server when creating virtual machines, the management server copies the template from the data store. (2)
If the VM template does not exist in the NAS located in the management server's site but exists in another site, the VM template is copied across sites using the hypervisor. (3)
When copying is finished, the management server registers the VM template in the hypervisor. (4)
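The replication steps (1) to (4) above can be summarized as a simple decision procedure. The following Python fragment is a simplified illustration only; the data structures and function names are assumptions, not a Virtual DataCenter Automation API.

# Simplified sketch of the VM template replication flow in steps (1)-(4).
def ensure_template_available(template, local_site, remote_sites):
    datastore = local_site["nas_templates"]      # per-management-server data store on the NAS (1)
    if template not in datastore:
        source = next((s for s in remote_sites if template in s["nas_templates"]), None)
        if source is None:
            raise LookupError(template + " does not exist in any site")
        datastore.add(template)                  # copied from another site via the hypervisor (3)
    # copied from the local data store when needed (2), then registered in the hypervisor (4)
    local_site["registered"].add(template)

local = {"nas_templates": set(), "registered": set()}
remote = {"nas_templates": {"rhel7-base"}, "registered": set()}
ensure_template_available("rhel7-base", local, [remote])
print(local["registered"])  # the template is now available and registered at the local site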
The study points when using this function are described below.
1. Installation of NAS
NAS devices are installed in each site to store the VM templates. The VM templates for IaaS providers are replicated according to the number of management servers. Therefore, the disk capacity obtained from the calculation formula below is required.
Necessary NAS disk size = Disk usage of all templates x Number of management servers
Example: When creating 50 VM templates with 20GB virtual disks and there are 10 management servers:
20GB x 50 templates x 10 servers = 10TB
For a VM template created by an IaaS reseller or tenants, capacity is required according to the number of management servers used by the IaaS reseller or tenants.
Calculate the necessary NAS disk capacity based on the above concept (a short calculation sketch follows this list).
2. Network protocol between the management server and NAS
The management server mounts the NAS data store to replicate the VM template. To access the shared files on the NAS, either the NFS protocol or the CIFS protocol can be used. Enable the selected protocol and set the access privileges in the NAS settings so that the management server can access it. When the NFS protocol is selected, a user mapping server must be installed for NFS authentication.
The characteristics of NFS and CIFS are as follows. It is recommended to use NFS because the setting is automated by workflows.
Item          | NFS                                                                                                                                            | CIFS
Advantages    | • A datastore can be created with the logical disk creation workflow for the software repository.                                             | • No additional component is required. • No user mapping server (Active Directory setting) is required.
Disadvantages | • The NFS service must be installed on the management server. • Installation of a user mapping server (Active Directory setting) is required. | • The setting to enable the CIFS protocol must be performed manually on the storage.
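The NAS sizing formula in study point 1 above can be checked with a short calculation. The following is a minimal Python sketch; the function name is an illustrative assumption.

# Necessary NAS disk size = disk usage of all templates x number of management servers
def required_nas_capacity_gb(template_disk_gb, template_count, management_servers):
    return template_disk_gb * template_count * management_servers

# Example from the text: 50 templates with 20GB virtual disks and 10 management servers.
print(required_nas_capacity_gb(20, 50, 10))  # -> 10000 GB, i.e. 10TB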
3.6 Studying DC Resource Group Configuration
This section describes the resource group configuration.
3.6.1 DC Resource Group
A DC resource group is a pool of virtual machines, virtual network devices, and logical resources
provisioned by vDC Automation. Pooling these makes it possible to centrally manage and
automatically control a large amount of resources.
• Virtual machine (SigmaSystemCenter resource pool)
• Virtual network devices (virtual firewall, virtual load balancer)
• Logical resource (VLAN/IP subnet, global IP address, OS license)
A DC resource group consists of resource groups, which in turn group management groups. A management group is a pool of resources in one pod. Multiple management groups can be created from one pod. By using ProgrammableFlow, you can group multiple management groups created from multiple pods to create a resource group that covers multiple pods or multiple sites.
Note
Virtual DataCenter Automation assigns all the virtual machines, virtual network devices, and VLANs that
make up a tenant from one resource group. Usually, a resource group consists of resources in one pod. To
assign resources that are used across multiple pods or multiple sites to a tenant, you must define the resource
group hierarchically. This allows you to assign resources within the range of the same resource group.
Create a SigmaSystemCenter resource pool and configure network devices with Network Manager
before creating a resource group. [Device Setting] is used to register the physical network devices to
be controlled by Virtual DataCenter Automation. Then, register the SigmaSystemCenter resource
pool, virtual network devices, and logical resources as components of the management group.
However, ensure that the number of VLANs you register in the management group does not exceed
the maximum number of active VLANs in the network device or the virtual switch of the hypervisor
in the pod. Check the usage and availability of resources in each resource group in the management
group.
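The VLAN constraint mentioned above can be treated as a simple pre-check before registering VLANs in a management group. The sketch below is illustrative only; the default limit of 4094 is the usual 802.1Q maximum and is an assumption, so use the actual limit of the network device or virtual switch in the pod.

# Hypothetical pre-check: refuse to register VLANs if the total would exceed
# the maximum number of active VLANs in the pod's devices or virtual switches.
def can_register_vlans(registered_vlans, vlans_to_add, max_active_vlans=4094):
    return registered_vlans + vlans_to_add <= max_active_vlans

print(can_register_vlans(3800, 200))   # True: 4000 <= 4094
print(can_register_vlans(4000, 200))   # False: 4200 > 4094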
3.7 Studying Resource Pool Configuration
This section describes the resource pool and sub-pool configurations.
3.7.1 Resource Pool
The resource pool is a concept of SigmaSystemCenter whereby the amount of resources that can be
allocated to the virtual machine is abstracted for management. A resource pool is created for each
virtualization base such as the datacenter/cluster of VMware and Hyper-V cluster.
The following resources can be abstracted in the resource pool.
• CPU
The number of virtual CPUs that can be allocated to the virtual machine
• Memory
The size of memory that can be allocated to the virtual machine
• Disk
The disk capacity to which the images of the system disk and extended disk of the virtual
machine can be allocated
• Disk volume for RDM
The group of disk volumes to which the extended disk of the virtual machine can be allocated with RDM. The disk volumes are divided into groups by size in units of 10GB, and the number of disks is managed for each group.
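The 10GB grouping of RDM disk volumes described above can be illustrated with a small sketch (names are illustrative; this is not product code).

# Group RDM disk volumes into 10GB size buckets and count the disks per group.
from collections import Counter

def group_rdm_volumes(volume_sizes_gb):
    return Counter((size // 10) * 10 for size in volume_sizes_gb)

print(group_rdm_volumes([15, 18, 25, 40, 47]))
# Counter({10: 2, 40: 2, 20: 1}): two disks in the 10GB group, two in the 40GB group, one in the 20GB group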
3.7.2 Resource Pool and Sub-pool
A sub-pool is a partial resource pool created by extracting a portion of a resource pool.
The resource pool and sub-pool are described below.
• Resource pool
This is the pool created by summing up the total of the hardware resources that constitute the
virtualization base. The resource pool size is a physical resource amount, and depends on the
hardware specification.
• Sub-pool
This pool is created by extracting a certain amount of resources from the resource pool. The sub-pool size is the upper limit of resource consumption enforced by the software. If overcommit is enabled for the sub-pool, a sub-pool whose size exceeds the resource pool capacity can be created.
Virtual DataCenter Automation recommends extracting a sub-pool with overcommit enabled from the resource pool and allocating the sub-pool, instead of directly allocating the resource pool to the resource user. By extracting sub-pools, resources can be assigned flexibly to individual IaaS resellers or tenants.
The concept of the resource pool and sub-pool is described in the following figure. Note that resources exceeding the physical resource amount can be allocated to sub-pools by enabling overcommit.
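The sub-pool sizing rule above, including the effect of overcommit, can be summarized in a short check. This is a minimal sketch under the stated assumptions, not product logic.

# A sub-pool may exceed the physical resource pool capacity only when overcommit is enabled.
def can_create_subpool(pool_capacity, already_allocated, requested, overcommit):
    if overcommit:
        return True  # the sub-pool size is then only a software limit
    return already_allocated + requested <= pool_capacity

print(can_create_subpool(pool_capacity=100, already_allocated=80, requested=40, overcommit=True))   # True
print(can_create_subpool(pool_capacity=100, already_allocated=80, requested=40, overcommit=False))  # False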
3.7.3 Configuration Examples of Sub-pool
The sub-pool configuration varies depending on how to allocate the resource as follows.
• Allocating the resource to a tenant via an IaaS reseller
- The sub-pool is allocated to the IaaS reseller.
- The size of the sub-pool is determined based on the sales target for each IaaS reseller.
• Allocating the resource to a tenant without an IaaS reseller
- The sub-pool is allocated to a tenant.
- The size of the sub-pool is determined based on the demand forecast for each tenant.
The following figure is the resource pool configuration example when allocating the resource to a
tenant via an IaaS reseller:
• For the VMware resource pool, “VMware Gold” and “VMware Silver” are prepared according
to the service level.
• For the Hyper-V resource pool, the shared resource pool “Hyper-V” and solo resource pool
"Hyper-V Dedicated" are prepared.
• Tenants 1 and 2 select any of the sub-pools of "VMware Gold", "VMware Silver", and "Hyper-V" allocated to IaaS reseller A, and create the virtual machine.
• Tenants 3 and 4 select any of the sub-pools of "VMware Gold", "VMware Silver", and "Hyper-V" allocated to IaaS reseller B, and create the virtual machine.
• For tenant 5, only the sub-pool of "Hyper-V Dedicated" allocated to IaaS reseller C is available. Note that a sub-pool is extracted and allocated via an IaaS reseller even if the resource pool is used solo by a tenant.
The following figure is the resource pool configuration example when allocating the resource to a tenant without an IaaS reseller.
• For the VMware resource pool, "VMware Gold" and "VMware Silver" are prepared according
to the service level.
• For the Hyper-V resource pool, the shared resource pool "Hyper-V" and solo resource pool
"Hyper-V Dedicated" are prepared.
• Tenant 1 selects any of the sub-pools of "VMware Gold", "VMware Silver", and "Hyper-V" allocated to tenant 1, and creates the virtual machine.
• Tenant 2 selects any of the sub-pools of "VMware Gold", "VMware Silver", and "Hyper-V" allocated to tenant 2, and creates the virtual machine.
• For tenant 3, only the sub-pool of "Hyper-V Dedicated" allocated to tenant 3 is available. Note that a sub-pool is extracted and allocated even if the resource pool is used solo by a tenant.
3.8 Studying Resource Pool for Each Cloud
Virtual DataCenter Automation assumes 3 types of cloud configurations as a cloud application
pattern: public cloud, private cloud, and on-premises cloud.
The point of view for the resource pool in each cloud configuration is described below.
3.8.1 Public Cloud
The public cloud assumed in Virtual DataCenter Automation is the cloud configuration operated by
allocating the resource pool for shared or solo use to multiple companies using the externally
operated data center.
The features of the public cloud are described below.
1. The IaaS provider operates the data center. He/she sells the resources directly or indirectly via an IaaS reseller to the tenants. To sell the resource directly, the provider and tenants agree on the conditions of use (price, SLA, and others), and then conclude an agreement for use. Conversely, to sell the resource via an IaaS reseller, the provider and reseller agree on the service contents (infrastructure type, SLA, and others), and then conclude a sales agreement.
2. The resource pool in the data center is used solo or shared by tenants or the IaaS reseller. According to the demand forecast of the tenants or the sales target of the IaaS reseller, the required quantity of the resource is divided as a sub-pool to be allocated to the tenant or IaaS reseller.
3. The IaaS reseller is the sales agency or reseller selling the resources of the virtual system. He/she sells the resources to the tenants holding a basic agreement with him/herself within the scope of the agreement with the IaaS provider.
4. The tenant is a company organization using the external data center. A tenant administrator belongs to the tenant and manages the tenant's virtual system. The tenant holding an agreement for use with the IaaS provider uses the virtual system within the scope of the agreement. Conversely, the tenant holding a basic agreement with the IaaS reseller purchases the virtual system on an on-demand basis.
5. Internet access to the virtual system in the data center is available. To secure Internet access safety, an SSL-VPN device is installed in the data center.
3.8.2 Private Cloud
The private cloud assumed in Virtual DataCenter Automation is the cloud configuration operated by
allocating the resource pool for solo use to multiple companies using the externally operated data
center.
The features of the private cloud are described below.
1. The IaaS provider operates the data center. He/she sells the resources directly or indirectly via an IaaS reseller to the tenants. To sell the resource directly, the provider and tenants agree on the conditions of use (price, SLA, and others), and then conclude an agreement for use. Conversely, to sell the resource via an IaaS reseller, the provider and reseller agree on the service contents (infrastructure type, SLA, and others), and then conclude a sales agreement.
2. The resource pool in the data center is used solo by the tenants or IaaS reseller. According to the demand forecast of the tenants or the sales target of the IaaS reseller, the required quantity of the resource is divided as a sub-pool to be allocated to the tenant or IaaS reseller.
3. The IaaS reseller is the sales agency or reseller selling the resources of the virtual system. He/she sells the resources to the tenants holding a basic agreement with him/herself within the scope of the agreement with the IaaS provider.
4. The tenant is a company organization using the external data center. A tenant administrator belongs to the tenant and manages the tenant's virtual system. The tenant holding an agreement for use with the IaaS provider uses the virtual system within the scope of the agreement. Conversely, the tenant holding a basic agreement with the IaaS reseller purchases the virtual system on an on-demand basis.
5. Internet access to the virtual system in the data center is available. To secure Internet access safety, an SSL-VPN device is installed in the data center.
6. In addition, the virtual system in the data center can be accessed via a closed-network WAN service such as an IP-VPN or dedicated line.
3.8.3 On-premises Cloud
The on-premises cloud assumed in Virtual DataCenter Automation is the cloud configuration operated by allocating the resource pool for shared or solo use to multiple internal organizations (departments, subsidiaries, and others) using an internally operated data center.
The features of the on-premises cloud are described below.
1. The IaaS provider operating the data center is the IT section that manages a data center inside the company. The provider and tenants agree on the conditions of use (price, SLA, and others), and then conclude an agreement for use.
2. The resource pool in the data center is used solo or shared by tenants. According to the demand forecast of the tenants, the required quantity of the resource is extracted as a sub-pool to be allocated to the tenant.
3. The tenant is an internal organization (department or subsidiary) using the internal data center. A tenant administrator belongs to the tenant and manages the tenant's virtual system. The tenant uses the virtual system within the scope of the agreement held with the IaaS provider.
4. Intranet access to the virtual system in the data center is available.
Chapter 4.
Design of Operation Management
Server Configuration
This chapter provides supplementary notes for the standard configuration.
Contents
4.1 Studying ID Management ..........................................................................................................86
4.2 Studying DB Configuration........................................................................................................87
4.3 Studying Management of 100000 Virtual Machines...................................................................89
4.1 Studying ID Management
4.1.1 Users Handled in ID Management
The type of user information handled in Virtual DataCenter Automation is described below.
                              | IaaS provider | Tenant administrator
Operation management function | √             | -
Network devices               | √             | √
• IaaS providers using the operation management function
The operation management function is used to manage login accounts for the SystemManager G, AssetSuite, Network Manager, SigmaSystemCenter, and DeploymentManager components that constitute Virtual DataCenter Automation. If IaaS providers integrally manage their login accounts using the ID management server, they can log into the monitoring screen of each function with the login accounts created on the ID management server, instead of creating login accounts for each function. The users (IaaS providers) are registered when configuring the data center.
• IaaS providers managing network devices
Network devices refer to the Fortinet FortiGate, A10 Thunder/AX series, and F5 BIG-IP devices compatible with Virtual DataCenter Automation. The users (IaaS providers) managing network devices are registered when configuring the data center. Register them in the network devices without using the ID management server.
• Tenant administrators using network devices
Virtual DataCenter Automation allows tenant administrators to log into the devices within the scope of the virtual resources assigned to tenants, such as the VDOM (virtual domain provided by the firewall virtualization function of FortiGate) or partition. Users (tenant administrators) are registered when they are assigned to tenants.
In Virtual DataCenter Automation, the MasterScope Identity Manager component (included in this product) or Active Directory can be used for ID management. For how to register accounts, see Chapter 8 Setting up the ID Management Server in the Virtual DataCenter Automation Configuration Guide.
4.1.2 Precautions for ID Management
IaaS providers can register accounts individually. However, Virtual DataCenter Automation does not provide ID management functions other than those described above.
Manage the accounts that IaaS users in a tenant use for operation separately from this ID management server. Consider measures such as introducing an authentication server into the virtual machines assigned to tenants.
When using Active Directory for the ID management server, handled users must be managed in the
same hierarchy. For details, see 8.3 Using Active Directory in Virtual DataCenter Automation
Configuration Guide.
4.1.3 ID Management Configuration
Allocate one ID management server per site. The ID management server must be allocated so that
users can access the authentication function using this server from network devices.
For multiple sites, allocate one server per site, and synchronize among ID management servers using
the replication function.
4.2 Studying DB Configuration
4.2.1 Point of View for DB Configuration
In Virtual DataCenter Automation, the possible DB configurations are as follows.
• Local DB (recommended setting)
The DB is allocated on the servers themselves (global management server, management server, and VM monitoring server).
• Remote DB
The DB used by the global management server, management server, and VM monitoring server is allocated to a separate server, which acts as the DB server.
When managing large-scale virtual machines, allocate the DB server so that the data of 100,000 or
fewer virtual machines per DB server can be stored. Allocate the DB server to avoid access from
servers to the DB server across multiple sites.
When managing 100,000 virtual machines (the largest Virtual DataCenter Automation
configuration), the data of 100,000 virtual machines are stored in one global management server, the
data of 1000 virtual machines are stored in one management server, and the data of 256 virtual
machines are stored in one VM monitoring server.
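Based on the per-server figures above, the number of servers required for a given scale can be estimated roughly as shown below. This is a naive illustrative sketch only; actual sizing should follow the configuration guides.

# One global management server: up to 100,000 VMs; one management server: up to 1,000 VMs;
# one VM monitoring server: up to 256 VMs.
import math

def servers_needed(total_vms):
    return {
        "global_management_servers": math.ceil(total_vms / 100000),
        "management_servers": math.ceil(total_vms / 1000),
        "vm_monitoring_servers": math.ceil(total_vms / 256),
    }

print(servers_needed(100000))
# {'global_management_servers': 1, 'management_servers': 100, 'vm_monitoring_servers': 391}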
4.2.2 DB Configuration
Locally allocate the DB used in Virtual DataCenter Automation to a global management server,
management server, and VM monitoring server.
Figure 4-1 Configuration example (local allocation of DB on servers)
Instead of allocating the DB used in Virtual DataCenter Automation to a global management server,
management server, and VM monitoring server, allocate the DB to other servers as the DB server.
Figure 4-2 Configuration example (allocation of the DB server)
4.3 Studying Management of 100000 Virtual
Machines
4.3.1 Point of View for Management of 100000 Virtual
Machines
When managing large-scale virtual machine environments, the performance of the global management server that processes requests from portal servers may become a bottleneck. In this case, allocate a load balancer between the portal servers and the global management server. Dividing the service governor function of the global management server and allocating the divided functions enables scale-out of the global management server.
4.3.2 Configuration of Management of 100000 Virtual
Machines
Allocate a load balancer between the portal server and global management server. By allocating
multiple service governor functions of the global management server to the back end of the load
balancer, the load can be divided.
For the setting procedure, see 3.1 Redundant Configuration of the Service Governor in the Virtual DataCenter Automation Installation Guide.
Chapter 5.
Design of Optional Function
This chapter describes considerations for the Virtual DataCenter Automation optional function.
Contents
5.1 Studying Distribution Package Configuration ............................................................................92
5.2 Studying Physical Machine Configuration .................................................................................93
5.1 Studying Distribution Package Configuration
To install software in the created VM, the software must be registered as a distribution package. A distribution package is a set of information that consists of a group of files to be distributed and the settings for post-distribution operation. The points of view and considerations when studying introduction of distribution packages are described below.
• Supported middleware
The middleware that can be distributed is described in "6.14 Distributed Middleware (page 107)".
• Distribution package unit
Create a distribution package for each middleware version. Create the packages so that the combination of package name and version is unique. Register the package name and version to match the product name and version. As many packages with the same product name are created as there are product versions (see the sketch after this list).
Example: Package name: Oracle, Package version: 10
Example: Package name: Oracle, Package version: 11g
• Configuration of servers for storing a distribution package
The server configuration studied when creating the distribution package is described.
NAS devices are installed in each site to store the distribution package as shown in the diagram
below. The created distribution package is stored in all NAS installed in the sites.
Since the files necessary for installing each software program are included in the distribution package, the size of one distribution package is approximately a few MB to a few GB. Therefore, secure disk capacity on the NAS according to the distribution packages to be created.
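The uniqueness rule for the package name and version combination (see "Distribution package unit" above) can be modeled as a registry keyed by that pair. The class and method names below are illustrative assumptions, not a product API.

# A package registry in which the (name, version) combination must be unique.
class PackageRegistry:
    def __init__(self):
        self._packages = {}

    def register(self, name, version, files):
        key = (name, version)
        if key in self._packages:
            raise ValueError("package %s %s is already registered" % (name, version))
        self._packages[key] = files

registry = PackageRegistry()
registry.register("Oracle", "10", ["files for version 10"])
registry.register("Oracle", "11g", ["files for version 11g"])  # same name, different version: allowed
# Registering ("Oracle", "10") again would raise ValueError: duplicate name/version.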
5.2 Studying Physical Machine Configuration
5.2.1 Physical Machine Configuration
As a resource to assign to tenants, Virtual DataCenter Automation is compatible not only with virtual
machines on the virtualization base, but also with physical machines. The study points when handling
physical machines are listed below.
Item              | Details
Physical machines | Serial number of physical machines, installed CPU, memory, NIC, and HBA
Network           | Number of installed NICs, network devices of connecting destinations
Storage           | Embedded disk configuration, external storage, OS boot configuration
OS image          | Creation policy, sharing scope
5.2.2 Physical machines
Physical machines are assigned one by one in response to requests. It is recommended to prepare
multiple physical machines with the same configuration and same specification so that the same
machine can be assigned in response to requests for the same specification. To meet requests for
different specifications, prepare physical machines for each specification.
Physical machines are controlled by SigmaSystemCenter and DeploymentManager, which are components of the management server. Therefore, the devices supported by these components must be considered.
5.2.3 Network
When an L2 switch is connected according to "3.2 Customization of Network Configuration (page 53)", the VLAN enables communication between the network devices assigned to tenants and the physical machines, as well as with other network devices, virtual machines, and physical machines.
As a study point, when adding physical machines, plan the number of NICs to be installed in the physical machines, and the number of ports and L2 switches to which those NICs connect.
5.2.4 Storage
Physical machines can be provided with local storage embedded in the physical machine or with external SAN/NAS storage.
• Local storage
When physical machines with embedded disks are assigned, storage is provided without any storage control. To change the embedded disk configuration, the work must be performed on the physical machine itself; therefore, it cannot be changed easily.
• External storage
External storage is provided by assigning it from SAN (FC/iSCSI) or NAS storage devices to the physical machines to be assigned. The connection configuration between the physical machines and the storage device, and the control of the storage device, must be considered. External storage enables SAN boot and iSCSI boot configurations.
5.2.5 OS Image
An OS image is a pattern of the OS to be installed on physical servers. Similar to the VM templates of virtual machines, the OS can be installed on physical servers from one image by setting a different host name or IP address for each server.
An OS image strongly depends on the hardware configuration of a master machine of the creation
source. An OS image can be installed in physical machines with the same configuration as a master
machine. For the availability of installation in physical machines with different configurations, see
Q4. Restriction of Machines Used in Main and Standby Machines under Configuration in the
SigmaSystemCenter FAQ (Japanese only).
Chapter 6.
Operating Environments/System
Requirements
Before the installation of Virtual DataCenter Automation, the system must be designed with careful
consideration of system requirements, hardware environments, and others. This section describes the
Virtual DataCenter Automation operating environment.
Contents
6.1 Virtual DataCenter Automation Version Information .................................................................96
6.2 Global Management Server........................................................................................................96
6.3 Management Server ...................................................................................................................97
6.4 VM Monitoring Server...............................................................................................................98
6.5 Managed Machine (Virtual Base) ...............................................................................................99
6.6 Managed Machine (Physical Machine) ....................................................................................101
6.7 Management Agent ..................................................................................................................102
6.8 Console ....................................................................................................................................103
6.9 ID Management Server ............................................................................................................103
6.10 DB Server ..............................................................................................................................104
6.11 Service Governor ...................................................................................................................105
6.12 Network Devices....................................................................................................................106
6.13 Storage ...................................................................................................................................107
6.14 Distributed Middleware..........................................................................................................107
6.15 Monitored Middleware...........................................................................................................108
6.1 Virtual DataCenter Automation Version
Information
The version information of the components included in Virtual DataCenter Automation v4.0 is listed
below.
Function name                  | Version
SigmaSystemCenter              | 3.6
DeploymentManager              | 6.6
SystemManager G                | 7.0
Network Manager                | 6.1.2.32
AssetSuite                     | 3.2.1.17
Identity Manager               | 5.1.0
Network Automation             | 3.0
Topology Template Orchestrator | 1.0
6.2 Global Management Server
To operate the standard function of Virtual DataCenter Automation, and Network Automation, the
following system requirements must be met for the global management server.
To use the same management server for Virtual DataCenter Automation, Network Automation and
linked products, the system requirements of linked products must also be met.
When using vDC Automation portal for the service portal, also refer to Chapter 1 Operating
Environments in Virtual DataCenter Automation Portal Installation Guide.
CPU: At least Intel Compatible 2GHz 4 Core*1
Memory capacity*2: At least 8GB*1
Disk capacity*3: At least 50GB
NIC: At least 1Gbps
OS*4:
• Windows Server 2008 Standard (x64) R2 / R2 SP1
• Windows Server 2008 Enterprise (x64) R2 / R2 SP1
• Windows Server 2012 Standard / R2
• Windows Server 2012 Datacenter / R2
• Windows Server 2016 Standard
• Windows Server 2016 Datacenter
Display resolution: At least 1024 x 768 pixels
Required software:
• Microsoft SQL Server 2012 (64bit) or later
• .NET Framework 3.5 Service Pack 1
• .NET Framework 4.0
• Web browser
Remarks: To construct storage for the software repository with NFS sharing, the NFS service must be installed.
*1 The necessary system resources vary depending on the number of management servers under the global management server.
*2 The memory capacity used in the database (minimum: 1GB, recommended: at least 4GB) is included.
*3 The disk capacity necessary for installing components. Separately, free space of 1GB or more is
required in %TMP% or %TEMP% as the working area during installation.
The disk capacity for the database used for components is required separately.
To install the linked product to the same management server, the disk capacity for the linked product is
required separately.
*4 Only the full installation is supported. Server Core installation is not supported.
6.3 Management Server
To operate the standard function of Virtual DataCenter Automation, and Network Automation, the
following system requirements are required for the management server.
To use the same management server for Virtual DataCenter Automation, Network Automation, and
linked products, the system requirements of the linked products must also be met. For details about
the system requirements required when using the virtual environment management function, see
"6.5 Managed Machine (Virtual Base) (page 99)".
CPU: At least Intel Compatible 2GHz 4 Core*1
Memory capacity*2: At least 16GB*1
Disk capacity*3: At least 6GB
NIC: At least 1Gbps
OS*4:
• Windows Server 2008 Standard (x64) R2 / R2 SP1
• Windows Server 2008 Enterprise (x64) R2 / R2 SP1
• Windows Server 2012 Standard / R2
• Windows Server 2012 Datacenter / R2
• Windows Server 2016 Standard
• Windows Server 2016 Datacenter
Display resolution: At least 1024 x 768 pixels
Required software:
• Microsoft SQL Server 2012 (64bit) or later
• IIS version 6.0 or later
• .NET Framework 3.5 Service Pack 1
• Microsoft Chart Controls for Microsoft .NET Framework 3.5*5
• .NET Framework 4.5.2
• ASP.NET 2.0
• Windows Management Framework 4.0
• Web browser
Remarks:
• A DHCP server is required on the same network as the DPM server.*6
• To install DPM on the management server, JRE (Java Runtime Environment 32 bit version) 6.0 Update29 is required.*7
• To control PET reception with Out-of-Band Management, SNMP Service must be installed.
• To use ESMPRO/ServerManager from the browser, JRE (Java Runtime Environment) 5.0 or later must be installed on the machine using the browser.
• To use VMware in the virtualization base and use the template replication and brought-in VM functions, VMware vSphere PowerCLI and the NFS service must be installed.
• To construct storage for the software repository with NFS sharing, the NFS service must be installed.
*1 The necessary system resources vary depending on the number of hosts of virtual machines to be
managed by the management server.
*2 The memory capacity used in the database (minimum: 1GB, recommended: at least 4GB) is included.
*3 The disk capacity necessary for installing components. Separately, free space of 1GB or more is
required in %TMP% or %TEMP% as the working area during installation.
The disk capacity for the database used for components is required separately.
To install the linked product to the same management server, the disk capacity for the linked product is
required separately.
*4 Only the full installation is supported. Server Core installation is not supported.
*5 This is installed automatically by the installer during the component installation.
*6 Operation without the DHCP server is also available. If not using a DHCP server, some functions are
restricted.
*7 JRE6.0 Update29 is included in this product.
6.4 VM Monitoring Server
To operate the standard function of VM monitoring server, the following system requirements must
be met.
To use the same management server for linked products, the system requirements of linked products
must also be met.
CPU: At least Intel Compatible 2GHz 2 Core
Memory capacity*1: At least 4GB
Disk capacity*2: At least 4GB
NIC: At least 1Gbps
OS*3:
• Windows Server 2008 Standard (x64) R2 / R2 SP1
• Windows Server 2008 Enterprise (x64) R2 / R2 SP1
• Windows Server 2012 Standard / R2
• Windows Server 2012 Datacenter / R2
• Windows Server 2016 Standard
• Windows Server 2016 Datacenter
Required software: Microsoft SQL Server 2012 (64bit) or later
Remarks:
• The VM monitoring server must be installed on the same subnet as a management server.
• To construct storage for the software repository with NFS sharing, the NFS service must be installed.
*1 The memory capacity used in the database (minimum: 1GB, recommended: at least 4GB) is included.
*2 The disk capacity necessary for installing components. Separately, free space of 1GB or more is
required in %TMP% or %TEMP% as the working area during installation.
The disk capacity for the database used for components is required separately.
To install the linked product to the same management server, the disk capacity for the linked product is
required separately.
*3 Only the full installation is supported. Server Core installation is not supported.
6.5 Managed Machine (Virtual Base)
You can manage the integrated virtual base as below in Virtual DataCenter Automation.
• VMware
• Hyper-V
• KVM
This chapter describes the virtual environment that can be managed with Virtual DataCenter
Automation.
6.5.1 System Requirements
• System requirements for the VMware-linked environment
For the latest requirements for the VMware-linked environment, see the manuals of products
issued by VMware, and the website below.
http://www.nec.co.jp/vmware/
• System requirements for the Hyper-V environment
For the latest requirements for the Hyper-V environment, see the website below.
http://www.nec.com/en/global/support/index.html
Note
Note that the guest OS listed on the above website differs from the guest OS supported by Virtual DataCenter Automation.
• System requirements for the KVM environment
For the latest requirements for the KVM environment, see the website below.
http://www.nec.co.jp/linux/linux-os/kvm.html
6.5.2 Virtual Machine Base
The virtual machine base and management software required during the virtual environment
management support the following:
Tip
The latest requirements of Virtual DataCenter Automation can be obtained from the following website:
http://www.nec.com/en/global/prod/masterscope/vdcautomation/
• VMware vCenter Server 5.0, 5.1, 5.5, 6.0, 6.5
• VMware ESXi 5.0, 5.1, 5.5, 6.0, 6.5 *1
• Windows Server 2012 R2 Hyper-V / 2016 Hyper-V *2
• Red Hat Enterprise Linux 6.8 KVM
• Red Hat Enterprise Linux 7.3 KVM
Note
On KVM, the following packages and libraries must be installed.
• Packages: redhat-lsb
6.5.3 Managed Guest OS
The following guest OS on the virtual machine base is supported in Virtual DataCenter Automation.
Tip
The latest requirements of Virtual DataCenter Automation can be obtained from the following website:
http://www.nec.com/en/global/prod/masterscope/vdcautomation/
Virtual machine base: VMware ESXi*1*2
Guest OS:
• Windows Server 2008 Standard (x86) SP1 / SP2
• Windows Server 2008 Enterprise (x86) SP1 / SP2
• Windows Server 2008 Standard (x64) R2 / R2 SP1
• Windows Server 2008 Enterprise (x64) R2 / R2 SP1
• Windows Server 2008 Datacenter (x64) R2 / R2 SP1
• Windows Server 2012 Standard / R2
• Windows Server 2012 Datacenter / R2
• Windows Server 2016 Standard
• Windows Server 2016 Datacenter
• Red Hat Enterprise Linux 5 (x86)*3
• Red Hat Enterprise Linux 5 AP (x86)*3
• Red Hat Enterprise Linux 6 (x86)*3
• Red Hat Enterprise Linux 6 (AMD64/EM64T)*3
• Red Hat Enterprise Linux 7 (AMD64/EM64T)*3

Virtual machine base: Windows Server 2012 R2 Hyper-V / 2016 Hyper-V*2*4
Guest OS:
• Windows Server 2008 Standard (x64) R2 / R2 SP1
• Windows Server 2008 Enterprise (x64) R2 / R2 SP1
• Windows Server 2008 Datacenter (x64) R2 / R2 SP1
• Windows Server 2008 Standard (x86, x64) SP1 / SP2
• Windows Server 2008 Enterprise (x86, x64) SP1 / SP2
• Windows Server 2012 Standard / R2
• Windows Server 2012 Datacenter / R2
• Windows Server 2016 Standard
• Windows Server 2016 Datacenter

Virtual machine base: Red Hat Enterprise Linux KVM
Guest OS:
• Red Hat Enterprise Linux 6 (x86)*5
• Red Hat Enterprise Linux 6 (AMD64/EM64T)*5
• Red Hat Enterprise Linux 7 (AMD64/EM64T)*5
*1 ESXi of the free license is not managed.
*2 Only the cluster configuration is supported in Hyper-V.
*1 The support requirements of the VMware guest OS must be met for the supported guest OS. For the
latest support requirements, see the manuals of products issued by VMware.
*2 Free space of 400MB or more is required for installation of the managed machine component. Also,
the following free space is required separately as the working area during installation.
• For Windows, 1GB or more in %TMP% or %TEMP%
• For Linux, 1GB or more in /tmp
*3 See "Appendix C. Managed Guest OS require packages (page 113)". The packages and libraries must
be installed.
*4 The maximum number of virtual CPUs to be supported varies depending on the OS. For details, see
the following website:
http://www.microsoft.com/japan/windowsserver2008/technologies/hyperv-guestos.mspx
*5 These are the guest OSs that can be created with Differential Clone. If support for other OSs is required, please contact us.
Windows Server 2003 and Red Hat Enterprise Linux 4 will be supported in response to an RPQ
request. Please contact us.
6.6 Managed Machine (Physical Machine)
The following physical machine operating environment is supported in Virtual DataCenter
Automation.
Tip
The latest requirements of Virtual DataCenter Automation can be obtained from the following website:
http://www.nec.com/en/global/prod/masterscope/vdcautomation/
Model*1:
• Blade server: SIGMABLADE (B120d, B110d, B120d-h, B120a, B120a-d, B120b, B120b-Lw, B120b-d, B120b-h, 120Bb-6, 120Bb-m6, 120Bb-d6, 140Ba-10, B140a-T), 120Ba-4, 110Ba-e3/-m3, 420Ma
• ECO CENTER*2
• Scalable HA server*2
• Express5800/100 series
• 6200 series fabric interconnect*3
• UCS 5100 series blade server chassis, Cisco UCS B series blade
OS*4:
• Windows Server 2008 Standard (x86) SP1 / SP2*5
• Windows Server 2008 Enterprise (x86) SP1 / SP2*5
• Windows Server 2008 Standard (x64) R2 / R2 SP1*5
• Windows Server 2008 Enterprise (x64) R2 / R2 SP1*5
• Windows Server 2008 Datacenter (x64) R2 / R2 SP1*5
• Windows Server 2012 Standard / R2*5
• Windows Server 2012 Datacenter / R2*5
• Windows Server 2016 Standard*5
• Windows Server 2016 Datacenter*5
• Red Hat Enterprise Linux 5 (x86)
• Red Hat Enterprise Linux 5 (AMD64/EM64T)
• Red Hat Enterprise Linux 5 AP (x86)
• Red Hat Enterprise Linux 5 AP (AMD64/EM64T)
• Red Hat Enterprise Linux 6 (x86)
• Red Hat Enterprise Linux 6 (AMD64/EM64T)
• Red Hat Enterprise Linux 7
Hardware specification:
• Network adapter (compatible with Wake-on-LAN, recommended link speed: 1000Base or higher)
• CPU, memory, disk capacity*6, etc. compatible with the OS and applications to be operated
• When using the out-of-band management function, use a model containing a baseboard management controller (BMC) that is compatible with RMCP or RMCP+.
*1 For details of the Express5800 series support model, see the available device list of MasterScope
DeploymentManager.
Application of the MasterScope DeploymentManager model support module may be required.
*2 For the management function by the out-of-band management controller, only some serial numbers of
Express5800/A1080a will be supported. Some functions will be restricted.
*3 The installed UCS Manager is Version 1.4 or later.
*4 The available management target OS depends on the support OS of the target hardware.
*5 Full installation and Server Core installation are supported.
For the disk replication OS installation function of MasterScope DeploymentManager, only full
installation is supported.
For some models of SIGMABLADE, please note that the Wake-on-LAN is not supported in the event
of Server Core installation.
*6 Free space of 400MB or more is required for installation of the managed machine component. Also,
the following free space is required separately as the working area during installation.
• For Windows, 1GB or more in %TMP% or %TEMP%
• For Linux, 1GB or more in /tmp
Windows Server 2003 and Red Hat Enterprise Linux 4 will be supported in response to an RPQ
request. Please contact us.
6.7 Management Agent
To operate the management agent function, the following system requirements must be met.
CPU: At least Intel Compatible 1GHz
Memory capacity: 32MB or larger
Disk capacity: 200MB or larger*1
OS:
• Windows Server 2008 Standard (x86/x64) SP1 / SP2
• Windows Server 2008 Enterprise (x86/x64) SP1 / SP2
• Windows Server 2008 Standard R2 / R2 SP1
• Windows Server 2008 Enterprise R2 / R2 SP1
• Windows Server 2012 Standard / R2
• Windows Server 2012 Datacenter / R2
• Windows Server 2016 Standard
• Windows Server 2016 Datacenter
Required software: Microsoft Visual C++ 2005 SP1 Redistributable Package (x86)*2
Additional software:
• NEC Storage Manager Ver3 or later*3
• NetApp OnCommand Core 5.0.2 or later*4
*1 Free space of 1GB or more is required separately in %TMP% or %TEMP% during installation of the
management agent.
*2 For how to install, see the Virtual DataCenter Automation Installation Guide.
*3 To manage NEC Storage, installation in the same machine is required.
*4 To collect NetApp performance information, installation to the same machine is required.
6.8 Console
To operate the console function of the global management server, management server, and VM
monitoring server, the following system requirements must be met.
CPU: At least Intel Compatible 2GHz 2 Core
Memory capacity: At least 1GB
Disk capacity: At least 1GB*1
OS:
• Windows Server 2008 (x86) SP1 / SP2
• Windows Server 2008 (x64) R2 / R2 SP1
• Windows Server 2012 / R2
• Windows Server 2016
• Windows 7 (x86) SP1
• Windows 7 (x64) SP1
• Windows 8
Required software: Internet Explorer 7, 8, 9, or 10 (when using the Web monitoring screen)*2
*1 Free space of 1GB or more is required separately in %TMP% or %TEMP% during installation of the console.
*2 When using the tenant network view with Internet Explorer 10, it must be operated in compatibility mode.
1. Open the "F12 Developer Tools" from the "Tools" menu.
2. Change the browser mode.
6.9 ID Management Server
To operate the standard function of Virtual DataCenter Automation, the following system
requirements for the ID management server must be met.
To use the same management server for linked products, the system requirements of linked products
must also be met.
System Requirements
CPU: At least Intel Compatible 2GHz, 2 cores
Memory capacity*1: At least 4GB
Disk capacity*2: At least 4GB
NIC: At least 1Gbps
OS*3:
• Windows Server 2008 Standard (x64) R2 / R2 SP1
• Windows Server 2008 Enterprise (x64) R2 / R2 SP1
• Windows Server 2012 Standard / R2
• Windows Server 2012 Datacenter / R2
• Windows Server 2016 Standard
• Windows Server 2016 Datacenter
Required software:
• Java execution environment JRE7 update 3 (included on the product DVD)
• Apache Tomcat (64bit) Ver7.0.26 (included on the product DVD)
• Any of the following browsers
  - Internet Explorer 8
  - Internet Explorer 9
  - Internet Explorer 10
  - Firefox 12
  - Safari 5
*1 To install a linked product on the same management server, the memory capacity for the linked product is required separately.
*2 To install a linked product on the same management server, the disk capacity for the linked product is required separately.
*3 Only the full installation is supported. Server Core installation is not supported.
6.10 DB Server
To use a DBMS on a server separate from the global management server, management server, and VM monitoring server, and to use the Virtual DataCenter Automation standard function, the following system requirements must be met for the DB server.
Virtual DataCenter Automation follows the SQL Server 2012 or later system requirements when SQL Server is used.
To use the same management server for Virtual DataCenter Automation and linked products, the system requirements of the linked products must also be met.
System Requirements
CPU: At least Intel Compatible 2GHz, 4 cores
Memory capacity*1: At least 4GB
Disk capacity*2: At least 6GB*3
NIC: At least 1Gbps
OS: Based on SQL Server 2012 or later system requirements
Display resolution: 800 x 600 pixels or more
Required software:
• Microsoft SQL Server 2012 or later
• .NET Framework 3.5 Service Pack 1
• .NET Framework 4.0
• Windows PowerShell
*1 This is the recommended value for SQL Server 2012. To secure optimum performance, increase the capacity as the database size grows.
*2 This is the free disk space necessary for installing the minimum necessary SQL Server 2012 components. The necessary free disk space varies depending on the SQL Server 2012 components to be installed.
To install a linked product on the same management server, the disk capacity for the linked product is required separately.
*3 The necessary disk space shown is a standard value. It varies depending on the environment (VMs, network devices, storage devices, etc.) of the monitored VMs.
Disk capacity to store the SQL Server transaction log, data, and log backups is required separately.
6.11 Service Governor
To run the service governor on a server separate from the global management server in a load distribution configuration, and to use the Virtual DataCenter Automation standard function, the following system requirements must be met for the servers on which the service governor is installed.
To use the same management server for Virtual DataCenter Automation and linked products, the system requirements of the linked products must also be met.
System Requirements
CPU: At least Intel Compatible 2GHz, 4 cores
Memory capacity: At least 4GB
Disk capacity: At least 2GB
NIC: At least 1Gbps
OS*1:
• Windows Server 2008 Standard (x64) R2 / R2 SP1
• Windows Server 2008 Enterprise (x64) R2 / R2 SP1
• Windows Server 2012 Standard / R2
• Windows Server 2012 Datacenter / R2
• Windows Server 2016 Standard
• Windows Server 2016 Datacenter
Display resolution: At least 800 x 600 pixels
Required software: Web browser
*1 Only the full installation is supported. Server Core installation is not supported.
6.12 Network Devices
The network automation script templates in Virtual DataCenter Automation support the following network devices and versions. Please inquire about the latest support status.
ProgrammableFlow:
• Network Coordinator
  UNIVERGE PF6800 Network Coordinator (V5.1/V6.0/V6.1/V6.2/V6.3/V7.0/V7.1/V7.2)
• ProgrammableFlow Controller
  UNIVERGE PF6800 (V5.0/V5.1/V6.0/V6.1/V6.2/V6.3/V7.0/V7.1/V7.2)
• ProgrammableFlow Switch
  UNIVERGE PF5200 Series (V5.0/V5.1/V6.0)
  UNIVERGE PF5340 (V6.2/V6.3/V7.1)
  UNIVERGE PF5459 (Ver 7.1)
Layer 2 switch:
• Cisco switch: Cisco IOS 12*1
• SIGMABLADE switch module: BLADE OS CLI or AOS CLI*2
Firewall (Multi Tenant Function)*3: Fortinet FortiGate FortiOS 5.4
SSL-VPN device (Multi Tenant Function)*4: Fortinet FortiGate FortiOS 5.4
Load balancer (Multi Tenant Function):
• A10 Thunder 2.7
• F5 BIG-IP 11.4/12.1
Virtual switch*5:
• VMware
  Virtual Switch for vSphere*6
  vSphere Distributed Switch*6
• Hyper-V
  Hyper-V virtual switch (default switch)
• KVM
  KVM virtual switch (default switch)
Layer 2 switch for physical server*7:
• Devices indicated above for "ProgrammableFlow Switch"
• Devices indicated above for "Layer 2 switch"
*1 Devices that support port-based VLAN and tagged VLAN are targeted.
*2 SCLI is not supported.
*3 Used as a tenant firewall or back-end firewall.
*4 Used with the Multi Tenant Function.
*5 The virtual switch of the virtualization platform is controlled by SigmaSystemCenter.
*6 To apply and synchronize the port group settings of the virtual switch when an ESXi host is added or replaced, use of the distributed virtual switch is recommended.
*7 The switch used for network control of a physical server deployed to a tenant.
6.13 Storage
The storage management software supported in Virtual DataCenter Automation and the storage to be
managed in Virtual DataCenter Automation are listed below.
System Requirements*1
Hardware type:
• iStorage M Series
• iStorage D Series
• iStorage E Series
• iStorage S Series
• EMC VNX Series (Block only)
• NetApp FAS2500 Series
• NetApp FAS8000 Series
Required software:
• iStorage
  - NEC Storage Manager Ver7 or later*2
  - NEC Storage Manager Integration Base Ver7 or later
• EMC VNX
  - Navisphere/Unisphere Manager
  - Navisphere/Unisphere CLI 07.31, 07.32, 07.33*2
• NetApp
  - Data ONTAP 8.0.x (8.0.2 or later), 8.1.x, 8.2.x
*1 For details, see the SigmaSystemCenter First Step Guide.
*2 Installation on the same machine as the management agent is required.
6.14 Distributed Middleware
This section describes the middleware supported by Virtual DataCenter Automation (asset
distribution function).
• Windows *3
Apache: Apache HTTP Server 2.2
Tomcat: Apache Tomcat 7.0
IIS: IIS 7.5*1
WebOTX: WebOTX Application Server Standard V8.4
WebLogic: Oracle WebLogic Server 11gR1 (10.3)
PostgreSQL: PostgreSQL 9.1
MySQL: MySQL Community Server
Oracle: Oracle Database 11g Release 2 (11.2)
SQL Server: SQL Server 2008 R2 SP1 - Express Edition
*1 Supported only when the OS is Windows Server 2008 (x64) R2.
*3 For OS, see "Chapter 6. Operating Environments/System Requirements (page 95)".
• Linux *3
Apache: Apache HTTP Server 2.2
Tomcat: Apache Tomcat 6.0
WebOTX: WebOTX Application Server Standard V8.4
WebLogic: Oracle WebLogic Server 11gR1 (10.3)
PostgreSQL: PostgreSQL 8.4
MySQL: MySQL Server 5.1
Oracle: Oracle Database 11g Release 2 (11.2)
6.15 Monitored Middleware
This section describes the supported versions of the middleware supported by Virtual DataCenter Automation (middleware monitoring).
The following middleware is monitored by Virtual DataCenter Automation.
Platform compatible with a remote host (columns, in order: Oracle Database*1*2, WebLogic Server*1, SQL Server, SAP; see *3 for the legend)
Windows Server 2003 (SP1, SP2) (32bit): √, √, √, √
Windows Server 2003 (SP1, SP2) (x64): √, √, √, √
Windows Server 2003 R2 (SP1, SP2) (32bit): √, √, √, √
Windows Server 2003 R2 (SP1, SP2) (x64): √, √, √, √
Windows Server 2008 (SP1, SP2) (32bit): √, √, √, √
Windows Server 2008 (SP1, SP2) (x64): √, √, √, √
Windows Server 2008 R2 (SP N/A, SP1): √, √, √, √
Windows Server 2012: √, √, √, √
Windows Server 2012 R2: √, √, √, √
Red Hat Enterprise Linux 5 (x86): √, √, -
Red Hat Enterprise Linux 5 (x86_64): √, √, -
Red Hat Enterprise Linux 6 (x86): √, -, -
Red Hat Enterprise Linux 6 (x86_64): √, √, -
Oracle Enterprise Linux 5: -
Oracle Linux 6 (UEK R2) (x86_64): -
*1 When using the Oracle Database and WebLogic Server with a Named User Plus (NUP) license, the count for one user is required for monitoring.
*2 The support status for each Oracle version is separately listed.
*3 √: Supported, Blank: Not supported, -: Out of range according to the product definition
Table 6-2 Platform compatible with the remote host for each Oracle Database version (columns, in order: 10gR2, 11gR1, 11gR2, 12cR1)
Windows Server 2003 (SP1, SP2) (32bit): √, *, √, -
Windows Server 2003 (SP1, SP2) (x64): √, *, √, -
Windows Server 2003 R2 (SP1, SP2) (32bit): √, *, √, -
Windows Server 2003 R2 (SP1, SP2) (x64): √, *, √, -
Windows Server 2008 (SP1, SP2) (32bit): √, *, √, -
Windows Server 2008 (SP1, SP2) (x64): √, *, √, √
Windows Server 2008 R2 (SP N/A, SP1): √, -, √, √
Windows Server 2012: -, -, √, √
Windows Server 2012 R2: -, -, √, √
Red Hat Enterprise Linux 5 (x86): √, *, √, -
Red Hat Enterprise Linux 5 (x86_64): √, *, √, √
Red Hat Enterprise Linux 6 (x86): -, -, √, -
Red Hat Enterprise Linux 6 (x86_64): -, -, √, √
Oracle Enterprise Linux 5: -
Oracle Linux 6 (UEK R2) (x86_64): -, -
Table 6-3 Platform compatible with the remote host for each application version
Oracle Database
  10gR1: ×
  10gR2: √
  11gR1: * (Monitoring by 11gR1 clients is not available. Monitoring by clients of other versions that can be connected to an 11gR1 Oracle DB server is available.)
  11gR2: √
  12cR1: √
WebLogic
  9.2 / 10.0 / 10.3 / 11gR1: √ (Java 1.5 and Java 6 are available.)
  12c: √ (Java 6 and Java 7 are available.)
SQL Server
  2005: √
  2008: √
  2008R2: √
  2012: √
  (The MW library is not required for SQL Server monitoring.)
SAP
  7.0 / 7.3: √
Appendix A. Revision History
• First edition (April, 2017): Newly created
Appendix B. Manual System
• For Virtual DataCenter Automation, information on the product overview, installation, settings,
operation, and maintenance is included in the following manuals. The role of each manual is
described below.
- Virtual DataCenter Automation First Step Guide
This document is intended for Virtual DataCenter Automation or Network Automation
users and includes details of the product overview, system design method, operation
environment, and others.
- Virtual DataCenter Automation Installation Guide, Network Automation Installation Guide
This document is intended for system administrators and includes details of how to perform new installation, upgrade installation, and uninstallation of Virtual DataCenter Automation or Network Automation.
- Virtual DataCenter Automation Configuration Guide
This document is intended for system administrators in charge of overall post-installation settings and the subsequent operation and maintenance. The procedure from the post-installation settings to operation is provided based on the actual workflow. Maintenance operations are also described.
- Virtual DataCenter Automation Cluster Configuration Guide, Network Automation Cluster Configuration Guide
This document is intended for system administrators who configure a cluster system for Virtual DataCenter Automation or Network Automation, and includes details of how to configure it.
- Virtual DataCenter Automation API Reference
This document includes details of the API provided to the service portal by Virtual
DataCenter Automation or Network Automation.
- Virtual DataCenter Automation Portal Installation Guide, Virtual DataCenter Automation Portal Operations Guide
These documents are intended for system administrators who install and operate Virtual DataCenter Automation Portal.
• For Virtual DataCenter Automation Standard Edition, information on the product overview,
installation, settings, operation, and maintenance is included in the following manuals. The role
of each manual is described below. For the configuration that uses the Virtual DataCenter
Automation Standard Edition Topology Template Orchestrator Option, refer to the "with
Topology Template Orchestrator" manuals.
- Virtual DataCenter Automation Standard Edition Setup Guide
This document is intended for system administrators and includes details of how to install, configure the initial settings of, and uninstall Virtual DataCenter Automation Standard Edition.
- Virtual DataCenter Automation Standard Edition Portal User Manual (Install)
This document is intended for system administrators and includes details of how to install
and uninstall Virtual DataCenter Automation Standard Edition portal.
- Virtual DataCenter Automation Standard Edition Portal User Manual (Resource Management)
This document is intended for system administrators who use the Virtual DataCenter Automation Portal Resource Management function.
- Virtual DataCenter Automation Standard Edition Portal User Manual (Monitoring)
This document is intended for system administrators who use the Virtual DataCenter Automation Portal Monitoring function.
- Virtual DataCenter Automation Standard Edition Topology Template Orchestrator Option
User's Guide
This document is intended for system administrators and includes details of how to install
and uninstall Virtual DataCenter Automation Standard Edition Topology Template
Orchestrator Option.
Tip
Contact a sales representative for the latest edition of any Virtual DataCenter Automation manual.
Appendix C. Managed Guest OS require
packages
The following packages and libraries must be installed (* indicates a numeric value). A sketch for checking the installed packages on a managed guest OS follows the lists.
Package
RHEL 5
bc
compat-libstdc++-33 (32bit version)
e2fsprogs-libs (32bit version)
glibc (32bit version)
libgcc (32bit version)
ncompress
ncurses (32bit version)
net-tools
procps
redhat-lsb
rpm-build
sysstat (either of 5.0.5, 6.0.2, 7.0.0, or 7.0.2)
openssh
openssh-server
openssh-clients
openssl
libpthread.so.0
libc.so.*
ld-linux.so.2
sg3_utils
RHEL 6
bc
compat-libstdc++-33 (32bit version)
glibc
libgcc (32bit version)
libuuid (32bit version)
ncompress
ncurses-libs (32bit version)
redhat-lsb
rpm-build
net-tools
sysstat (9.0.4)
procps
openssh
openssh-server
openssh-clients
openssl
libpthread.so.0
libc.so.*
ld-linux.so.2
sg3_utils
RHEL 7
bc
compat-libstdc++-33 (32bit version)
glibc
libgcc (32bit version)
libuuid (32bit version)
ncompress
ncurses-libs (32bit version)
redhat-lsb
rpm-build
sysstat (10.1.5)
procps-ng
iproute
openssh
openssh-server
openssh-clients
openssl
libpthread.so.0
libc.so.*
ld-linux.so.2
sg3_utils
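The package lists above can be checked on a managed guest OS before registration. The following is a minimal sketch, not part of the product, that queries the RHEL 7 package names listed in this appendix with "rpm -q"; adjust the list for RHEL 5/6 and for 32bit package variants, and verify versioned packages such as sysstat separately.

# Minimal installed-package check for the RHEL 7 list above (sketch).
import subprocess

REQUIRED = [
    "bc", "compat-libstdc++-33", "glibc", "libgcc", "libuuid", "ncompress",
    "ncurses-libs", "redhat-lsb", "rpm-build", "sysstat", "procps-ng",
    "iproute", "openssh", "openssh-server", "openssh-clients", "openssl",
    "sg3_utils",
]

missing = []
for pkg in REQUIRED:
    # "rpm -q <name>" exits with a non-zero code when the package is not installed.
    result = subprocess.run(["rpm", "-q", pkg],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    if result.returncode != 0:
        missing.append(pkg)

if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All listed packages are installed.")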
Appendix D. License Information
This product partially includes open-source software. For details of the licensing conditions of the software, please refer to the files included in the following folder. The source code is released based on the LGPL. Please make inquiries if replication, alteration, or distribution of the open-source software is desired.
<Install DVD>:\oss_license
• PXE Software Copyright (C) 1997 - 2000 Intel Corporation.
• This product includes JRE (Java Runtime Environment) distributed by Oracle Corporation at no
charge. You must agree to the licensing conditions for use. For details of the copyright or
property, refer to the following LICENSE files.
<Folder in which JRE is installed>:\LICENSE
• Some icons used in this program are based on Silk Icons released by Mark James under a
Creative Commons Attribution 2.5 License. Visit http://www.famfamfam.com/lab/icons/silk/ for
more details.
• This product includes software developed by Routrek Networks, Inc.
Glossary
■ BMC
An abbreviation of "Baseboard Management Controller".
■ Business VLAN
The VLAN used for the production traffic of virtual machines.
■ CLARiX
An EMC storage product.
■ CSV (Cluster Shared Volumes)
A file system, introduced for Hyper-V in Windows Server 2008 R2, that can be accessed simultaneously from multiple servers. Recommended for Live Migration.
■ Console
The console is connected to the manager function to browse the information managed by the manager function and to control the managed machines. There are three types: the global management server console, the management server console, and the VM monitoring server console. Also called "Viewer" or "SVC".
■ DataCenter
Bundles the virtual machine servers. Corresponds to a DataCenter of vCenter Server when managing a vCenter Server environment. A vCenter Server cluster is treated in the same way as a DataCenter in Virtual DataCenter Automation. To manage a Hyper-V cluster environment, only one DataCenter is created; no addition or deletion is allowed.
■ Data center administrator
The person or service provider organization managing the overall services from the standpoint of the
service provider. Manages (configures, adds, responds to a failure) the hardware resources utilized in
the cloud service. Lends the managed resources to tenants as a service.
■ Data Transfer VLAN
Prepare one per tenant. The back-end firewall connects the data transfer VLAN and operation
management LAN to provide a routing function among LANs.
■ DHCP server
DHCP is an abbreviation of "Dynamic Host Configuration Protocol". In the network, the DHCP
server is equipped with a function dynamically allocating an IP address to the computer. Responding
to the request from DHCP client, DHCP server allocates information previously prepared such as IP
address, subnet mask, or domain name.
■ Differential Clone
Creates a virtual machine based on the basic image created from the master VM. The virtual machine
created by Differential Clone retains only the differential information from the basic image.
■ Disk Clone
Creates a virtual machine by directly copying the basic image created from the master VM.
■ Disk volume
In Virtual DataCenter Automation, refers to the logical disk consisting of multiple physical disks and
recognized as a single disk from OS. Known as "LD" in NEC Storage, and "logical disk" in EMC
storage.
■ DPM
An abbreviation of "DeploymentManager". Distributes/updates OS, applications, and software
(patches, etc.) to the machine to be managed, starts/stops the machine responding to the instructions
from SystemProvisioning.
■ DPM client
DPM component. Installed to the DPM machine to be managed for management using DPM.
■ DPM command line
DPM component. Enables the status and processing of the DPM managed machines to be checked by command line entry.
■ DPM server
DPM component. Manages the DPM machine to be managed. Processes the DPM machine to be
managed responding to the instructions from the DPM Web console.
■ ESMPRO/ServerManager, ESMPRO/ServerAgent
Machine management software attached with the Express5800 Series as standard. When managing a
physical machine, Virtual DataCenter Automation monitors it via ESMPRO/ServerManager.
■ ESX
VMware product enabling the virtual machine.
■ ESXi
VMware product enabling the virtual machine.
■ FASxxxx Series
A NetApp storage product.
■ Full Clone
Creates a virtual machine based on the standard template of the virtualization platform product created from the master VM.
■ Global Management Server
Server in which the components necessary for the manager function (described in "2.3.2 Installed
Functions (page 33)") are installed. Also called GM.
■ Global Object
A global object is a variable that can be shared between scenarios. Using a global object enables the
transfer of information and flow synchronization between scenarios.
■ HBA
An abbreviation of "Host Bus Adapter". The interface card to connect the storage devices. Including
the Fibre Channel controller.
■ HW Profile Clone
Creates a vacant VM based on HW Profile information and restores the basic image using the DPM
function to create a virtual machine.
■ Hyper-V
Virtualization technology owned by Microsoft. Embedded in Windows Server 2008/R2 as standard.
■ Hyper-V cluster
Clustered Hyper-V. Virtual DataCenter Automation only supports this configuration in Windows
Server 2008 R2.
■ Hyper-V manager
Microsoft's standard Hyper-V management console.
■ IaaS
A service configuration providing an environment, including the OS, middleware, and applications, that supports the use of any software on a virtual machine. There are two cases, depending on how software is provided to IaaS users: the software is prepared by the IaaS service side, or by the IaaS users themselves.
■ IaaS provider
A provider managing the data center or operations generally using Virtual DataCenter Automation.
Ensures resources supplied according to the demand forecast and allocates them to tenants.
■ IaaS reseller
A sales agency or reseller who sells the resources of the virtual system. Ensures a sub-pool from IaaS
providers and sells the resources to tenant administrators.
■ IaaS user
A person or organization utilizing the IaaS service. Includes the administrators, users, and operators
of the provided virtual computer. In Virtual DataCenter Automation, included in tenant
administrators.
■ IIS
An abbreviation of "Internet Information Services". Software for the Internet server provided by
Microsoft.
■ Image builder
DPM tool. Creates an image file such as OS, and registers it to the DPM server.
■ Integration services
A component installed on virtual machines on Hyper-V. It improves performance and makes additional functions available.
■ IPMI
An abbreviation of "Intelligent Platform Management Interface". Provides an interface to obtain
sensor information, power operation and the device logs for the device.
■ Machine
The generic name of the physical/virtual machines that can be managed with Virtual DataCenter
Automation.
■ Maintenance mode
A mode used to ignore failure reports, for example during machine maintenance work. Any machine failure that occurs while maintenance mode is set is not restored based on the policy.
■ Master machine
By configuring a machine as a creation source and cloning its machine image to other machines, multiple machines with the same configuration can be created. The machine configured as this creation source is called the master machine.
■ Master VM
A virtual machine to be the creation source of the template used for the virtual machine creation.
■ Management server
The server on which the component required for the manager function (described in "2.3.2 Installed
Functions (page 33)") are installed. Also called MoM.
■ Managed machine
The machine to be managed in Virtual DataCenter Automation.
■ Management VLAN
The VLAN used for managing the virtual machines of tenants by IaaS providers or Virtual
DataCenter Automation.
■ MSFC (Microsoft Failover Cluster)
A cluster function included in Microsoft Windows Server Enterprise Edition or later. Required for
Live Migration of Hyper-V virtual machine.
■ Migration
Migrates the virtual machine stored in the shared disk to another virtual machine server. When the
power of the virtual machine is on, live migration of the machine is performed while it keeps
operating (Hot Migration). When the power of the virtual machine is off, the machine is migrated
with the power off (Cold Migration). Suspending and migrating the virtual machine with the power
on is Quick Migration.
■ NAS
An abbreviation of "Network Attached Storage". A storage device used as a file server.
■ NEC Storage
NEC storage product.
■ NEC Storage Manager
The name of NEC storage management software.
■ Network Manager
Generic name for Network Manager products (network operation management software).
■ On-premises cloud
Cloud configuration to set devices in the user companies (on-premises).
■ OOB
An abbreviation of "Out-of-Band". A management method to directly manage and operate hardware
instead of communication with software operated on hardware.
■ Operation
Using the SigmaSystemCenter to allocate machines to hosts and register them to a group.
■ Operation group
The SigmaSystemCenter manages the machines in group units during operation. Group management
can reduce the work load of machine management and operation cost. A group of the machines used
for the same purpose is known as an operation group. The SigmaSystemCenter manages the
machines as resources. With the [Resource] view of the Web console, the group can be created to
classify and display the machines to be managed. This group is the "resource group".
■ Operation Management Appliance
Provides monitoring function and software repository function to tenant administrators.
■ Operation Management Appliance Template
To assign the operation management appliance machines, templates of virtual and physical machines are used. These templates are called "Operation Management Appliance Templates".
■ Operation Management Appliance Machine
Virtual and physical machines to which the operation management appliance is installed.
■ Operation Management Appliance Master
Virtual and physical machines which are the sources of operation management appliance templates.
■ Operation Management LAN
Connects portal server, back-end Firewall, servers accommodating VM, and operation management
server of Virtual DataCenter Automation system.
■ Orchestration
An architecture layer capable of managing one cloud platform. It manages the cloud platform in response to requests from the service portal and to events that occur in the system. Provisioning, appropriate allocation within the data center, and operation automation are included in orchestration.
■ PET
An abbreviation of "Platform Event Trap". Directly reports the occurrence of events in BIOS or
hardware from BMC, etc. using an SNMP trap.
■ Physical machine
Generic name of substantial hardware machines. Includes general machines and virtual machine
servers in the physical machine.
■ Policy
Settings for the restoration process in the event of a failure, for example which process is executed automatically when a machine failure occurs. In the SigmaSystemCenter, the restoration process can be set for machine failures detected by the virtualization platform or management software such as ESMPRO/ServerManager or vCenter Server, by the Out-of-Band Management function, and by SystemMonitor Performance Monitoring.
■ Private cloud
A configuration in which companies configure a cloud computing system only for their own use, and
provides departments in the companies or group companies with the cloud service.
■ Primary NIC
The NIC to be connected with the network to manage the machines.
■ Provisioning
An architecture layer to provide a virtualized resource pool and the management function of the
physical/virtual machines. Including physical/virtual server management functions, storage/network
management functions (as a resource pool).
■ Public cloud
A business configuration to secure/release resources freely by customers based on vast resources to
be owned.
■ PXE boot
An abbreviation of "Preboot eXecution Environment". A BIOS function to start the machine or
install the OS using the network. Used for machine detection and software distribution in DPM.
■ RDM
An abbreviation of "Raw Device Mapping". A function enabling the virtual machine to directly
access to the disk volume by bypassing the virtualization layer of the disk.
■ Resource pool
Consists of the physical totals of the storage, CPU, and memory resources of the multiple VM servers to be managed. Sub-pools can be extracted from the resource pool as needed.
■ RMCP/RMCP+
An abbreviation of "Remote Management Control Protocol". A protocol to remotely execute IPMI
instructions via the network. Uses UDP.
■ Root resource pool
Same as the resource pool.
■ SAN
An abbreviation of "Storage Area Network". Sets the network only for the storage and provides
machines with storage.
■ Scenario control/Scenario
Scenario control is a function in the SystemManager G component. It is used to execute workflows.
Scenario is the name of a workflow defined by the scenario control. In Virtual DataCenter
Automation, workflows for typical operations are provided as scenarios in the scenario control
function.
■ Service model
The resource pool line up for each quality provided by service providers. In the service model, the
resource pool is classified into "Gold", "Silver" and "Bronze" for each quality.
■ Service portal
A function that cooperates with vDC Automation through the interface provided by the service governor. Virtual DataCenter Automation Portal corresponds to this. In addition to the provided portal, an original service portal can be implemented based on the published API.
■ Service provider
Manages the resources of cloud and tenants and provides them as a service.
■ Shared disk
The disk volume that can be shared by multiple machines.
■ SLA
An abbreviation of "Service Level Agreement". An agreement on service quality made between the
service providers and tenant/virtual system administrators. Service quality includes not only the
system operating rate but also guidelines for security measures, internal information reports, and
inquiry support processing, etc.
■ SNMP Trap
Communication in SNMP (Simple Network Management Protocol). SNMP agents notify managers
of events.
■ Software Repository
This is a component to manage (registration, deletion, and group management) the software (VMs,
middleware, patches, etc.) as templates so as to install such software in the managed server.
■ SQL Server
Management software to configure and operate the relational database provided by Microsoft. Virtual
DataCenter Automation uses SQL Server as the database to store the system configuration
information.
■ Sub-pool
Consists of the upper storage limits that can be allocated to the virtual machine, CPU, and memory
resource. A sub-pool can be created by extracting from the resource pool. However, a sub-pool with a
capacity exceeding that of the resource pool can be created by overcommitting.
■ Sysprep
A Microsoft tool used to prepare (generalize) a Windows OS image so that it can be duplicated to other machines.
■ SystemMonitor Performance Monitoring
A SigmaSystemCenter component to monitor machine resource usage.
■ SystemProvisioning
SigmaSystemCenter core component. Configures the machine to be managed, manages the
configuration information, changes the configuration, and executes autonomous restoration from a
machine failure, etc.
■ Task scheduler
Automatic execution utility of the program prepared as standard in the Windows OS. Task scheduler
enables the set program to be automatically executed at the set time.
■ Tenant
The unit used for borrowing the computing resources from service providers as a service. Charges
will be incurred to this unit.
■ Tenant administrator
The person or organization managing IaaS users under a tenant and acting as the window for DC
administrators as a representative of tenants. Manages (creates or deletes) IaaS users who actually
use the resource. Manages the resource usage within a tenant and pays the service providers
according to the management result.
■ Tenant VLAN
The VLAN used for managing the virtual machines of tenants by tenant administrators.
■ vCenter Server
A VMware product for integrated management of multiple ESXs and of the virtual machine
configured on them. Also used as a generic name including vCenter Server in this document.
■ Virtual system
A system consisting of VMs combined with the provided network and storage resources, managed as one unit. For IaaS users, the virtual system is the unit defining the scope of authority for operation and reference. IaaS users can add resources to or delete resources from the virtual system if they have the authority. Resource usage within the virtual system can be managed.
■ vSphere Client
A VMware product that provides a user interface for creating, managing, and monitoring virtual machines and the resources and hosts of the virtual machines.
■ VLAN
Technology capable of configuring the logical network separately from the physical network
configuration, and dividing the network into multiple broadcast domains.
■ VM
An abbreviation of "Virtual Machine". Means the same as the virtual machine listed in this glossary.
Refer to the term "Virtual machine".
■ VMFS
An abbreviation of "Virtual Machine File System". In Virtual DataCenter Automation, the term
VMFS is also used to refer to the VMFS volume, corresponding to Datastores items in the
management screen of the Virtual Infrastructure Client. The VMFS volume houses the virtual
machine disk, etc. of the virtual machine.
■ VMS
An abbreviation of "Virtual Machine Server". Means the same as the virtual machine server listed in
this glossary. Refer to the term "Virtual machine server".
■ VM Image
File group constituting a VM (virtual machine).
■ VM Import
Uses a VM image as a virtual machine.
■ VM Monitoring Server
Server in which the components necessary for the manager function (described in "2.3.2 Installed
Functions (page 33)") are installed. Also called as RM.
■ VM Server
Means the same as the virtual machine server listed in this glossary. Refer to the term "Virtual
machine server".
■ VNX
An EMC storage product.
■ WOL (Wake On LAN)
Turns the power of the computer connected by LAN on via a network from other computers. Used
when turning the power on remotely in DPM.
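The magic packet used by Wake-on-LAN can be illustrated with a short sketch. The following is a minimal example, not part of the product, of the packet format (6 bytes of 0xFF followed by the target MAC address repeated 16 times, sent as a UDP broadcast); the MAC address shown is a placeholder.

# Minimal Wake-on-LAN magic packet sender (sketch).
import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Build and broadcast a magic packet for the given MAC address."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    payload = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, (broadcast, port))

send_magic_packet("00:11:22:33:44:55")  # placeholder MAC address of the managed machine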
■ Workflow
Defines the detailed process order required to achieve a purpose (execution of an instruction). Used when automating operations. Administrator workflows (approval flows) are not included.
(Example) In VM creation, a series of tasks such as adding a charge, securing resources, and provisioning, triggered by an instruction from the self-service portal.
■ WWN
An abbreviation of "World Wide Name". An identification code uniquely allocated to a Host Bus Adapter; it serves as a unique identifier in a SAN.
MasterScope Virtual DataCenter Automation v4.0
First Step Guide
April, 2017 1st Edition
NEC Corporation
©NEC Corporation 2012-2017