Building a Scalable Microsoft® Hyper-V
Architecture on the Hitachi Universal
Storage Platform® Family
Reference Architecture Guide
By Rick Andersen
April 2009
Summary
Increasingly, organizations are turning to server virtualization because of the important business and IT
benefits it provides. Organizations that virtualize their server environments are able to consolidate their
physical IT infrastructures and improve the overall efficiency, resilience and agility of their environments. Doing
so has potentially significant implications from a business perspective, allowing organizations to reduce capital
and operating costs as well as reduce their data center and carbon footprints.
However, as virtualized server deployments scale, capacity and management can become increasingly
problematic, keeping organizations from meeting service level agreements (SLAs) and cost-cutting objectives
as well as minimizing the very benefits gained from virtualizing a server environment. That’s why selecting the
right storage system to support virtualized environments — and deploying it properly — is hugely important.
In Microsoft® Hyper-V environments, virtual machines (VMs) run as guests on top of physical host servers. In
all virtualized environments, the host server can be a single point of failure if it loses power or has a hardware
or software failure. If the Hyper-V host server fails, all VMs running on the host server are out of service.
Creating a Hyper-V failover cluster with multiple host servers in a shared storage environment alleviates this
problem.
The Hitachi Universal Storage Platform family is best-in-class for Windows Server 2008 Hyper-V
environments. The Universal Storage Platform® V is the most powerful and intelligent enterprise storage
system in the industry. The Universal Storage Platform V and the smaller footprint Universal Storage Platform
VM are based on the Universal Star Network™ architecture: a fourth-generation implementation of the
massively parallel crossbar switch architecture.
This document defines a reference architecture that is highly scalable by leveraging the features and functions
of the Hitachi Universal Storage Platform family and Hyper-V failover clustering capability. The reference
architecture allows scaling of the environment by adding nodes in the Hyper-V failover cluster to support a
growing virtual machine workload.
Feedback
Hitachi Data Systems welcomes your feedback. Please share your thoughts by sending an email message to
SolutionLab@hds.com. Be sure to include the title of this white paper in your email message.
Table of Contents
Solution Components
Tested Deployment
    Hitachi Universal Storage Platform VM
    Servers
    Storage Area Network
    Software
Storage Deployment Considerations
    Standard Windows Volumes
    Cluster Shared Volumes
Deploying the Solution
    Configuring the Hyper-V Servers for Clustering
    Configuring the Storage Area Network for Hyper-V Failover Clustering
    Configuring Host Storage Groups
    Assigning World Wide Names to Host Groups
    Creating a Dynamic Provisioning Pool
    Creating a Dynamic Provisioning LU
    Associating V-VOL Groups with a Dynamic Provisioning Pool
    Assigning LUs to a Host Storage Group
Best Practices for Scaling Your Environment
    Number of Virtual Machines per Standard LU or CSV
    Scaling Dynamic Provisioning Pools
Scaling Hyper-V Cluster Deployments
    Adding Nodes to Hyper-V Failover Cluster
    Adding VMs to the Hyper-V Failover Cluster
Hyper-V Cluster Management
    Changing a Virtual Machine's Storage Configuration
    Validating Storage on a Hyper-V Failover Cluster
Lab Validated Results and Specifications
Hyper-V Failover Cluster Storage Management
    Windows Performance Monitor
    Hitachi Performance Monitor Feature
    Hitachi Tuning Manager Software
Microsoft Virtual Machine Manager
Conclusion

Building a Scalable Microsoft® Hyper-V Architecture on the Hitachi Universal Storage Platform® Family
Reference Architecture Guide
Increasingly, organizations are turning to virtualization to achieve several important objectives:
• Increase return on investment by eliminating underutilization of hardware and reducing administrative
overhead
• Decrease total cost of operation by reducing data center physical space requirements and energy usage
• Improve operational efficiencies by increasing availability and performance of critical applications and
simplifying deployment and migration of those applications
In addition, virtualization is a key tool companies use to improve responsiveness to the constantly changing
business climate and to become more environmentally friendly.
While virtualization offers many benefits, it also brings risks that must be mitigated. The move to virtualization
requires that IT administrators adopt a new way of thinking about storage infrastructure and application
deployment. Improper deployment of storage and applications can have catastrophic consequences due to the
highly consolidated nature of virtualized environments.
In a Microsoft® Hyper-V based infrastructure, virtual machines (VMs) run as guests on top of the physical host
server. In all virtualized environments, the host server can be a single point of failure if it loses power or has a
hardware or software failure. If the Hyper-V host server fails, all VMs running on the host server are out of
service. Creating a Hyper-V failover cluster with multiple host servers in a shared storage environment
alleviates this problem.
The Hitachi Universal Storage Platform family is best-in-class for Windows Server 2008 Hyper-V environments.
The Universal Storage Platform® V is the most powerful and intelligent enterprise storage system in the
industry. The Universal Storage Platform V and the smaller footprint Universal Storage Platform VM are based
on the Universal Star Network™ architecture: a fourth-generation implementation of the massively parallel
crossbar switch architecture. Universal Storage Platform V and Universal Storage Platform VM provide unique
controller-based virtualization that aggregates all storage, including internal and externally attached Hitachi
branded and third-party storage, to create a common pool of capacity. Reusable storage services, including
thin provisioning, host port virtualization and nondisruptive heterogeneous data migration, can then access all
storage in the virtual pool.
This white paper describes how to use the Microsoft Windows Server 2008 failover clustering feature in
conjunction with the Universal Storage Platform to provide high availability to the Hyper-V host servers or
parent partitions and the underlying guest virtual machines. It defines a reference architecture that is highly
scalable by leveraging the features and functions of the Universal Storage Platform V or Hitachi Universal
Storage Platform VM storage systems and Hyper-V failover clustering. The reference architecture allows
scaling of the environment by adding nodes in the Hyper-V failover cluster to support a growing virtual machine
workload.
This white paper also provides guidance on how to configure both the Hyper-V environment and a Hitachi
Universal Storage Platform family storage system to achieve the best performance, scalability, and availability.
It is intended for use by IT administrators who are planning storage for a Hyper-V failover clustering
deployment. It assumes that the reader has Windows and storage administration skills.
Solution Components
This white paper describes a highly scalable and highly available Hyper-V Failover Cluster reference
architecture using the Hitachi Universal Storage Platform. A Hyper-V failover cluster protects against downtime
for important applications or services that must be available at all times. Hyper-V failover clusters help ensure
high availability for end users by minimizing the amount of time that scheduled or unscheduled outages
interrupt end user access.
Hyper-V failover clusters also enable high scalability by allowing administrators to dynamically add resources to
enhance performance and availability. Clusters by definition are scalable, and with Hyper-V failover clustering,
you can add up to 16 nodes based on available resources. This provides more nodes to which services can
failover in the event of a failure or scheduled downtime.
To allow for a highly available, highly scalable Hyper-V failover cluster, it is important that the storage used is
also highly available and scalable. With Hitachi Dynamic Provisioning software for the Hitachi Universal
Storage Platform family, adding cluster nodes and the storage required to support the deployment of the
underlying virtual machines is greatly simplified. This solution provides guidelines and
recommendations for configuring the Universal Storage Platform storage system to provide for availability and
scalability in a Hyper-V failover cluster environment.
This solution supports up to 16 Hyper-V host nodes in a failover cluster, with high availability achieved with
redundant physical paths enabled via multiple host bus adapters (HBAs) from the servers, proper zoning within
the storage fabric and storage system, and the use of multipathing software to allow for continued operation in
the event of a hardware component failure.
The Hitachi Universal Storage Platform family allows for rapid provisioning of storage to support scaling up of
additional Hyper-V nodes and virtual machines within the cluster. Figure 1 illustrates the highly available and
scalable reference architecture using the Hitachi Universal Storage Platform VM as the storage platform. Note
that although this reference architecture was tested on a Universal Storage Platform VM, it can be deployed on
a Universal Storage Platform V as well.
Figure 1. Solution Topology
In this configuration, each virtual machine is hosted on its own logical unit (LU) within a Dynamic Provisioning
pool. Allocating an individual LU or multiple LUs for each virtual machine allows for the use of quick or live
migration between nodes in the Hyper-V failover cluster.
This reference architecture deployed Windows 2008 R2 guest VMs on the Hyper-V hosts using a web server
profile as defined by industry standards.
When using Hitachi Dynamic Provisioning software for hosting guest virtual machines, a Dynamic Provisioning
pool hosts the guest virtual machine VHDs on LUs configured to use storage from the pool. As the requirement
for the number of guest virtual machines increases, capacity can be added to the Dynamic Provisioning pools
dynamically. Adding RAID groups to the pool not only increases the storage capacity available for deploying
guest virtual machines, but also provides additional I/O processing capabilities due to the increased number of
spindles in the pool.
Configuration details for using Hitachi Dynamic Provisioning software, an intelligent storage technology that
provides advanced wide-striping and thin-provisioning capabilities, are included in this white paper.
Incorporating Hitachi Dynamic Provisioning software into the architecture leverages virtual machines, storage
pools and virtual volumes as the fundamental building blocks.
LUs from the Universal Storage Platform VM are allocated to the Hyper-V hosts and formatted as volumes in
which virtual hard disks (VHDs) can be created. The VHDs are presented to the Windows Server 2008 guest
OS, partitioned and used as containers to house the virtual machine OS and paging files. Based on the I/O and
capacity requirements of the virtual machine, application files can also exist within this VHD or they can be
placed in separate VHDs stored on additional LUs. For example, if a virtual machine hosts an application with a
large number of LUs or very
performance-sensitive LUs, such as Exchange or SQL databases, Hitachi Data Systems recommends using
separate LUs for each VHD or the use of pass-through disks.
For this architecture, multiple web servers were deployed across multiple nodes in the Hyper-V failover cluster.
The industry standard Iometer profile was used to generate web server traffic. The I/O definition for this profile
consists of random reads of various block sizes as defined in the “Iometer Specifications and Results” section
in this paper. The web server I/O profile was originally distributed by Intel, the author of Iometer, and used by
Microsoft as a typical web server profile.
Hitachi Data Systems testing shows that the storage building block can support up to eight typical web servers
in a Dynamic Provisioning pool, consisting of two RAID-5 (3D+1P) parity groups, with each web server typically
able to sustain around 1000 IOPS. For more information about drive size and performance, see Table 1. This
building block can be scaled by adding additional parity groups to an existing Dynamic Provisioning pool, or by
creating a new Dynamic Provisioning pool when hosting additional virtual machines in the cluster. This
configuration meets Microsoft’s 20 millisecond I/O response time requirement to the disks that host the web
server virtual machines. Each web server VM is configured with four virtual CPUs, 1.5GB of memory and the
underlying storage configuration shown in Figure 2.
Figure 2. Single Storage Building Block
Figure 3 illustrates adding a second building block of storage to host six more web server virtual machines for a
total of 12 typical web servers. Additional RAID-5 (3D+1P) groups are added to the existing Dynamic
Provisioning pool to contain the additional virtual machine VHDs.
Figure 3. Additional Storage Building Block
Tested Deployment
The following sections describe the key components used in this solution.
Hitachi Universal Storage Platform VM
Hitachi Data Systems testing used a Hitachi Universal Storage Platform VM storage system, which provides a
reliable, flexible, scalable and cost effective storage system for the Microsoft Hyper-V scalable architecture
described in this white paper. The Hitachi Universal Storage Platform VM brings performance and ease of
management to organizations of all sizes that are dealing with an increasing number of virtualized business-critical applications. It is ideal for a failover clustering environment that demands high availability, scalability
and ease-of-use.
The Hitachi Universal Storage Platform VM with Hitachi Dynamic Provisioning software supports both internal
and external virtualized storage, simplifies storage administration and improves performance to help reduce
overall power and cooling costs.
The Hitachi Universal Storage Platform provides end-to-end secure virtualization for Hyper-V infrastructure
environments. With the ability to securely partition port, cache and disk resources, and to mask the complexity
of a multivendor storage infrastructure, the Hitachi Universal Storage Platform VM is an ideal complement to a
Hyper-V environment. With up to 1024 virtual ports for each physical Fibre Channel port, the Hitachi Universal
Storage Platform VM provides the connectivity to support large Hyper-V failover clusters.
Table 1 lists the configuration specifications for the Universal Storage Platform VM deployed in this reference
architecture.
Table 1. Deployed Storage System Configuration

Component                              Details
Storage system                         Hitachi Universal Storage Platform VM
Microcode level                        60-06-10
RAID group type                        RAID-5 (3D+1P)
Cache memory                           128GB
Drive capacity                         300GB
Drive type                             Fibre Channel 15K RPM
LU size                                100GB
Number of Dynamic Provisioning pools   1
As the number of guest virtual machines being deployed grows, you can scale the Hyper-V failover cluster
environment by increasing the amount of storage allocated. Table 2 lists the storage used in Hitachi Data
Systems labs for deploying six and 12 web server guest VMs.
Table 2. Deployed Scaled Configuration Specifications

Item                                                 6 Virtual Machines   12 Virtual Machines
Number of ports used                                 2                    4
Number of RAID groups in Dynamic Provisioning pool   2                    4
Number of drives                                     8                    16
Number of VHD LUs in Dynamic Provisioning pool       6                    12
Servers
Table 3 lists the servers used in this clustered Hyper-V solution.
Table 3. Deployed Servers

Quantity   Server Make and Model   Role                                                                  Memory and Processor
16         Dell 2950               Hyper-V host server                                                   12GB memory, 4 x dual-core AMD processors
1          HP DL385                Domain controller and DNS                                             8GB memory, 2 x dual-core AMD processors
1          Dell PowerEdge 750      Management server for Hitachi Storage Navigator Modular 2 software    2GB memory, 2 x Intel Xeon processors
Servers must meet specification requirements for the Hyper-V roles they are hosting. For more information, see
the System Requirements page on Microsoft’s Hyper-V 2008 R2 web site.
Storage Area Network
For this solution, Hitachi Data Systems connected the Hyper-V servers and the Hitachi Universal Storage
Platform VM through an enterprise-class director. Another option is to use two Fibre Channel switches. Either
of these options provides high availability and redundancy.
In addition, Hitachi Data Systems configured two redundant paths from each Hyper-V host to the Universal
Storage Platform VM. Each Hyper-V host had two HBAs configured for high availability. Microsoft’s MPIO
software provided a round-robin load balancing algorithm that automatically selects a path by rotating through
all available paths, thus balancing the load across the paths and optimizing IOPS and response time.
Figure 4 illustrates the storage area network configuration for the 16-node Hyper-V failover cluster used for
this reference architecture.
Figure 4. Deployed Storage Area Network Configuration
Director and HBA Zoning Configuration
The solution described by this white paper uses a Brocade DCX enterprise-class director. Another option is to
deploy this solution in a dual-switch configuration; in that case, configure two zones for each host that use different switches
to provide redundancy. The fabric is configured so that a separate zone exists for each path from each Hyper-V
server’s HBA and its corresponding Hitachi Universal Storage Platform VM front-end port. This means that
each zone contains a single host bus adapter and a single Universal Storage Platform VM front-end port.
Having unique and separate zones configured for each HBA is referred to as single initiator zoning. Table 4
lists the zoning deployed in this solution.
Table 4. Zoning Configuration

Hyper-V Host   Host HBA Number   Director Zone Name       Storage System Port   Storage System Host Group
Node 1         HBA 1             Node_1_HBA1_USPVM_1A     1A                    Node_1_HBA_1
Node 1         HBA 2             Node_1_HBA2_USPVM_2A     2A                    Node_1_HBA_2
Node 2         HBA 1             Node_2_HBA1_USPVM_1B     1B                    Node_2_HBA_1
Node 2         HBA 2             Node_2_HBA2_USPVM_2B     2B                    Node_2_HBA_2
Node 3         HBA 1             Node_3_HBA1_USPVM_1C     1C                    Node_3_HBA_1
Node 3         HBA 2             Node_3_HBA2_USPVM_2C     2C                    Node_3_HBA_2
Node 4         HBA 1             Node_4_HBA1_USPVM_1D     1D                    Node_4_HBA_1
Node 4         HBA 2             Node_4_HBA2_USPVM_2D     2D                    Node_4_HBA_2
Node 5         HBA 1             Node_5_HBA1_USPVM_1E     1E                    Node_5_HBA_1
Node 5         HBA 2             Node_5_HBA2_USPVM_2E     2E                    Node_5_HBA_2
Node 6         HBA 1             Node_6_HBA1_USPVM_1F     1F                    Node_6_HBA_1
Node 6         HBA 2             Node_6_HBA2_USPVM_2F     2F                    Node_6_HBA_2
Node 7         HBA 1             Node_7_HBA1_USPVM_1G     1G                    Node_7_HBA_1
Node 7         HBA 2             Node_7_HBA2_USPVM_2G     2G                    Node_7_HBA_2
Node 8         HBA 1             Node_8_HBA1_USPVM_1H     1H                    Node_8_HBA_1
Node 8         HBA 2             Node_8_HBA2_USPVM_2H     2H                    Node_8_HBA_2
Node 9         HBA 1             Node_9_HBA1_USPVM_1A     1A                    Node_9_HBA_1
Node 9         HBA 2             Node_9_HBA2_USPVM_2A     2A                    Node_9_HBA_2
Node 10        HBA 1             Node_10_HBA1_USPVM_1B    1B                    Node_10_HBA_1
Node 10        HBA 2             Node_10_HBA2_USPVM_2B    2B                    Node_10_HBA_2
Node 11        HBA 1             Node_11_HBA1_USPVM_1C    1C                    Node_11_HBA_1
Node 11        HBA 2             Node_11_HBA2_USPVM_2C    2C                    Node_11_HBA_2
Node 12        HBA 1             Node_12_HBA1_USPVM_1D    1D                    Node_12_HBA_1
Node 12        HBA 2             Node_12_HBA2_USPVM_2D    2D                    Node_12_HBA_2
Node 13        HBA 1             Node_13_HBA1_USPVM_1E    1E                    Node_13_HBA_1
Node 13        HBA 2             Node_13_HBA2_USPVM_2E    2E                    Node_13_HBA_2
Node 14        HBA 1             Node_14_HBA1_USPVM_1F    1F                    Node_14_HBA_1
Node 14        HBA 2             Node_14_HBA2_USPVM_2F    2F                    Node_14_HBA_2
Node 15        HBA 1             Node_15_HBA1_USPVM_1G    1G                    Node_15_HBA_1
Node 15        HBA 2             Node_15_HBA2_USPVM_2G    2G                    Node_15_HBA_2
Node 16        HBA 1             Node_16_HBA1_USPVM_1H    1H                    Node_16_HBA_1
Node 16        HBA 2             Node_16_HBA2_USPVM_2H    2H                    Node_16_HBA_2
Table 5 lists firmware levels for the HBA and the director deployed in this solution.
Table 5. Deployed Firmware

Device                  Firmware Level
Brocade DCX Director    6.3.0.b
Brocade HBA 825 8Gb     Storport Miniport Driver 2.1.0.2; Firmware 2.1.0.2
Host Storage Group Configuration
This section describes the host storage group configuration deployed on the Universal Storage Platform VM to
support the 16-node Hyper-V failover cluster configuration.
Provisioning storage on two Fibre Channel front-end ports (one port per controller) is sufficient for redundancy
on the Hitachi Universal Storage Platform VM. This results in two paths to each LU from the Hyper-V host's
point of view. For higher availability, configure the target ports to use two separate fabrics (if using switches)
or an enterprise-class director so that multiple paths are always available to the Hyper-V server.
Hyper-V servers that access LUs on the storage systems must be configured properly so that the appropriate
Hyper-V parent and child partitions can access the storage. With the Universal Storage Platform VM, this is
accomplished at the storage level by using host storage groups (HSGs). HSGs define which LUs a particular
Hyper-V server can access. Hitachi Data Systems recommends creating an HSG for each Hyper-V server
and using the name of the Hyper-V server in the HSG for documentation purposes.
In this reference architecture, host storage groups are created to allow and control Hyper-V host access to
LUNs. They are created on a per Hyper-V host basis within the failover cluster on Fibre Channel ports, on both
cluster 1 and cluster 2 on the storage system.
This configuration is described in Table 4: every Hyper-V host has two HBAs (HBA1 and HBA2), and the host
groups are created on ports 1A, 1B, 1C, 1D, 1E, 1F, 1G and 1H on Cluster 1, and on ports 2A, 2B, 2C, 2D,
2E, 2F, 2G and 2H on Cluster 2.
Software
This section describes the software required to deploy the Hyper-V scalable cluster architecture on the Hitachi
Universal Storage Platform VM. Table 6 lists the software used in this reference architecture.
Table 6. Deployed Software

Software                                 Version
Windows Server 2008 Enterprise Edition   Release 2
Hitachi Storage Navigator                7.0
Hitachi Performance Monitor              7.0
Microsoft MPIO                           006.0001.7600.6385
Windows Server 2008
Windows Server 2008 Enterprise Edition or Windows Server 2008 Datacenter must be used for the physical
computers. These servers must run the same version of Windows Server 2008, including the same type of
installation. That is, all servers must run either a full installation of Windows Server 2008 or a Windows Server 2008 Server
Core installation. The failover clustering feature enables the creation and management of failover clusters.
This reference architecture uses Windows Server Enterprise 2008 Release 2 with the Failover Cluster feature
installed.
The Hyper-V role was enabled on all servers that formed the Hyper-V failover cluster.
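For reference, both the failover clustering feature and the Hyper-V role can be enabled from an elevated PowerShell prompt with the Server Manager module included in Windows Server 2008 R2. The following is a minimal sketch assuming the default feature names; confirm the names on your hosts with Get-WindowsFeature.

    # Minimal sketch: enable the failover clustering feature and the Hyper-V role on one node.
    # Repeat on every node that will join the Hyper-V failover cluster.
    Import-Module ServerManager

    Add-WindowsFeature Failover-Clustering     # failover clustering feature
    Add-WindowsFeature Hyper-V -Restart        # Hyper-V role; the node restarts to complete installation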
Multipathing Software
Multipathing software, such as Hitachi Dynamic Link Manager or Microsoft Windows Server 2008 native
multipath IO (MPIO), is a critical component of a highly available system. Multipathing software allows the
Windows operating system to see and access multiple paths to the same LU, enabling data to travel any
available path for increased performance or continued access to data in the case of a failed path. Hitachi Data
Systems recommends using the round robin load-balancing algorithm in both Hitachi Dynamic Link Manager
software and MPIO to distribute load evenly over all available HBAs. Hitachi Data Systems testing used MPIO
for the solution described in this white paper.
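As an illustration, the native MPIO feature can be enabled and a round robin default policy applied from the command line with the built-in mpclaim.exe utility on Windows Server 2008 R2. This is a hedged sketch; the load-balance policy value and the decision to claim all devices are assumptions to verify against Hitachi's recommended settings for the Universal Storage Platform.

    # Hedged sketch: install native MPIO, claim multipath devices and set a round robin default policy.
    Import-Module ServerManager
    Add-WindowsFeature Multipath-IO            # native Windows Server 2008 R2 MPIO feature

    # Claim all attached multipath-capable devices; -n suppresses the automatic reboot, so
    # restart the node manually afterward. A specific vendor/product ID can be supplied with -d
    # instead of claiming everything with -a "".
    mpclaim -n -i -a ""

    # Set the MPIO default load-balance policy to round robin (policy value 2 on this release).
    mpclaim -l -m 2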
As the number of nodes in the Hyper-V failover cluster increases, consider using Hitachi Global Link Availability
Management software to simplify the management of multiple paths across the cluster. Global Link Availability
Management software can greatly simplify the management of larger multipath environments by providing
centralized visibility and reporting of all the paths in the cluster.
Microsoft Virtual Machine Manager 2008 R2
Virtual Machine Manager 2008 R2 (VMM) is Microsoft’s management solution for the virtualized data center. VMM
enables the consolidation of multiple physical servers onto Hyper-V host servers as guest virtual machines,
provides for the rapid provisioning of virtual machines, and offers unified management of the virtual
infrastructure through one console. This reference architecture uses Microsoft Virtual Machine Manager 2008
R2.
Hitachi Management Tools
This section describes Hitachi management tools used to deploy this solution on the Hitachi Universal Storage
Platform VM.
Hitachi Storage Navigator Software
Hitachi Storage Navigator software, a required part of this solution, is the basic management tool licensed with
the system. It monitors and manages the Hitachi Universal Storage Platform VM through either a GUI or a
command-line interface (CLI). Use Storage Navigator software to create RAID groups and logical units and to
assign those logical units to the Hyper-V host servers. Storage Navigator software is also useful for monitoring
events and status of the various components on a Universal Storage Platform.
Hitachi Device Manager Software
Hitachi Device Manager software provides centralized management of all Hitachi storage systems, including
the Universal Storage Platform. Device Manager software can link to Storage Navigator software, and it has
the ability to provision using storage pools, manage replication between storage systems, and logically group
resources for more efficient management. While this software is optional, Hitachi Data Systems recommends
its use because it simplifies management of multiple storage systems.
Hitachi Storage Array Management Pack
The Hitachi Storage Array Management Pack allows for the monitoring of key components of the Hitachi
Universal Storage Platform VM. It is installed under Microsoft System Center Operations Manger 2007 Service
Pack 1 and displays and monitors the health of Hitachi Storage Platform VM’s storage system groups and LUs.
Hitachi Performance Monitor Feature
The Hitachi Performance Monitor feature is included as part of Storage Navigator software. It provides
detailed, in-depth storage performance monitoring and reporting of Hitachi storage systems including drives,
logical volumes, processors, cache, ports and other resources. It helps organizations ensure that they
achieve and maintain their service level objectives for performance and availability, while maximizing the
utilization of their storage assets. Performance Monitor’s in-depth troubleshooting and analysis reduce the time
required to resolve storage performance problems. It is an essential tool for planning and analysis of storage
resource requirements.
Hitachi Dynamic Provisioning Software
On the Universal Storage Platform VM, Hitachi Dynamic Provisioning software provides virtual storage capacity
that eliminates application service interruptions, reduces costs and simplifies administration, as follows:
• Optimizes or “right-sizes” storage performance and capacity based on business or application requirements.
• Supports deferring storage capacity upgrades to align with actual business usage.
• Simplifies and adds agility to the storage administration process.
• Provides performance improvements through automatic optimized wide striping of data across all available
disks in a storage pool.
Storage Deployment Considerations
This section describes storage options and considerations in a Hyper-V failover cluster environment. Windows
2008 R2 offers the choice of using standard Windows volumes or deploying Cluster Shared Volumes (CSVs)
on the Hitachi Universal Storage Platform family. You must also consider whether to place multiple VMs on a
single LU or allocate one LU per VM. This section also describes mapping of LUs containing VHDs and LUs
used as pass-through disks.
Standard Windows Volumes
A key storage decision in a Hyper-V failover cluster is whether to host multiple VMs on a single LU or whether
each VM has its own exclusive LU or set of LUs. The main difference between these two options is how
failover in the cluster is handled for the highly available virtual machines.
With standard Windows volumes, the file system accesses the LU at the volume level, not at the file level. This
means that when highly available VMs are moved between nodes in the Hyper-V cluster, all the LUs
associated with those VMs move also.
If you decide to share a LU among multiple VMs, consider the following:
• For a planned migration such as a quick or live migration of a single highly available VM, any other VMs that
share the same LU also migrate. To independently migrate a VM, it must reside on a non-shared LU.
Note that this restriction does not apply to CSVs.
• For an unplanned migration due to a failure of a node within a cluster when a shared LU is used, the
resource requirements for all the VMs on that shared LU can become an issue. If you use a shared LU,
ensure that the other nodes in the cluster have sufficient resources available to host the VMs hosted on the
LU. If a failure occurs and the cluster service is unable to bring all the highly available VMs online on
another node, it retries on all the other available nodes in the cluster. If none of the other nodes in the
cluster has sufficient resources available to host all the VMs that share that single LU, those VMs cannot
come online.
Review the considerations listed in Table 7 to decide whether to share LUs between multiple VMs.
Table 7. VM Shared LU Considerations

Multiple VMs per LU                      Single VM per LU
Only VHDs can be used                    VHDs and pass-through disks can both be used
No pass-through disks allowed            Pass-through disks can be used
All VMs sharing the LU move together     VMs can be migrated individually
Note that using multiple VMs per LU might negatively affect performance due to additional I/O load on the
shared LU.
Cluster Shared Volumes
CSVs are cluster disks that can be accessed simultaneously by all nodes in the cluster. CSVs are available in
Microsoft Hyper-V Server 2008 R2 with the failover clustering feature. A CSV is a standard cluster disk containing
an NTFS volume accessible in read/write mode by all nodes in the cluster. While access is shared by all cluster
nodes, the CSV is physically mounted on only one of the cluster nodes, the coordinator node. All NTFS
metadata updates are sent over the LAN to the coordinator node during I/O operations, but each cluster node
is free to send block-mode read/write commands and data directly to the CSV.
Using CSV in a Hyper-V failover cluster offers the following advantages:
• Simplifies storage management, because CSVs require fewer LUs to host the same number of virtual
machines.
• Eliminates the drive letter restriction because CSVs do not require a drive letter.
• Provides individual failover of virtual machines even when multiple virtual machines share the same LU. This
allows for quick or live migration to move virtual machines independently within the cluster.
Keep these storage planning considerations in mind when using CSVs in a Hyper-V failover cluster:
• Windows backup of CSVs from the Hyper-V host is not yet supported (Windows backup is supported for
standard volumes). At this time, Windows backup is only supported within the guest virtual machine. To
safely back up a CSV at the Hyper-V parent level, the VSS Hyper-V writer must be used. Check with your
application backup vendor to ensure compatibility with CSVs.
• Hardware-based storage system replication copies at the LU level, so the replicated CSV most likely
contains multiple virtual machine VHDs. If you are using the replicated CSV in a high availability or disaster
recovery solution to support quick or live migration, all virtual machines contained in the CSV are migrated.
• To provide for ease of management and scalability, Hitachi Data Systems recommends deploying CSVs
using Dynamic Provisioning pools. Typically, CSVs contain multiple VHD files for the virtual machines and
the LUs deployed are usually larger than LUs deployed on standard Windows volumes. This makes Dynamic
Provisioning pools a good choice because, as the demand for additional virtual machines grows, capacity and
I/O processing power can be added dynamically to the environment.
• It is important to understand the workloads of individual virtual machines when hosting them on a CSV.
Ensure that the Dynamic Provisioning pool that is to host the CSV can support the aggregate workload of the
virtual machines that are contained within the CSV.
• The use of pass-through disks is not allowed with CSVs.
For more information, see the Microsoft TechNet article “Hyper-V: Using Live Migration with Cluster Shared
Volumes in Windows Server 2008 R2.”
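For illustration, a cluster disk can be converted to a CSV from PowerShell once Cluster Shared Volumes have been enabled on the cluster. This is a minimal sketch using the FailoverClusters module in Windows Server 2008 R2; the cluster disk name is hypothetical, and the cluster property used to enable CSV should be verified on your build.

    # Hedged sketch: enable Cluster Shared Volumes on the cluster and convert an existing
    # cluster disk to a CSV. "Cluster Disk 2" is a hypothetical resource name.
    Import-Module FailoverClusters

    # Enabling CSV is a one-time, cluster-wide action (also available in Failover Cluster Manager).
    (Get-Cluster).EnableSharedVolumes = "Enabled"

    # Convert an available cluster disk into a Cluster Shared Volume.
    Add-ClusterSharedVolume -Name "Cluster Disk 2"

    # The CSV then appears to every node under the common namespace C:\ClusterStorage\.
    Get-ClusterSharedVolume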
VHDs and LU Mapping
LUs used in this architecture that contain VHDs are deployed using Dynamic Provisioning pools. Figure 6
shows how the VHD LUs are allocated from the Dynamic Provisioning pools to the mapping within the Hyper-V
hosts and to the VMs. Although this figure illustrates a one-to-one assignment of a VM to a node, with Hyper-V
failover clustering, multiple VMs can run on one or more cluster nodes.
Figure 6. VHD LUN Mapping
VHDs and Pass-through Disks for Scaling and Availability
A Hyper-V pass-through disk is a physical disk or LU that is mapped or presented directly to the guest OS.
Hyper-V pass-through disks normally provide better performance than VHDs, although with the release of
Windows Server 2008 Release 2, Microsoft indicates that the baseline performance difference between VHDs
and pass-through disks is negligible.
After the pass-through disk is visible to and offline within the Hyper-V parent partition, it can be made available
to the guest virtual machine using the Hyper-V Manager. Pass-through disks have the following characteristics:
• Must be in the offline state from the Hyper-V parent perspective, except in the case of clustered or highly
available virtual machines.
• Presented as raw disk to the Hyper-V host partition.
• Cannot be dynamically expanded.
• Do not support taking snapshots or using differencing disks.
• Are easier to scale to a larger number of virtual machines because pass-through disks do not require drive
letters. The raw disk is formatted and assigned a volume label and drive letter in the guest virtual machine
partition.
VHD Storage Path
With VHDs, all I/O goes through two complete storage stacks, once in the guest virtual machine partition and
once in the Hyper-V host partition. This means that the guest application disk I/O request goes through the
storage stack within the guest OS and the Hyper-V parent partition file system.
Pass-through Disk Storage Path
When using the pass-through disk feature, the NTFS file system on the parent partition can be bypassed
during disk operations, minimizing CPU overhead and maximizing I/O performance. With pass-through disks,
the I/O traverses only one file system, the one in the child partition. Pass-through disks offer higher throughput
because only one file system is traversed, thus requiring less code execution.
Hitachi Data Systems recommends using pass-through disks when hosting applications with high storage
scalability and performance requirements.
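As an example of preparing a pass-through disk, the LU can be initialized and then taken offline in the Hyper-V parent partition with diskpart.exe before it is attached to the guest in Hyper-V Manager. This is a hedged sketch; the disk number and file path are placeholders, so confirm the disk number with the list disk command or Disk Management first.

    # Hedged sketch: take a candidate pass-through LU offline in the parent partition.
    # Disk number 3 is a placeholder; verify it with "list disk" before running.
    $diskpartScript = @"
    select disk 3
    attributes disk clear readonly
    offline disk
    "@

    New-Item -ItemType Directory -Path C:\Temp -Force | Out-Null
    $diskpartScript | Set-Content -Path C:\Temp\passthrough.txt
    diskpart /s C:\Temp\passthrough.txt

    # The disk can now be assigned to a VM as a physical hard disk (pass-through)
    # through the Hyper-V Manager settings for that virtual machine.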
Figure 7 shows how the pass-through LUs are allocated from the Dynamic Provisioning pools to the mapping
within the Hyper-V hosts and down to the actual VMs.
Figure 7. Pass-through LUN Mapping
For more information about storage options when deploying Hyper-V and the best practices for implementing
Hyper-V on the Universal Storage Platform, see the Hitachi Universal Storage Platform Family Best Practices
with Hyper-V Best Practices Guide white paper.
Deploying the Solution
This section describes considerations and steps required for deploying a Hyper-V failover cluster on a
Universal Storage Platform family storage system.
Configuring the Hyper-V Servers for Clustering
This section provides a high-level overview of the steps required to configure a Hyper-V cluster and set up the
servers.
This solution uses the Node and Disk Majority configuration, in which the servers and a single shared disk
resource vote to determine if the cluster is in a high availability state. Keep the following considerations in mind
when using Node and Disk Majority configuration:
• Microsoft and Hitachi Data Systems recommend this configuration for Hyper-V failover clusters with an even
number of nodes.
• A minimum of 500MB is required on the shared disk used as a witness disk.
• If the witness disk becomes unavailable, more than half of the nodes must be available for the cluster to
continue running.
For more information, including step-by-step procedures about building a Hyper-V failover cluster, see the
Microsoft Download Center article “Step-by-Step Guide for Testing Hyper-V and Failover Clustering.”
To configure your Hyper-V servers for clustering, follow these high-level steps:
1. Install Microsoft Windows Server 2008 R2 x64 Enterprise edition or Microsoft Windows Server 2008
R2 x64 Datacenter on all servers that will form the Hyper-V failover cluster.
2. Ensure that the servers and storage used to deploy the Hyper-V failover cluster are supported by the
Microsoft Failover Cluster Configuration Program.
For more information, see Microsoft’s Failover Clustering Program Overview web site.
3. Configure the proper network connections and also configure the storage system as described in this
document.
4. Ensure that either Microsoft MPIO or Hitachi Dynamic Link Manager software is installed on each node
in the Hyper-V failover cluster.
5. Configure a shared LU to be accessible to all servers in the cluster to support the Node and Disk
Majority configuration.
6. Configure the Hyper-V failover clustering feature by installing this feature on all the servers that will
make up the Hyper-V failover cluster.
Enable this feature by using Server Manager’s Add Features wizard.
7. Use Failover Cluster Manager to create the failover cluster.
8. Run the cluster validation routines as nodes are added to the failover cluster.
These routines address any problems that might occur and ensure that all the requirements for a
Hyper-V failover cluster are met.
For more information about using cluster validation routines with the Hitachi Universal Storage Platform
family, see the “Best Practices” section of this document.
9. Enable the Hyper-V role on each of the nodes that comprise the Hyper-V failover cluster.
10. Add virtual machines to the cluster.
For more information, see the “Adding VMs to the Hyper-V Failover Cluster” section of this paper.
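The following is a minimal PowerShell sketch of steps 6 through 8 using the FailoverClusters module included with Windows Server 2008 R2. The node names, cluster name, IP address and witness disk name are hypothetical; substitute the values for your environment.

    # Hedged sketch of steps 6-8: install the feature, validate the nodes and create the cluster.
    Import-Module ServerManager
    Add-WindowsFeature Failover-Clustering                       # step 6: repeat on every node

    Import-Module FailoverClusters
    Test-Cluster -Node HVNODE01, HVNODE02                        # step 8: cluster validation routines
    New-Cluster -Name HVCLUS01 -Node HVNODE01, HVNODE02 `
                -StaticAddress 192.168.10.50                     # step 7: create the failover cluster

    # With an even number of nodes, point the quorum at the shared witness disk (step 5).
    Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"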
Configuring the Storage Area Network for Hyper-V Failover Clustering
Configure separate zones for each HBA installed in the Hyper-V hosts in the cluster.
Configuring Host Storage Groups
To use LUN Manager to configure host storage groups and security, follow these steps:
1. In Hitachi Storage Navigator software, select GO > LUN Manager > LUN Manager.
The LUN Manager/LU Path & Security window displays.
2. Select a host port by clicking on it.
This highlights the host port.
3. Right-click the host port and select LUN Security:Disable->Enable from the pop-up menu.
Security for the port is set to enabled.
4. Select a host port by clicking on it.
This highlights the host port.
5. Right-click the host port to be used by your application and select Add New Host Group from the pop-up menu.
The Add New Host Group window displays.
6. In the Group Name field, enter a name for the Hyper-V cluster node and HBA, preferably one that
matches the server name.
7. For the Host Mode field, choose 2C [Windows Extension] from the drop-down menu and click OK.
8. Repeat Step 1 through Step 7 for all host groups to be used for the Hyper-V failover cluster.
Assigning World Wide Names to Host Groups
To assign a world wide name (WWN) to the Hyper-V node host groups, follow these steps:
1. In Hitachi Storage Navigator software, choose GO > LUN Manager > LUN Manager.
The LUN Manager/LU Path & Security window displays.
2. Expand the tree for the port that hosts the host group to which you want to add a WWN.
3. Select a host group by clicking on it.
This highlights the host group.
4. Right-click on the host group name, choose Add New WWN from the pop-up menu.
The Add New WWN pop-up menu displays.
5. Choose a WWN from the WWN drop-down menu.
You can manually enter a WWN if it has not been discovered yet.
6. Repeat Steps 1 - 5 for all Hyper-V host groups in your Hyper-V failover cluster.
Note: Ensure that at least two HBA ports on each server are connected to the storage and create host
groups with the appropriate WWN of the HBA to configure Fibre Channel connectivity between server
and storage. Spread access across different cluster (CL) interfaces to provide best performance,
throughput and availability.
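When entering WWNs manually, the port WWNs of the host's HBAs can be read on each Hyper-V node through the standard Microsoft Fibre Channel HBA WMI classes. This is a hedged sketch; the class and property names assume an HBA driver that implements the Microsoft FC WMI provider, so verify them against your Brocade HBA driver before relying on the output.

    # Hedged sketch: list the Fibre Channel port WWNs of the local HBAs so they can be
    # matched to the WWNs offered in the Storage Navigator Add New WWN dialog box.
    Get-WmiObject -Namespace root\WMI -Class MSFC_FibrePortHBAAttributes |
        ForEach-Object {
            # PortWWN is returned as a byte array; format it as a 16-digit hex string.
            ($_.Attributes.PortWWN | ForEach-Object { $_.ToString("X2") }) -join ""
        }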
Creating a Dynamic Provisioning Pool
In this scalable Hyper-V architecture, the Dynamic Provisioning pools host the LUs that support the guest
virtual machine operating systems running within the Hyper-V failover cluster.
In this configuration, data is distributed across all of the hard disk drives (HDDs) in the Dynamic Provisioning
pool. This helps to prevent contention for I/O on heavily used LUNs by distributing the usage across all HDDs
within the Dynamic Provisioning pool.
Hitachi Dynamic Provisioning software requires the following components:
• Storage Navigator software
• Hitachi Dynamic Provisioning software license key
Hitachi Dynamic Provisioning software enables you to create a Dynamic Provisioning pool that is made up of
one or more RAID groups. A Dynamic Provisioning LU does not consume space from the Dynamic
Provisioning pool until the host writes to the Dynamic Provisioning LU.
Note: If a full format is performed on a Windows volume it will consume all of the space allocated for the
volume from the Dynamic Provisioning pool.
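Because of this behavior, use a quick format when preparing Dynamic Provisioning LUs in Windows so that only written data consumes pool capacity. The following is a hedged diskpart.exe sketch; the disk number, volume label and file path are placeholders.

    # Hedged sketch: bring a Dynamic Provisioning LU online and quick format it so that
    # only written data consumes capacity from the pool.
    $diskpartScript = @"
    select disk 4
    online disk
    attributes disk clear readonly
    create partition primary
    format fs=ntfs label="VMSTORE01" quick
    "@

    New-Item -ItemType Directory -Path C:\Temp -Force | Out-Null
    $diskpartScript | Set-Content -Path C:\Temp\dpvol.txt
    diskpart /s C:\Temp\dpvol.txt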
To create a Dynamic Provisioning pool, follow these steps:
1. In Hitachi Storage Navigator software, select GO > LUN Expansion/VLL > Pool.
The Pool window displays.
2. In the Pool sub-window, right-click the Dynamic Provisioning folder and select New Pool from the
pop-up menu.
The New Pool dialog box displays.
3. In the Pool ID field enter a number to identify the pool.
Use numbers from 0-127 that are not in use by another pool.
4. Verify that the settings are correct and click on the Set button.
The pool is created but LDEVs must be added before the process is complete.
5. In the Free LDEVs pane, choose values from the LDKC and CU using the drop-down menus.
The free LDEVs for that LDKC and CU combination displays.
6. In the Free LDEVs list, select the LDEVs that you want to add to the pool as Pool-VOLs by clicking
them.
The selected LDEVs are highlighted.
7. Click the Add Pool-VOL button to add the selected LDEVs to the pool.
A pop-up window displays asking you to verify your selections.
8. Click OK.
The selected LDEVs are displayed in the Pool-VOL pane.
9. Click the Apply button.
A pop-up window displays asking you to verify the action.
10. Click OK.
A pop-up displays indicating that the requested operation is complete.
11. Click OK.
The new Pool-VOLs are added and the pool is ready.
Creating a Dynamic Provisioning LU
To create a Dynamic Provisioning LU, follow these steps:
1. In Hitachi Storage Navigator software, choose GO > LUN Expansion/VLL > V-VOL.
The V-VOL window displays.
2. In the pool tree, right-click the Dynamic Provisioning folder and select New V-VOL Group from the
pop-up menu.
The New V-VOL Group dialog box displays.
3. Choose a V-VOL group ID from the V-VOL Group drop-down menu.
The V-VOL group ID can be any number between 1 and 65535 that is not already in use.
4. Choose OPEN-V from the Emulation Type drop-down menu if it isn't already selected and click Next.
The Create V-VOL dialog box displays.
5. In the Capacity field, enter the size of the V-VOL in megabytes that you want to create.
The range of allowable entries is shown to the right of the field.
6. In the Number of V-VOLs field, enter the number of V-VOLs that you want to create in this V-VOL
Group.
The range of allowable entries is shown to the right of the field.
7. Click the Set button.
The DP-VOLS are added to the V-VOL list.
8. Click the Next button.
The Create V-VOL dialog box displays.
9. In the Volume list, select a volume, choose a value in the Select CU No drop-down menu, and click a
cell to select the LDEV number to be assigned to the V-VOL.
You can select multiple volumes and LDEV numbers.
Note that only the areas displayed in the white cells are available to be assigned to DP-VOLs. After
you select an LDEV, it turns blue.
If you are prompted for an SSID, contact your Hitachi Data Systems field representative.
10. Click Next.
The Create V-VOL Confirmation dialog box displays.
11. Verify that the settings are correct and click OK.
The V-VOL dialog box displays.
12. Click Apply and OK to create the V-VOLs.
Associating V-VOL Groups with a Dynamic Provisioning Pool
To associate V-VOLs with a Dynamic Provisioning pool, follow these steps:
1. In Hitachi Storage Navigator software, choose GO > LUN Expansion/VLL > V-VOL.
The V-VOL dialog box displays.
2. In the V-VOL Group – V-VOL tree, select the V-VOL group that contains the V-VOLs that you want to
associate with a Dynamic Provisioning pool.
3. Right-click the V-VOLs you want to associate to a pool and choose Associate V-VOL with Pool from
the drop-down menu.
The Connect Pool dialog box displays.
4. Highlight the pool ID with which you want to associate the V-VOL group and click Next.
The Change Threshold dialog box displays.
5. Select the threshold from the list displayed in the dialog box and click the Set button.
The settings are implemented and the V-VOL window displays.
6. Click Apply and OK.
These V-VOLs are now Dynamic Provisioning volumes (DP-VOLs) and can now be assigned to the
host storage group the same as a standard LDEV.
Assigning LUs to a Host Storage Group
To assign DP-VOLs as LUs to a host storage group, follow these steps:
1. In Hitachi Storage Navigator software, select GO > LUN Manager.
The LUN Manager dialog box displays.
2. Click a host group assigned within a storage port.
The right side of the LU Path pane shows unassigned LUNs and the LDEV pane shows the available
LDEVs.
3. Highlight the LDEVs you want to bring into the configuration and click the Add LU Path button.
This assigns each LDEV to a corresponding LUN. You can also drag the LDEVs to the desired LUN.
4. Repeat Step 1 through Step 3 for the remaining host groups.
The newly configured LUNs appear in blue while the changes are pending.
5. Click the Apply button to implement the pending changes.
Best Practices for Scaling Your Environment
This section provides best practice recommendations for deploying storage in a Hyper-V failover cluster,
including storage validation in the cluster, and Dynamic Provisioning guidelines.
Number of Virtual Machines per Standard LU or CSV
If you decide to run multiple guest virtual machines on a single standard LU or a CSV, understand that the number of
virtual machines that can run simultaneously depends on the aggregated capacity and performance
requirements of the guest virtual machines.
Because all LUs using storage from a particular Dynamic Provisioning pool share their performance and
capacity, Hitachi Data Systems recommends dedicating Dynamic Provisioning pools to the Hyper-V failover
cluster and not assigning LUs from the same Dynamic Provisioning pool to other non-Hyper-V hosts. This
prevents the Hyper-V I/O from affecting or being affected by other applications and LUs on the same Dynamic
Provisioning pool and makes management simpler.
Follow these best practices:
• Create and dedicate Dynamic Provisioning pools to your Hyper-V hosts.
• When LUs are shared within the Hyper-V failover cluster, always present a specific LU to all hosts using the
same host LUN number on each host. The logical host LUN on every node in the cluster must point to the
same physical LU.
• Create VHDs on the LUs only as needed.
• Monitor and measure the capacity and performance usage of the Dynamic Provisioning pool with Hitachi
Tuning Manager and Hitachi Performance Monitor software.
For more information, see the “Hyper-V Failover Cluster Storage Management” section in this paper.
Scaling Dynamic Provisioning Pools
Following are suggestions for managing specific Dynamic Provisioning pool capacity and performance
situations that might arise:
• If all of the capacity offered by the Dynamic Provisioning pool is used but performance of the Dynamic
Provisioning pool is still able to keep all the I/O within the 20ms response time, add RAID groups to the pool,
which adds capacity and performance.
• If all of the performance offered by the Dynamic Provisioning pool is used but capacity is still available, do
not use the remaining capacity by creating more LUs. This leads to even more competition on the Dynamic
Provisioning pool and overall performance for the virtual machines residing on this Dynamic Provisioning
pool is affected. In this case, leave the capacity unused and add more RAID groups to the pool and therefore
more performance resources.
Scaling Hyper-V Cluster Deployments
Microsoft Hyper-V failover clustering allows you to start with a two-node cluster and incrementally scale up to
a 16-node Hyper-V failover cluster. The Universal Storage Platform provides
the ease of management and scalability that enables the customer to add additional nodes to the failover
cluster.
Although Microsoft Hyper-V failover clusters can scale up to 16 nodes, managing a cluster of that size can
introduce additional complexity, and management of the cluster can become more difficult as the number of
nodes increases. Based on the need for high availability for an application or subset of applications, you might
decide to deploy multiple Hyper-V failover clusters to reduce complexity and management challenges.
The following sections describe required steps when scaling the Hyper-V failover cluster.
Adding Nodes to Hyper-V Failover Cluster
To add physical nodes to the failover cluster, follow these steps:
1. Create the zones for the new physical host server's HBAs on the director or dual-fabric switches.
2. Create host storage groups on the Hitachi Universal Storage Platform VM using at least two front-end ports
to provide performance and high availability.
3. Ensure MPIO or Hitachi Dynamic Link Manager multipathing software is installed.
4. Bring the new physical host server into the Hyper-V failover cluster and ensure that all existing cluster disks
are available to the new node.
5. Run the Microsoft Cluster Validation program against the cluster and correct any errors introduced by adding
the new node into the cluster.
Hitachi Data Systems recommends disabling storage tests when adding nodes to the failover cluster. For more
information about cluster validation storage tests, see the “Validating Storage on a Hyper-V Failover Cluster”
section of this paper.
Note: To run the storage test, all VMs and their associated cluster disks must be offline to the cluster while
the validation tests execute. Schedule this test during a cluster maintenance window.
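A hedged PowerShell sketch of step 5, honoring the recommendation above to skip the storage tests, is shown below. The cluster name is hypothetical, and the test name passed to -Ignore is an assumption to confirm with Get-Help Test-Cluster on your build.

    # Hedged sketch: revalidate the cluster after adding a node, skipping the disruptive
    # storage tests so the cluster disks and VMs can stay online during validation.
    Import-Module FailoverClusters
    Test-Cluster -Cluster HVCLUS01 -Ignore "Storage"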
Adding VMs to the Hyper-V Failover Cluster
Before you start this procedure, ensure that all files and disks (VHD or pass-through) that will belong to the VM
are on shared storage and are accessible to all the nodes in the cluster.
If new LUNs must be created, follow the steps in the “Creating a Dynamic Provisioning LU” section of
this paper to create the required LUNs to support the deployment of the new VM. Ensure that the pool hosting
the new LUN can support the added I/O and capacity requirements of the new VM being deployed.
If a new Dynamic Provisioning pool is required, follow the steps in the “Creating a Dynamic Provisioning Pool”
section of this paper.
To add a VM to the Hyper-V failover cluster, follow these steps:
1. Use the Disk Management console on each of the failover cluster nodes to verify that the newly
provisioned LU is visible and accessible by all nodes in the cluster.
Note: The LU is in an offline state at this time.
2. If the LU will host the VM’s VHD, bring the disk online to only one node in the cluster, initialize the LU,
and format it as an NTFS disk.
3. If the LU will be utilized as a pass-through disk for a new VM, bring the disk online to only one node in the cluster, initialize the LU and then take it offline to that node.
4. Under the Failover Cluster Manager snap-in, select Storage in the console tree to ensure that the newly provisioned LUs are listed as Available Storage disks to the cluster.
5. Use the Failover Cluster Management console to add the new LUs as storage cluster resources.
6. When the new LU is added as a cluster resource, specify which node owns the LU in the Failover Cluster Management console. (A PowerShell sketch of steps 5 and 6 follows this procedure.)
The Hyper-V Manager can now be used to create the VM on the newly provisioned LUN. The new VM
must be created on the node that owns the new LUN.
After the VM is created on the new LU, the VM can be made highly available. For more information
about making a VM highly available, see the Microsoft TechNet article “Hyper-V: Using Hyper-V and
Failover Clustering.”
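On Windows Server 2008 R2, steps 5 and 6 can also be performed with the FailoverClusters PowerShell module. This is a minimal sketch under the assumption that the new LU is already visible to every node; the cluster and node names are hypothetical.

Import-Module FailoverClusters

# Add every LU that is visible to all nodes, but not yet clustered, as Available Storage.
Get-ClusterAvailableDisk -Cluster "HVCLUSTER01" | Add-ClusterDisk

# Move ownership of the Available Storage group to the node that will host the new VM.
Move-ClusterGroup -Cluster "HVCLUSTER01" -Name "Available Storage" -Node "HVNODE02"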
Using the GUID to Present LUs to Hyper-V Hosts
As you add nodes or VMs to a cluster, it is likely that you will exceed the 26 drive letter limit. With Windows
Server 2008 it is now possible to scale the number of disks or LUs in the Hyper-V failover cluster to 2,000
volumes.
To scale beyond the 26 drive letter limitation, present the LUs to the Hyper-V hosts by using the LU's globally unique identifier (GUID) instead of a drive letter or a mount point.
After defining an LU on the storage system and preparing the LU for use by a VM with either Disk Manager or
the command line utility diskpart.exe, obtain the volume GUID with the command line utility
mountvol.exe.
Figure 5 shows the output of the mountvol.exe command, which enumerates the volumes on the Hyper-V host server and displays the associated volume globally unique identifiers (GUIDs).
Figure 5. mountvol.exe Output
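For reference, running mountvol.exe with no arguments produces a listing of the following general form; the GUIDs and mount points shown here are purely illustrative. A volume that has no drive letter or mount point is reported with “*** NO MOUNT POINTS ***”, and its \\?\Volume{GUID}\ path is the value used to present the LU to the Hyper-V host.

mountvol.exe

Possible values for VolumeName along with current mount points are:

    \\?\Volume{1a2b3c4d-0000-0000-0000-100000000000}\
        C:\

    \\?\Volume{5e6f7a8b-0000-0000-0000-200000000000}\
        *** NO MOUNT POINTS ***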
If the storage you define will be used to host a highly available guest machine within the Hyper-V failover cluster, use the GUID information provided in the Failover Cluster Management console. This ensures that the proper GUID is selected and that the storage can fail over properly between all nodes in the cluster.
Finding a GUID
Initialize the raw disk presented to the Hyper-V host with either the diskpart.exe command line utility or Disk Manager. Do not assign a drive letter or drive path. Assign a volume label and perform a quick format.
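The following is a minimal diskpart.exe sketch of that preparation, driven from PowerShell; the disk number and volume label are hypothetical and must match your environment. Run it on the cluster node that currently owns the disk.

# Prepare the new LU: bring it online, create a partition and quick-format it
# with a volume label but no drive letter or mount point.
$dpScript = @"
select disk 4
attributes disk clear readonly
online disk
create partition primary
format fs=ntfs label="VM-VHD-01" quick
"@
$dpScript | Set-Content -Path .\prepare_lu.txt -Encoding ASCII
diskpart.exe /s .\prepare_lu.txt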
To obtain a GUID and to create a highly available guest machine, follow these steps:
1. Launch the Hyper-V Failover Cluster Manager console.
2. In the navigation tree in the left pane, expand Features > Failover Cluster Manager.
3. Click Storage.
The Storage pane populates with all the available storage in your environment.
4. Expand the disk for which you want to display the GUID information.
5. Right-click the storage resource and select Properties from the pop-up menu.
The Properties dialog box displays. On the General tab, you can copy and paste the GUID
information for use in the Create New Virtual Machine wizard.
6. Use the GUID just obtained to set the VM’s location.
7. Create the VHD using the GUID information and continue with the creation of the new virtual machine.
Notice that the volume label of the disk is appended to the end of the GUID.
Hyper-V Cluster Management
This section provides guidance on common tasks required to manage highly available VMs in the Hyper-V
failover cluster environment as they pertain to storage.
Changing a Virtual Machine’s Storage Configuration
Changing a VM’s storage configuration normally involves changing its hardware profile. To make changes to a
VM’s storage configuration, follow these high-level steps:
1. Shut down the VM.
2. Make the required changes to the disk I/O configuration using the Hyper-V Manager console.
3. Restart the VM.
Note: This procedure is not necessary if you are running Hyper-V on Windows Server 2008 R2, because that release allows storage to be added to a virtual machine dynamically.
Note the following restrictions:
• Hot addition and removal of VHD and pass-through storage LUs requires that the LUs are defined on SCSI controllers in the virtual machine's configuration file (not on IDE controllers).
• Hot addition and removal of SCSI controllers is not supported.
Validating Storage on a Hyper-V Failover Cluster
Always run cluster validation routines after adding new LUs to the Hyper-V failover cluster. Cluster validation
performs many checks to ensure that the Hyper-V failover cluster is stable and reliable. The full cluster
validation routine checks the CPU manufacturer and versions, BIOS settings, network configurations, network interface card (NIC) settings, storage system settings and security settings to validate that the servers can function properly in Hyper-V failover clusters.
It is possible to run only the storage cluster validation routines to ensure that the newly added storage can
function properly in the cluster and meet Microsoft support requirements. It is important to note that LUs that
are newly added to the cluster configuration must not be allocated to an online VM for the tests to execute. In
addition, if the LU is configured as a CSV, you must explicitly take these disks offline to the cluster. For more
information about storage validation with CSVs, see the Microsoft TechNet article “Use Validation Tests for
Troubleshooting a Failover Cluster.”
Schedule storage cluster validation routines during a full cluster maintenance window, because running cluster validation against a single node is not a full cluster validation test. A PowerShell sketch for running only the storage validation tests follows the procedure below.
To validate the storage on a Hyper-V failover cluster, follow these steps:
1. In the failover cluster snap-in, from the console tree, ensure that Failover Cluster Management is
selected.
2. Click Validate This Cluster in the Action pane.
3. Select the Hyper-V host servers to be tested.
The Testing Options dialog box displays.
4. Select the Run only tests I select radio button.
The Test Selection window displays.
5. Select the check boxes for the cluster storage validation routines you want to run and click Next to start the
tests.
6. Review the output of the Cluster Storage Validation test to ensure that it reports no errors.
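The storage-only validation can also be run from PowerShell on Windows Server 2008 R2 with the FailoverClusters module. This is a minimal sketch; the cluster name is hypothetical, and the newly added LUs (and any CSVs) must still be offline to the cluster while the tests run.

Import-Module FailoverClusters

# Run only the storage validation tests against the existing cluster.
Test-Cluster -Cluster "HVCLUSTER01" -Include Storage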
Lab Validated Results and Specifications
Hitachi Data Systems lab testing shows that the Dynamic Provisioning pool building block, along with the advanced cache features of the Hitachi Universal Storage Platform family, can support over 7000 IOPS of web server traffic with average response times of 20ms or less. This is due to the cache-friendly, read-only nature of web server I/O traffic. The variance in IOPS delivered per guest machine is due to server and storage caching algorithms. Table 8 lists the results of placing the VHDs for eight virtual machines into the building block reference architecture described in this paper. This building block contains a single Dynamic Provisioning pool consisting of two RAID-5 (3D+1P) groups.
Table 8. Iometer Results

Guest Machine      Web Server IOPS      Response Time
VM1                907                  17.44ms
VM2                2291                 8.97ms
VM3                1081                 14.78ms
VM4                1079                 14.81ms
VM5                900                  17.92ms
VM6                1693                 9.45ms
Total VM IOPS      7951
Table 9 lists the I/O web server profile specifications used as input to the Iometer testing.
Table 9. Iometer Access Specifications for Web Server Guest Machines

Size (bytes)    % of Size    % Reads    % Random    Delay    Burst    Align    Reply
512             22           100        100         0        1        0        0
1024            15           100        100         0        1        0        0
2048            8            100        100         0        1        0        0
4096            23           100        100         0        1        0        0
8192            15           100        100         0        1        0        0
16384           2            100        100         0        1        0        0
32768           6            100        100         0        1        0        0
65536           7            100        100         0        1        0        0
131072          1            100        100         0        1        0        0
524288          1            100        100         0        1        0        0
It is important to monitor storage system resources to ensure that the storage system can support the I/O profile of the virtual machines deployed.
Hyper-V Failover Cluster Storage Management
A complete, end-to-end picture of your Hyper-V Server environment and continual monitoring of capacity and
performance are key components of a sound Hyper-V management strategy. The principles of analyzing the
performance of a guest partition installed under Hyper-V are the same as analyzing the performance of an
operating system installed on a physical machine. Monitor servers, operating systems, virtual machine
application instances, databases, database applications, storage and IP networks and the Hitachi Universal
Storage Platform family storage system using tools such as Windows Performance Monitor (PerfMon) and
Hitachi Performance Monitor feature.
Note that while PerfMon provides good overall I/O information about the Hyper-V parent and the guests under
the Hyper-V parent, it cannot identify all possible bottlenecks in an environment. For a good overall
understanding of the I/O profile of a Hyper-V parent and its guest partitions, monitor the storage system’s
performance with Hitachi Performance Monitor feature. Combining data from at least two performancemonitoring tools provides a more complete picture of the Hyper-V environment. Remember that PerfMon is a
per-server monitoring tool and cannot provide a holistic view of the storage system. For a complete view, use
PerfMon to monitor all servers that are sharing a RAID group. Also consider the use of Hitachi Tuning Manager
to proactively monitor the storage systems performance.
Windows Performance Monitor
PerfMon is a Windows-based application that allows administrators to monitor the performance of a system using counters and graphs, in logs or as alerts, on the local or a remote host. The best indicator of disk performance on a Hyper-V parent operating system is the pair of performance monitor counters \LogicalDisk(*)\Avg. Disk sec/Read and \LogicalDisk(*)\Avg. Disk sec/Write. These counters measure the latency of read and write operations as seen by the operating system. In general, average disk latencies greater than 20ms are cause for concern.
For more information about monitoring Hyper-V related counters, see the Microsoft TechNet article “Measuring
Performance on Hyper-V.”
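On Windows Server 2008 R2 these counters can also be sampled from PowerShell with Get-Counter, as in the sketch below. The sample interval, sample count and 20ms threshold check are illustrative choices, not required values.

# Sample logical disk latency every 15 seconds, 20 times, and list any
# instances whose average read or write latency exceeds 20ms (0.020 seconds).
$counters = "\LogicalDisk(*)\Avg. Disk sec/Read",
            "\LogicalDisk(*)\Avg. Disk sec/Write"
Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 20 |
    ForEach-Object {
        $_.CounterSamples |
            Where-Object { $_.CookedValue -gt 0.020 } |
            Select-Object Path, CookedValue
    }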
Hitachi Performance Monitor Feature
Hitachi Performance Monitor feature is a controller-based software application, enabled through Hitachi
Storage Navigator software, which monitors the performance of RAID groups, logical units and other elements
of the disk subsystem while tracking utilization rates of resources such as hard disk drives and processors.
Information is displayed using line graphs in the Performance Monitor windows, as shown in Figure 8, and can
also be saved in comma-separated value (.csv) files.
Figure 8. Hitachi Performance Monitor Feature
You can measure utilization rates of disk subsystem resources, such as load on disks and ports, with Hitachi
Performance Monitor feature. When a problem such as slow response occurs in a host, an administrator can
use Hitachi Performance Monitor feature to quickly determine if the disk subsystem is the source of the
problem.
Hitachi Tuning Manager Software
Hitachi Tuning Manager software enables you to proactively monitor, manage and plan the performance and capacity of the Hitachi storage attached to your Hyper-V servers. Hitachi Tuning Manager software consolidates statistical performance data from the entire storage path. It collects performance and capacity data from the operating system, switch ports, storage ports on the storage system, RAID groups and LUs, and provides the administrator with a complete performance picture. It provides historical, current and
forecast views of these metrics. For more information about Hitachi Tuning Manager software, see the Hitachi
Data Systems support portal.
Microsoft Virtual Machine Manager
This reference architecture uses Microsoft System Center Virtual Machine Manager (VMM) 2008 R2 to rapidly deploy virtual machines into the Hyper-V failover cluster. The template feature in VMM was used to create a template that captures the guest virtual machine hardware profile. You can create a template for a new virtual machine in one of two
ways:
• Use the template wizard to create a new template from the VMM library.
• Create a template from an existing VHD already stored in the VMM library.
The second method was used in the deployment described in this white paper. The template was created based on the requirements of the web server virtual machines, and the resulting web server template used for rapid deployment of virtual machines in the cluster is stored in the VMM library under VMs and Templates, as illustrated in Figure 9.
Figure 9. Virtual Machine Deployment Template
This template provides all the common information required to deploy a new virtual machine, including the following:
• Hardware configuration information defining the processor configuration, memory size for the virtual machine
and network adapters.
• OS configuration information defining the name of the computer, passwords and domain information.
To create a new virtual machine, navigate to VMs and Templates in the library view and right-click the web server template, as shown in Figure 10. A wizard guides you through a process in which you can change guest operating system options such as the computer name, license keys, and the location of the LU or LUs that contain the VHDs or pass-through disks.
Figure 10. Virtual Machine Creation Template
For more information, see the Microsoft TechNet article “How to Add a Host Cluster to VMM” and the Virtual
Machine Manager 2008 R2 web site.
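Template-based deployment can also be scripted through the VMM 2008 R2 PowerShell interface; the wizard's View Script button shows the exact commands VMM generates for a given deployment. The sketch below is illustrative only: the server, template, host and path names are hypothetical, and the parameters actually required depend on how the template was defined.

Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
Get-VMMServer -ComputerName "VMMSERVER01" | Out-Null

# Deploy a new web server virtual machine from the stored template onto a cluster node.
$template = Get-Template | Where-Object { $_.Name -eq "WebServerTemplate" }
$vmHost   = Get-VMHost -ComputerName "HVNODE02"
New-VM -Template $template -Name "WEBVM09" -VMHost $vmHost -Path "C:\ClusterStorage\Volume1"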
Conclusion
This document defines a reference architecture that is highly scalable by leveraging the features and functions
of the Hitachi Universal Storage Platform family and Hyper-V failover clustering. The reference architecture
allows scaling of the environment by adding nodes in the Hyper-V failover cluster to support a growing virtual
machine workload.
For more information about the Hitachi Universal Storage Platform family, visit the Hitachi Data Systems web site or contact a channel partner or sales representative.
Corporate Headquarters 750 Central Expressway, Santa Clara, California 95050-2627 USA
Contact Information: + 1 408 970 1000 www.hds.com / info@hds.com
Asia Pacific and Americas 750 Central Expressway, Santa Clara, California 95050-2627 USA
Contact Information: + 1 408 970 1000 www.hds.com / info@hds.com
Europe Headquarters Sefton Park, Stoke Poges, Buckinghamshire SL2 4HD United Kingdom
Contact Information: + 44 (0) 1753 618000 www.hds.com / info.uk@hds.com
Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of
Hitachi, Ltd., in the United States and other countries.
All other trademarks, service marks and company names mentioned in this document are properties of their respective owners.
Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered
or to be offered by Hitachi Data Systems. This document describes some capabilities that are conditioned on a maintenance contract with Hitachi Data Systems
being in effect and that may be configuration dependent, and features that may not be currently available. Contact your local Hitachi Data Systems sales office for
information on feature and product availability.
© Hitachi Data Systems Corporation 2010. All Rights Reserved.
AS-040-00 April 2010