Microsoft Hyper-V with IBM XIV Storage System Gen3
Best practices and guidelines
IBM Systems and Technology Group ISV Enablement
May 2014
Table of contents
Abstract
Introduction
IBM XIV Storage System Gen3
System x servers
Microsoft Failover Clustering overview
Microsoft hypervisor overview
HA VM overview
    HA VM quick migration
    HA VM live migration
    Hyper-V storage live migration
    Guest clusters
    Hyper-V storage options
Hyper-V with IBM XIV considerations
Storage planning
    IBM XIV storage sizing for Microsoft Hyper-V
    Fabric configuration including SAN zoning
    Multipathing
    IBM XIV volumes
    Thin provisioning
    Space reclamation using SCSI UNMAP
Virtual machine storage
    Fibre Channel
        Hyper-V virtual FC GUI-based configuration
        Hyper-V virtual FC CLI-based configuration
    iSCSI
    FCoE
    VHD and VHDX volumes
    Shared VHDX feature
    Pass-through disks
    IBM XIV online volume migration using Hyper-Scale Mobility
    Hyper-V CSV Cache
Resource metering
Microsoft System Center Virtual Machine Manager
VM data protection
    Tivoli FlashCopy Manager
Conclusion
Resources
Trademarks and special notices
Abstract
This white paper presents condensed Microsoft Hyper-V configuration guidelines and best
practices for IBM XIV Storage System Gen3. The emphasis is placed on storage considerations
and usage for Microsoft hypervisor solutions and includes array configuration, virtual machine
(VM) management, connectivity and disk options, and backup considerations. An introduction to
the IBM XIV storage automation features for Microsoft System Center 2012 Virtual Machine
Manager (SCVMM) is also included.
These Microsoft Hyper-V guidelines provide mid- to large-size businesses with general storage strategies for cloud environments, covering the latest Microsoft Windows Server 2012 R2 technologies.
The guidelines are not intended to be all-encompassing but should help to provide the necessary
tools for individual Information Technology (IT) departments to customize solutions and achieve
core virtualization goals. Intermediate experience with Microsoft Windows Server, Microsoft
System Center, failover cluster, Hyper-V parent and guest, Fibre Channel (FC), Ethernet,
Internet Small Computer System Interface (iSCSI), and IBM XIV Storage System Gen3
administration is recommended. However, technical reviews and supplemental resources are
provided throughout the paper.
Introduction
Today more than ever, consolidating and streamlining data center resources is especially important to
maintain enterprise efficiency and continue delivering high quality IT services at low user costs. This
consolidation and streamlining effort is primarily driven by virtualization using various hypervisor
technologies such as Microsoft® Hyper-V. As a result, the ability to enhance physical resource agility, high
availability (HA), and scalability features at flexible price-performance points using much smaller data
center footprints and a substantially reduced total cost of ownership (TCO) is possible.
One of the main reasons for the Microsoft-centric data center reduced TCO is the favorable Hyper-V
licensing model, not to mention assured comfort levels with existing Microsoft software interfaces.
Customers already purchase various Microsoft licenses that include a comprehensive set of hypervisor
features at little-to-no extra cost based on the required number and type of VM deployments. Furthermore,
with Microsoft continuing to develop business-critical software designs customized for seamless
integration with their Hyper-V cloud platforms, the virtualization feature parity gap between VMware and
Microsoft continues to diminish. This combination of low-cost adaptation, existing Microsoft product
interface familiarity, and impressive Hyper-V features motivates more and more data centers to switch to
Hyper-V business solutions.
However, while Microsoft Hyper-V plays a pivotal role in providing a simple and cost-effective virtualization
solution, it does so with an obvious stipulation. At the backbone of the cloud infrastructure, a robust
storage platform is also required to provide a virtualized approach that delivers comparable value. In order
to fully benefit from an end-to-end virtualization solution, an industry-leading storage foundation, such as
one provided by the enterprise-proven IBM® XIV® Storage System Gen3, is necessary.
Ultimately, IBM XIV Gen3 provides a remarkably feature-rich storage foundation for Hyper-V. Only a small
part of its countless strengths, the IBM XIV Storage System Gen3 supports key Microsoft Windows®
Server 2012 R2 Hyper-V and SMI-S requisites. The latter enables users to centrally manage cloud
infrastructures including IBM XIV storage using Microsoft SCVMM 2012. The result is a highly efficient,
integrated and easily managed end-to-end solution that embodies the best of virtual compute, network,
and storage characteristics to provide organizations a leading edge in a fiercely competitive global
marketplace.
IBM XIV Storage System Gen3
IBM XIV Storage System Gen3 is especially suited for forward-thinking customers seeking to enrich
neoteric or established Microsoft virtualization environments. This includes environments that use
stand-alone or clustered Hyper-V hosts with or without SCVMM 2012. The IBM XIV system is already
known for its highly acclaimed storage management simplicity while still offering enterprise-class reliability
and performance. However, the IBM XIV system also helps to deliver a complete virtualization solution
that all administrators can use with minimal effort while providing extensive Microsoft virtualization
backbone functionality that is favorably augmented with SCVMM SMI-S storage automation capabilities.
For all intents and purposes, pairing the IBM XIV system with Hyper-V and SCVMM 2012 provides straightforward, nearly effortless VM storage management that helps decidedly lower overall cloud expenses.
As previously alluded to, at the heart of its innovative, user-friendly design is a storage
framework built with high performance and high availability in mind. The highly scalable and distributed
architecture of the XIV system provides a combined total of up to 360 GB of cache and individual modules
powered by quad-core Intel® Xeon® processors. Up to six dedicated host interface modules ensure
optimal, balanced data distribution across up to 180 disks to eliminate hot spots. In addition to data
integrity benefits, since every logical unit number (LUN) is striped across all of the storage system disks,
the chance of saturating I/O is greatly reduced when compared to conventional architectural approaches
using Redundant Array of Independent Disks (RAID) sets.
IBM XIV Storage System Gen3 enhanced scripting capabilities and automation of many of its core
functions also greatly reduces data management burdens. At the center of the IBM XIV distributed
architecture is unparalleled virtual storage that automatically self-tunes as necessary based on fluctuating
application workloads. Accordingly, businesses save considerable time and labor not having to plan for
and maintain traditional, complex RAID configurations. Furthermore, the IBM XIV system automates many
self-healing and data-protection mechanisms to boost its high availability.
Contributing considerably to the storage system high availability, IBM XIV data distribution algorithms help
to ensure fast recovery from major and minor faults by using pre-failure detection and proactive corrective
healing. In the event of module or disk failures, global spares striped across all disks quickly redistribute
data back to a fully redundant state. During such events, the performance impact is notably minimized and
further enhanced by the IBM XIV physical data protection attributes.
The IBM XIV physical data protection attributes span multiple levels that include active/active N+1
redundancy of all data modules, disks, interconnect switches, and battery backup uninterruptible power
supply (UPS) units. The IBM XIV Storage System also contains an automatic transfer switch for external
power supply redundancy. A built-in UPS complex consisting of three UPS units protects all disks, cache,
and electronics with redundant power supplies and fans, which further promotes application data integrity,
availability, and reliability.
In addition to the long list of previously referenced features, the IBM XIV Storage System Gen3 also contains the following distinct highlights:
•	Linear scaling up to 325 TB per array and IBM Hyper-Scale Mobility for transparent online volume migrations across multiple systems
•	A minimum of 72 and up to a maximum of 180 self-encrypting hard drives of 1 TB to 4 TB each
•	Substantial hardware upgrades compared to the previous XIV generation, including an InfiniBand® interconnect, larger cache (up to 360 GB of combined memory), faster serial-attached SCSI (SAS) disk controllers, and increased processing power; in addition, each XIV Gen3 interface module delivers 8 Gb FC and 1 Gb (or optional 10 Gb) iSCSI connectivity
•	Optional solid-state drive (SSD) cache that provides up to 4.5 times faster performance for highly random application workloads
•	Enhanced performance for business intelligence, archiving, and other I/O-intensive applications with up to four times the throughput (10 GBps) compared to the previous XIV generation
•	Industry-leading automatic data redistribution and rebuild times for disk or module failures
•	New cloud and virtualization enhancements including Microsoft Windows Server 2012 space support
•	State-of-the-art snapshot functionality including snap-of-snap, restore-of-snap, and nearly unlimited snapshot quantities
•	Non-disruptive maintenance and upgrades
•	Quality of service (QoS) control per host/cluster for workload prioritization based on business application precedence
•	Decreased TCO through greater energy efficiency and capacity optimization including support for Small Computer System Interface (SCSI) UNMAP space reclamation
•	XIV family all-inclusive pricing model with no hidden costs for snapshot functionality, thin provisioning, asynchronous and synchronous data replication, advanced management, performance reporting, monitoring and alerting, as well as full support of Microsoft technologies including GeoClustering, Volume Shadow Copy Service (VSS), and Multipath I/O (MPIO)
•	Uncommonly simple, intuitive management tools that make complex administrative tasks seem effortless
For further information about IBM XIV Storage System Gen3, visit the following website:
ibm.com/systems/storage/disk/xiv/
System x servers
Also integral to the successful end-to-end virtualization formula and offering numerous storage
connectivity options while capable of driving a full variety of essential workload I/O profiles, IBM
System x® servers effectively deliver the Hyper-V cloud performance and reliability required for the most
critical business applications. A complete range of powerful, entry-level to large-scale, systems is available
to accommodate any budget or data center needs. Moreover, System x servers not only meet a full
spectrum of individual data center requirements but they are also intelligently designed to considerably
reduce costs and complexity while allowing companies to grow smoothly in unison with their increasing
and dynamic customer base.
For further information about the IBM System x server portfolio, visit the following website:
ibm.com/systems/x/index.html
Microsoft Failover Clustering overview
Historically, various forms of virtual servers have existed for a long time. Microsoft introduced their earliest
virtual server version as part of their shared-nothing server cluster design that has transformed
significantly over the years but still remains. The Microsoft cluster design is based on multiple physical
servers that behave as a single logical unit with each member node capable of hosting its own distinct
virtual servers that rely on the host and do not contain their own OS. To be more precise, a typical virtual
server consists of a cluster network name, IP address, one or more physical storage area network (SAN)
disks, and various application-specific resources that can be online on only a single member node at any
given point in time. In the event of a physical server or application failure, virtual servers can automatically
fail over to a healthy cluster member by transferring logical resource ownership with minimal downtime.
Since none of the virtual server cluster resources can be active to multiple cluster nodes at the same time,
they are considered to be part of a shared-nothing server cluster classification. Similarly, take into account
that traditionally, the shared-nothing concept often describes and emphasizes the Microsoft cluster disk
behavior.
It is also important to note that at a minimum, a cluster virtual server can consist of only two resources: a
network name and an IP address. So, it also helps to further examine cluster virtual servers from a
network perspective. That being said, each node contains multiple physical network adapters with different
traffic-based roles including a public adapter that is capable of binding multiple Network Basic Input/Output
System (NetBIOS) network names and IP addresses. Clients connect to their respective virtual server
applications using a node’s public adapters that not only bind the physical host’s network name and IP
address but also bind the same for one or more virtual servers.
Thus, the clients are completely unaware of the hardware-abstracted virtual name space they use to
access their application data. In other words, clients believe that they are connecting to physical server
resources and are completely unaware of the underlying virtual server structure. All that matters to the
client though, is that they have continuous access to their business-critical applications and that is where
Microsoft clusters excel.
In fact, original Microsoft cluster virtual servers were chiefly designed to provide application high
availability and have been quite proficient at this task. If a node that hosted a virtual server failed, clients
could still access their application data by using the same server name since they were automatically
redirected to a healthy node after a cluster failover occurred. It was the virtual server application’s flexibility
to move freely between physical nodes, through manual or automatic intervention, that made the cluster
feature stand out and when Microsoft eventually introduced their hypervisor, failover clustering became
even more powerful.
For more information about the Failover Clustering feature, visit the following website:
http://technet.microsoft.com/en-us/library/dn265972.aspx
Microsoft hypervisor overview
Originally, Microsoft introduced their hypervisor as a stand-alone virtual server product that eventually split
into another variant as the familiar Hyper-V role that is bundled with Windows Server releases. The
Microsoft hypervisor is software that runs in the OS of a parent physical machine or host that allows
multiple child or guest operating systems to share the hardware of the parent host. The guest OS runs in a
logical object known as a virtual machine that has exclusive access to its own set of virtual processors,
memory, network, storage, and similar resources that normally characterize a physical machine. Just to be
clear, it is common to differentiate the two concepts by using parent or host terminology for physical
machines versus child or guest terminology for virtual machines. In any case, even though each guest VM
has its own resources, do not forget that the parent hypervisor controls everything and is responsible for
ensuring unique VM resource allocations to prevent the VMs from interfering with each other.
So in the beginning, Microsoft introduced virtual servers and then eventually, virtual machines. As
revealed earlier, they are two distinct virtual technologies that provide quite different benefits. These
functional differences led to marketing nomenclature changes to avoid confusion because Microsoft’s first
stand-alone hypervisor product was literally called Microsoft Virtual Server and now is always referred to
as Hyper-V. To briefly recap, virtual servers are part of Microsoft clustering and virtual machines are part
of Hyper-V and when combined to form what is referred to as HA VMs, they offer greatly enhanced high
availability. For more information about Hyper-V, visit the following website:
http://technet.microsoft.com/en-us/library/dn282278.aspx
HA VM overview
In the purest sense, an HA VM is any VM that belongs to a Microsoft cluster. Since Microsoft Windows
Server 2012 clusters scale up to 64 nodes, administrators can potentially deploy enormous amounts of HA
VMs (up to 8,000) in a single cluster. The VMs must be created using the Failover Cluster Manager or
SCVMM by adding new or existing virtual hard drives to store the guest OS files. When using a one-to-one
mapping of VMs to drives, the virtual storage management burden significantly increases.
Subsequently, Microsoft released the Cluster Shared Volume (CSV) feature in Microsoft Windows Server
2008 R2 to minimize storage provisioning and management workflows while providing a few other
benefits. The primary benefit stems from the fact that each CSV is built from a single, large SAN LUN that
provides a common storage repository for numerous HA VMs. CSVs allow multiple cluster nodes to
concurrently read and write to the same New Technology File System (NTFS) volume using a somewhat
unorthodox clustered file system. While all nodes can simultaneously access the CSV SAN volume, a cluster member designated as the coordinator node, which owns the CSV resource, controls
and synchronizes file system metadata changes for all member nodes. Notably, CSV ownership can be
changed by moving a CSV to another node using the Failover Cluster Manager or PowerShell.
Nevertheless, one of the auxiliary CSV benefits is the ability to independently move HA VMs (that reside
on the same CSV) from node to node using a few different migration methods.
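For reference, the same CSV operations can be scripted with the FailoverClusters PowerShell module; the sketch below is illustrative only, and the cluster disk and node names are hypothetical placeholders.

    # Add an available clustered disk (built from a single large XIV LUN) as a CSV.
    Add-ClusterSharedVolume -Name 'Cluster Disk 2'

    # List the CSVs and their current coordinator (owner) nodes.
    Get-ClusterSharedVolume | Format-Table Name, OwnerNode, State

    # Move CSV ownership (the coordinator role) to another node if needed.
    Move-ClusterSharedVolume -Name 'Cluster Disk 2' -Node 'HVNode2'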
For more information about configuring VMs for high availability, visit the following website:
http://technet.microsoft.com/en-us/library/cc753787.aspx
HA VM quick migration
Also introduced in Windows Server 2008, the HA VM quick migration feature allows VMs to be moved or
migrated to other cluster nodes in a timely fashion as the name implies. This is accomplished by saving
the running state of the VM, including its memory to disk. Afterward, VM ownership is transferred to the
migration destination node, which restores the running state of the VM from disk. Quick migrations
typically occur faster than live migrations but VMs experience a small outage or downtime. The quick
migration time is directly related to the amount of memory assigned to the VM, along with the connection
speed between the server and the shared storage.
HA VM live migration
The HA VM live migration feature became available shortly after the quick migration feature in the
Windows Server 2008 R2 release and allows VMs to be migrated to other cluster nodes without any
outages or downtime. Technically, there is a very brief ‘blackout’ period but this occurs within the TCP
timeout window and is unnoticeable to VM clients. Live migrations are accomplished by transferring
memory pages and virtual hard disk (VHD) file access using a dedicated private network (live migration
network on all cluster nodes) to move a VM from a source cluster node to a target cluster node and finally
bringing the VM online to the target. Due to the online nature of live migrations, this process can take
longer than quick migrations but the VM is available during the entire move.
The following factors determine the speed of HA VM live migrations:
•	Network bandwidth between the source and target cluster hosts
•	Source and target physical host hardware resources
•	Performance load on the source and target physical hosts
•	Number of modified pages on the source VM: the greater the number of modified pages, the longer the VM remains in the migrating state
•	Storage bandwidth (Fibre Channel or iSCSI) between the physical hosts and shared storage
With the release of Windows Server 2012, live migrations are now possible between stand-alone Hyper-V
hosts. However, that does not eliminate the high availability benefits of Microsoft Failover Clusters.
Automatic HA VM failovers triggered by host or resource failures are still only possible with clustered
Hyper-V hosts.
Note: Live migrations do not require CSVs but using CSVs makes the operation smoother and can slightly
expedite the move. Unlike a physical disk resource, there is no need to unmount the disk from the source
host and re-mount it to the target host at the end of the migration because all hosts have access to the
CSV.
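Both migration types can also be started from the FailoverClusters PowerShell module, as in the minimal hedged sketch below; the VM role and node names are placeholders.

    # Quick migration: saves the VM state, transfers ownership, and restores it (brief outage).
    Move-ClusterVirtualMachineRole -Name 'SQLVM01' -Node 'HVNode2' -MigrationType Quick

    # Live migration: transfers memory pages over the live migration network with no outage.
    Move-ClusterVirtualMachineRole -Name 'SQLVM01' -Node 'HVNode3' -MigrationType Live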
Hyper-V storage live migration
Since the inception of HA VM live migrations, it was only a matter of time before Microsoft added the ability
to live migrate VM storage. Starting with Windows Server 2012, Hyper-V VHD files can now be non-disruptively relocated to different SAN or local volumes by using the Hyper-V Manager for stand-alone
VMs, Failover Cluster Manager for HA VMs, and the SCVMM console for either stand-alone or HA VMs.
During the migration copy process, the I/O continues to be directed to the source location. After the copy
finishes and the mirrored I/O becomes fully synchronized, the I/O is redirected only to the target VM file
and the original VM file is finally deleted. With this new feature, it is unnecessary to take a VM offline to
move its VHD file to different storage locations which simplifies VM administration and allows much
greater migration flexibility and high availability.
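A hedged PowerShell equivalent using the Hyper-V module is shown below; the VM name and destination path are placeholders for a volume or CSV backed by IBM XIV storage.

    # Move all of a running VM's files (virtual hard disks, configuration, snapshots)
    # to a new storage location without taking the VM offline.
    Move-VMStorage -VMName 'SQLVM01' -DestinationStoragePath 'C:\ClusterStorage\Volume2\SQLVM01'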
Note: When managing Hyper-V VMs, Microsoft best practices suggest using the primary application
interface. For example, if an administrator is using SCVMM, use its console to manipulate a VM or modify
its settings or properties. The same applies to HA VM environments that do not use SCVMM. In this case,
use the Failover Cluster Manager to control or modify HA VMs. Apply the same logic for stand-alone
deployments and use the Microsoft Hyper-V Manager to change any VM settings and so forth. Similar to
any rule or suggestion, there are exceptions and some VM setting modifications require the Hyper-V
Manager. With the latest Microsoft OS and application releases, the VM status and properties are much
quicker to synchronize across all application interfaces but that has not always been the case with older
software versions.
Guest clusters
Up to this point, it is clear that combining the traditional Microsoft Failover Cluster feature with Hyper-V has
many benefits but still lacks crucial functionality. If a host encounters a catastrophic failure or detects a
problem with a VM, the cluster can trigger a VM failover to another healthy node. So there are two layers
of system-level high availability. In other words, if a virtual or physical machine (different system type) has
a problem, the clustering components take the appropriate action to fail over the VM to ensure that it
remains online. However, if an application that is running inside a VM suffers a catastrophic failure,
traditional failover clusters are unaware of the application error state or termination. So, the VM application
can be offline while the VM remains online and the cluster behaves as though all resources are healthy
and does not take any corrective measure. As a result, this type of a VM application outage can negatively
impact service level agreements or true business production. In this case, the VM is considered to be
down because clients cannot access their application data.
To address this HA VM vulnerability, Microsoft introduced guest clustering that is based on two or more
VMs forming a cluster. Microsoft actually recommends running VM guest clusters on physical host clusters
to provide even greater resiliency but they also support guest clusters that use separate stand-alone
Hyper-V hosts. In any case, this enhanced feature allows administrators to run clustered applications or
roles within VMs. So not only are guest clusters capable of proactively monitoring clustered applications or
roles, but they also provide application mobility, protection from parent host failures, and VM mobility.
Fundamentally, guest clusters are similar to physical host clusters and share many of the same support
constraints. Similar to physical host clusters, guest clusters require shared storage and, until recently, guest
clusters only supported iSCSI shared storage options.
However, Windows Server 2012 R2 now supports the following guest cluster shared storage options:
•	iSCSI
•	Virtual Fibre Channel
•	Shared virtual hard disks (a shared VHDX sketch follows this list)
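For the shared virtual hard disk option, the following hedged sketch attaches one VHDX file to two guest cluster VMs using the Windows Server 2012 R2 shared VHDX capability; the file path and VM names are placeholders, and the VHDX is assumed to reside on a CSV.

    # Create a fixed-size VHDX on a CSV for the guest cluster shared storage.
    New-VHD -Path 'C:\ClusterStorage\Volume1\Shared\GuestClusterDisk.vhdx' -SizeBytes 100GB -Fixed

    # Attach the same VHDX to both guest cluster VMs with persistent reservation support.
    Add-VMHardDiskDrive -VMName 'GuestNode1' -ControllerType SCSI -Path 'C:\ClusterStorage\Volume1\Shared\GuestClusterDisk.vhdx' -SupportPersistentReservations
    Add-VMHardDiskDrive -VMName 'GuestNode2' -ControllerType SCSI -Path 'C:\ClusterStorage\Volume1\Shared\GuestClusterDisk.vhdx' -SupportPersistentReservations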
For further information about Microsoft guest clustering, visit the following website:
http://technet.microsoft.com/en-us/library/dn440540.aspx
Hyper-V storage options
The list of available Hyper-V storage options continues to grow with the latest Microsoft software releases.
Windows Server 2012 now contains a vast range of robust Hyper-V storage options to make
administration much easier. Choosing which options to pursue depends on business requirements,
existing or additional resource demands, budget constraints, and individual preferences. So there is no
single correct method to accomplish cloud storage goals; there are several. Many Hyper-V storage options were introduced in earlier releases and have since gained expanded functionality, while many others are new to Windows Server 2012 and Windows Server 2012 R2.
The following Hyper-V storage-specific features and enhancements are now available with the latest Microsoft Windows Server 2012 R2 and IBM XIV 11.4x releases:
•	Hosts and VMs can use thick and thin provisioned SAN volumes
•	Space reclamation is available using SCSI UNMAP
•	Hosts and guests can use multipath I/O for storage connectivity
•	Guest virtual Fibre Channel technology using host 8 Gb FC
•	1 Gb or 10 Gb iSCSI storage connectivity
•	FCoE storage connectivity
•	VM VHD and enhanced VHDX file formats
•	Shared VHDX files for Microsoft guest clusters
•	Online virtual hard disk resizing
•	Storage quality of service (QoS), illustrated in the sketch after this list
•	Pass-through disks
•	IBM XIV online volume migration using Hyper-Scale Mobility
•	CSV cache
•	Resource metering
•	Storage automation with Microsoft SCVMM 2012
•	Host and guest VSS snapshots
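As one example from the list, the Windows Server 2012 R2 storage QoS setting can cap or reserve IOPS per virtual hard disk; the sketch below is illustrative, and the VM name, controller location, and limits are assumptions.

    # Cap a virtual hard disk at 2,000 IOPS and reserve a minimum of 100 IOPS
    # (Hyper-V normalizes IOPS in 8 KB increments).
    Set-VMHardDiskDrive -VMName 'SQLVM01' -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 -MaximumIOPS 2000 -MinimumIOPS 100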
For further information about Microsoft Hyper-V, visit the following website:
http://technet.microsoft.com/en-us/library/hh831531.aspx
For further information about Hyper-V scalability in Windows Server 2012, visit the following website:
http://technet.microsoft.com/en-us/library/jj680093.aspx
Hyper-V with IBM XIV considerations
Before deploying Microsoft Hyper-V solutions with IBM XIV Storage System Gen3, there are numerous
suggestions to consider. First and foremost, all system hardware should be updated to the latest
supported firmware and drivers. Additionally, the Hyper-V hosts should run Windows Update to ensure the
latest Hyper-V, Microsoft Failover Cluster, Microsoft application and other critical OS hotfixes, including
security, are up-to-date. The latest Windows Server 2012 R2 Hyper-V hotfixes are located at the following
website:
http://social.technet.microsoft.com/wiki/contents/articles/20885.hyper-v-update-list-for-windows-server-2012-r2.aspx
Recommended hotfixes and updates for Windows Server 2012 R2 failover clusters are also located at the
following website:
http://support.microsoft.com/kb/2920151
IBM-specific solution support should be reviewed at the IBM System Storage Interoperation Center
(SSIC). The SSIC allows customers to validate interoperability for the most popular IBM multi-vendor
hardware and software combinations. The SSIC support matrices are located at the following website:
ibm.com/systems/support/storage/ssic/interoperability.wss
When configuring Hyper-V hosts, Microsoft recommends limiting the roles and features to just the Hyper-V
role and the Failover Clustering and Multipath I/O features. Thus, Hyper-V hosts should be dedicated to running only
the necessary core stand-alone or clustered hypervisor components to help reduce non-essential resource
usage, minimize solution complexity, and ease potential system-based troubleshooting. Instead, consider
installing extraneous roles, features, and necessary business applications on the VMs.
Furthermore, when configuring VMs, the Hyper-V host system drive must not be used for the default virtual
hard disk location. This helps to avoid host system disk latency and free space depletion issues.
Preferably, select a non-system IBM XIV or other volume path that is more appropriate to individual stand-alone or clustered Hyper-V configurations.
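As a hedged illustration of these host preparation suggestions, the following PowerShell sketch installs only the core components and moves the default VM paths off the system drive; the E: drive letter and folder names are hypothetical placeholders.

    # Install only the core virtualization components on the Hyper-V host
    # (the Hyper-V role plus the Failover Clustering and Multipath I/O features).
    Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools -Restart

    # Point the default VM and virtual hard disk locations to a non-system volume
    # (for example, an IBM XIV volume mounted as E:).
    Set-VMHost -VirtualMachinePath 'E:\Hyper-V' -VirtualHardDiskPath 'E:\Hyper-V\Virtual Hard Disks'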
Also to avoid similar performance impacts, Microsoft recommends only running antivirus software in the
guest OS rather than in the Hyper-V host OS. If this is impractical due to company security policies,
ensure that the Hyper-V host VM file directories are excluded from the antivirus scans to help prevent
performance degradation and stability issues. For further Hyper-V antivirus exclusion details, visit the
following website:
http://social.technet.microsoft.com/wiki/contents/articles/2179.hyper-v-anti-virus-exclusions-for-hyper-v-hosts.aspx
Even though this is a partial list of Hyper-V guidelines and individual environments might require
further considerations, the aforementioned suggestions can help to alleviate many of the common
virtualization pitfalls. For a complete list of Hyper-V prerequisites and detailed best practices, visit the
Microsoft TechNet website for countless excellent resources such as the Hyper-V pre-deployment and
configuration guide at:
http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx
Storage planning
A typical Hyper-V deployment requires planning to avoid common performance and capacity penalties as
a virtual data center expands. Fortunately, the underlying IBM XIV storage platform is one of the most
flexible and forgiving SAN solutions in the market. The IBM XIV self-tuning and user-friendly management
features make it ideal for a variety of Microsoft virtualized workloads using the most popular host and
guest connectivity options. Nevertheless, the unique IBM XIV architecture still has non-rigid storage sizing
guidelines to consider.
IBM XIV storage sizing for Microsoft Hyper-V
As previously mentioned, the proprietary nature of the IBM XIV RAID-X design means that each volume spans all disks in the system. That being the case, provisioning an extremely large number of small LUNs is not
advisable. However, the exact number and size of LUNs that can negatively impact storage performance
depends on the type and combination of workload profiles, not to mention various other factors. So it is
difficult to quantify and varies with each environment. Keeping that in mind, the opposite is true as the IBM
XIV distributed architecture performs best when a smaller number of large LUNs or volumes are used for
cluster or stand-alone Hyper-V configurations. Consequently and not by coincidence, users benefit from
fewer volumes to manage, which decreases storage workflows. However, additional considerations are
required for cluster Hyper-V implementations when compared to stand-alone Hyper-V implementations.
As suggested, both cluster and stand-alone Hyper-V hosts should use large IBM XIV volumes for guest
VMs. However, volume allocation and functional definition determine subtle yet unique considerations
between the two types of Hyper-V implementations. For HA VMs, it is recommended to use large,
separate CSVs (each consisting of a single LUN mapped to all cluster hosts) that contain only VM OS
files. Spread the CSVs across multiple cluster nodes to balance the virtual workloads across servers.
Hence, with multiple active cluster nodes, administrators can maximize hardware resource utilization.
However, each physical server must contain sufficient resources to handle both planned and unplanned
VM migrations due to maintenance and one or more cluster node failures.
When preparing for potential failures, determine a comfortable amount of granularity when calculating the
number of VMs to assign to each CSV to protect against unplanned file system corruption or other
disastrous events. Placing no more than 10 to 20 VMs on a single 2 TB or greater CSV can help
accelerate disaster recoveries even though Microsoft Windows Server 2012 supports up to 125 VMs per
CSV (loosely based on 8000 VMs / 64 cluster nodes assuming 1 CSV on each node). Similarly, consider
using separate large CSVs for dedicated application data and application log files. While not much of a
performance consideration with the IBM XIV Storage System Gen3, this measure provides an additional
fail-safe and likely expedites disaster recovery.
Furthermore, from a Hyper-V storage performance perspective, there are several considerations. VMs with
virtual FC adapters directly connected to IBM XIV volumes perform best. Additionally, pass-through disks,
while not as secure, yield similar performance benefits but are no longer as popular due to the introduction
of the virtual FC adapter. Both are conceptually similar to physical node configurations and simplify VM
application backups by taking advantage of hardware-based VSS snapshots. Basically, any VM direct
storage connection method is superior when it comes to performance and that includes the IBM XIV 10 Gb
iSCSI option. However, some prefer to place VMs on CSV fixed virtual hard disks, despite the slightly lower performance, because of the easier administration and greater Microsoft functionality and flexibility. It
is really just a matter of choosing the appropriate virtualization storage options that work best for individual
environments and noting the previous IBM XIV considerations.
Finally, most of the HA VM storage sizing considerations also apply to Hyper-V stand-alone hosts and
their VMs. However, IBM XIV volumes are mapped to only a single host or VM. Moreover, the volumes require NTFS formatting and nothing further, aside from Windows disk partition alignment considerations, which are application-dependent.
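As a brief, hedged sketch of preparing such a volume on a stand-alone host, the following PowerShell commands bring a newly mapped IBM XIV LUN online and format it with NTFS; the disk number and the 64 KB allocation unit size are assumptions that should be adjusted to the actual disk layout and application guidance.

    # List disks that have not been initialized yet to locate the newly mapped XIV LUN.
    Get-Disk | Where-Object PartitionStyle -eq 'RAW'

    # Initialize, partition, and format the volume (disk number 2 is assumed here).
    Initialize-Disk -Number 2 -PartitionStyle GPT
    New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel 'XIV-VM-Storage'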
Fabric configuration including SAN zoning
No matter whether clustered or stand-alone, both Microsoft Hyper-V hosts and VMs now have the ability to
balance workloads using multiple I/O paths to the storage array. Moreover, multiple paths distributed
across multiple interface modules within the XIV Grid architecture not only balance the host workload I/O
but also yield better performance. Additionally, using two or more FC host bus adapters (HBAs) per server
ensures fault tolerance in the event of path failures at the HBA, SAN switch, or IBM XIV interface module
level.
To help mitigate path failures while maintaining optimal performance, the IBM XIV Storage System Gen3
contains six host interface modules. Host connectivity is established with the storage array using IBM XIV
interface modules that contain two dual-port 8Gb FC HBAs. This IBM XIV distributed architecture not only
balances host workloads but as a reminder, it also helps to preserve data integrity.
To maximize Hyper-V host and VM high availability and performance, the following guidelines should be implemented:
•	Use two or more redundant SAN switches.
•	Use two or more dual- or quad-port HBAs per Hyper-V server.
•	Use multiple paths up to a maximum of four VM virtual FC adapters.
•	Zone each Hyper-V VM or host HBA port to no more than three alternate IBM XIV interface modules (refer to Figure 1).
•	Set the Hyper-V HBA maximum queue depth to 128.
•	Use only two of the four FC ports on each IBM XIV interface module for host connectivity because two are reserved for data replication.
•	Because each interface module FC port concurrently supports up to 1,400 I/Os, calculate the preferred maximum queue depth for each host by dividing 1,400 by the total number of hosts defined in the zone (see the sketch after Figure 1).
•	For servers with more than four HBA ports, reduce the number of paths by zoning to fewer XIV interface modules, but continue to alternate and balance paths for all hosts.
•	Use MPIO software to balance I/O across all paths per IBM XIV volume.
Figure 1: XIV GUI view of Microsoft Hyper-V host and VM zoned to alternate IBM XIV interface module FC ports
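To make the queue depth guideline concrete, the short PowerShell sketch below divides the 1,400 concurrent I/Os supported by an XIV interface module FC port by an assumed number of hosts zoned to that port; the host count is a placeholder, and the result is capped at the recommended Hyper-V HBA maximum of 128.

    # Assumed number of Hyper-V hosts zoned to a single XIV interface module FC port.
    $hostsPerPort = 10

    # Preferred per-host queue depth: 1,400 divided by the host count, capped at 128.
    $queueDepth = [math]::Min(128, [math]::Floor(1400 / $hostsPerPort))
    "Suggested per-host maximum queue depth: $queueDepth"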
Multipathing
If a Hyper-V host or VM directly connects to the IBM XIV using FC or iSCSI, the IBM XIV Host Attachment
Kit (HAK) for Windows must be installed to manage multipath I/O traffic. The IBM XIV HAK is supported both on the Hyper-V host (assuming it uses SAN volumes for VM files) and within any VM that has direct access to SAN volumes. To further clarify, the IBM XIV HAK does not need to be installed in the
guest OS of a VM that only uses VHD or VHDX files for its volumes. Essentially, the IBM XIV HAK enables
the Microsoft Windows Server MPIO feature and the default round-robin load-balancing policy is
recommended for optimal storage performance. After the Hyper-V systems have multipathing software in
place, the IBM XIV volumes can be provisioned to the hosts or VMs.
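The IBM XIV HAK normally enables and configures these settings during installation. Purely as a hedged verification sketch, the native Windows MPIO cmdlets below enable the feature and set the default load-balancing policy to round robin.

    # Enable the Windows Server Multipath I/O feature (a restart might be required).
    Install-WindowsFeature -Name Multipath-IO

    # Set the default Microsoft DSM load-balancing policy to round robin,
    # the policy recommended for IBM XIV volumes, and confirm the setting.
    Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
    Get-MSDSMGlobalDefaultLoadBalancePolicy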
For further information about IBM XIV HAK for Windows including the software download location, visit the
following website:
http://pic.dhe.ibm.com/infocenter/strhosts/ic/index.jsp?topic=%2Fcom.ibm.help.strghosts.doc%2Fhakhomepage.html
IBM XIV volumes
IBM XIV volumes can be thick or thin provisioned for both Microsoft cluster and stand-alone Hyper-V
implementations. However, it is recommended to use thin provisioning due to the latest IBM XIV and
Microsoft Windows Server 2012 thin provisioning improvements (at the time of this publication). Just make
sure to closely monitor the storage array used capacity to prevent running out of free space. Before
moving forward though, it helps to explain how thin provisioning works in the IBM XIV system. For the IBM
XIV system, a volume inherits the storage pool attributes and thus it is necessary to define thick or thin
storage pools. Both IBM XIV regular (thick) and thin provisioned pools are similarly managed and there are
no major physical distinctions between the pool types. Pool types are mainly distinguished by minor
metadata differences. Accordingly, pools and their corresponding volumes are created instantaneously
because only IBM XIV metadata tables are modified, and volumes can be moved instantaneously from a thick pool to a thin pool or from a thin pool to a thick pool. Due to this inherent architectural design, IBM XIV provides
thin-like provisioning for all of its virtual storage. This is one of the main reasons why an increasing number
of IBM XIV customers prefer to use thin provisioning for their hypervisor storage.
Thin provisioning
Reflecting on the past, it is easy to see how thin provisioning was introduced to enhance storage efficiency
and to help reduce storage sprawl. Thin provisioning allows administrators to allocate logical capacity that
is greater than a storage system’s total physical capacity. It does so by using on-demand block allocation
of data based on host writes versus allocating all of the blocks during the initial volume creation. As a
result of this on-demand approach to allocating actual physical storage capacity, customers can realize
significant economic benefits by over-provisioning their storage. This is due to not having to commit
considerable storage capacity up front (as with thick provisioning) to users or business groups that often
consume only a fraction of the allocated physical capacity. Consequently, multi-level cost reductions are
achieved by diminished storage capacity requirements that result in smaller data center footprints that
require less administrative effort and power and cooling. However, for the majority of administrators, thin
provisioning still lacked maturity due to space reclamation deficiencies. In short,
administrators wanted to be able to reclaim storage array dead space without having to use limited or
primitive utilities.
To better comprehend dead space reclamation, it helps to examine the host front-end and the storage
back-end. After a host writes to a thin-provisioned volume, physical capacity is allocated to the host file
system. Unfortunately, if the host deletes the file, only the host file system frees up that space. The
physical capacity of the storage system remains unchanged. In other words, the storage system does not
free up the capacity from the deleted host file, which is commonly referred to as dead space. Obviously,
this is not the most effective method for handling back-end block-level storage. Ideally, when a host
deletes files, that space is not only reclaimed by the host file system but also the back-end storage
system.
Space reclamation using SCSI UNMAP
To address this thin provisioning limitation, the T10 Technical Committee established the T10 SCSI Block
Command 3 (SBC3) specification which defines the UNMAP command for a diverse spectrum of storage
devices including hard disk drives (HDDs) and numerous other storage media. With IBM XIV Storage
System Gen3 code level 11.2.x and later versions, using SCSI UNMAP, storage administrators can now
reclaim host file system space and back-end storage dead space typically within 30 seconds of a host file
deletion. However, not only does SCSI UNMAP require T10 SBC3 compliant SCSI hardware such as IBM
XIV Storage System Gen3s at code level 11.2.x and later versions, it also requires necessary software
application programming interfaces (APIs) such as those now included in Windows Server 2012 or
Windows 8. That being said, previous Windows OS releases do not support the necessary APIs.
From a Microsoft Windows Server 2012 Hyper-V host perspective, SCSI UNMAP behavior is straightforward at both the operating system and XIV storage levels. Within Microsoft Windows on the parent host, when a file is written to an NTFS volume, the XIV GUI or XCLI interface immediately reflects an increase in the XIV
volume used capacity for the consumed space. When a file is permanently deleted on that NTFS volume,
the XIV GUI or XCLI used capacity for the XIV volume usually decreases within 30 seconds. Essentially, if
a Hyper-V host file (that is, VHD, VHDX, ISO, and so on) is permanently removed from the host NTFS
volume, the host file system and XIV volume used capacity decreases and free space increases. This
includes VM storage migrations where the VM files no longer reside on the source host volume. In
summary, for Hyper-V host-based file deletions, space reclamation occurs at the following levels:
•	The parent host NTFS file system frees up space when VM files or any other host files are deleted or moved to another volume.
•	The XIV volume used capacity decreases.
•	The XIV pool used capacity decreases, but only in 17 GB increments.
Conceptually, Hyper-V guest-based file deletions are similar to the parent host case, but space reclamation occurs at four different levels:
•	The guest NTFS volume frees up space by deleting files.
•	The XIV volume used capacity decreases.
•	The XIV pool used capacity decreases, but only in 17 GB increments.
•	The host VHDX file size decreases.
The first three Hyper-V guest processes are dynamic and near-instantaneous while the latter requires
manual user intervention beyond the scope of this paper. For further details, refer to the following website:
ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102254
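As a hedged sketch of how a Hyper-V host can confirm and exercise this behavior, the commands below check that Windows delete notifications (SCSI UNMAP) are enabled and optionally force a retrim pass on a host volume; the drive letter is a placeholder.

    # A value of 0 means delete notifications (SCSI UNMAP/TRIM) are sent to the storage.
    fsutil behavior query DisableDeleteNotify

    # Optionally force a retrim pass on a volume backed by an XIV LUN,
    # for example after bulk file deletions, to reclaim free space on the array.
    Optimize-Volume -DriveLetter E -ReTrim -Verbose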
Virtual machine storage
Not only do administrators have to determine the type of IBM XIV Gen3 volume during planning but also
which Microsoft Hyper-V storage connectivity options to select. As mentioned earlier, IBM XIV volumes
can be mapped directly to the Hyper-V host or guest, and quite often, a combination of the two. For IBM
XIV volumes mapped to the parent Hyper-V host, virtual hard disks (older VHDs or preferably newer
VHDXs) are typically used to store the VM OS files. Older host pass-through disk methods that map SAN
volumes to the parent host and transfer the disk ownership to the guest [same as VMware raw device
mappings (RDMs)] are now being substituted by advanced VM direct-access storage technologies such as
virtual FC.
VM direct-access storage technologies (that is, iSCSI and pass-through disks) have been available for
some time but continue to evolve with Microsoft Windows Server 2012. Early VM direct-access storage
technology started with iSCSI, which is comparable between virtual and physical machines.
Fundamentally, VM direct-access to the storage occurs through the underlying physical host components
such as the network adapters, FC HBAs, and so on by presenting extra hardware-based I/O paths to the
Windows software virtual hard disk stack. Thus, it is the physical hardware bandwidth and throughput
limitations that determine how many VMs can run on any single server and what type of performance each
VM can expect.
As a reminder, whenever a VM business-critical application requires the utmost performance, it is best to
use direct-access storage whether it is virtual FC, iSCSI (the 10Gb iSCSI option is preferred with the IBM
XIV versus the older standard 1Gb iSCSI), or pass-through disks. The recommendations are in that order
for most IBM XIV customers because 10Gb iSCSI has not been offered for very long. It is also
suggested to use separate VM direct-access application data and log volumes that are sized according to
traditional physical server application methods. However, it is prudent to test VM application data and log
VHDX files in comparison to VM direct-access storage before production deployments because
performance is often not the only determining factor. Regardless, for performance-sensitive VM
applications, the new direct-access FC option is usually best, especially for IT departments that currently
rely on or prefer FC host connectivity.
Fibre Channel
As pointed out in the previous section, FC connectivity can now take place at both the Hyper-V host and
guest levels. Hyper-V hosts continue to use FC SAN volumes for VM file repositories that consist of VHD,
VHDX, ISO and other common configuration files. When using Hyper-V clusters, it is recommended to use
CSVs to store the guest files. For VM direct-access FC connections, Microsoft’s virtual FC technology is
required. Beginning with Microsoft Windows Server 2012, VMs can take advantage of virtual FC SAN
switches and FC HBAs by using N-Port ID Virtualization (NPIV) compliant SAN switches and HBAs. NPIV
is a FC feature that allows multiple FC initiators to bind to a single physical port. For further information
about Hyper-V virtual FC, visit the following website:
http://technet.microsoft.com/en-us/library/hh831413.aspx
Hyper-V virtual FC GUI-based configuration
On the surface, virtual FC is rather straightforward. The technology allows virtualized workloads to
profit from existing FC physical infrastructures. For the most part, it is as simple as adding a virtual
SAN switch and one or more virtual FC adapters within the VM settings. This allows the guest to
connect directly to the FC SAN storage and maximize an organization’s return on investment (ROI) for
existing FC data center assets including IBM XIV Storage System Gen3. As referenced earlier, virtual
FC also allows guest clusters to use FC shared storage all while providing many familiar and valuable
conventional FC SAN benefits such as IBM XIV VSS snapshot capabilities. Additionally, VMs profit
from the same FC MPIO benefits as their physical machine counterparts. All of these concepts and
benefits are easy to comprehend with the exception of virtual FC considerations relevant to VM
migrations.
The most difficult challenge for administrators who are new to virtual FC is to correctly grasp the multi-level virtual port set configuration that is necessary for VMs to migrate properly. Specifically, if the virtual FC is
misconfigured at any level including the guest, SAN switch, or IBM XIV, VM live migrations fail. In
order to fully appreciate this new concept, it helps to examine the guest virtual port address sets and
how they affect VM migrations especially with failover clusters.
When first adding a VM Fibre Channel adapter, notice that there are two virtual port address sets each
with the same world wide node name (WWNN) and only one is active at a time while a VM is in a
normal online state. If the VM is offline, the virtual ports are inactive (or deleted for VMs with virtual FC
adapters that are taken offline) and cannot be seen at the parent host or SAN switch level. By default,
port address set A is used for failover cluster VM quick migrations that result in a slight outage.
However, the VM can switch to port set B if a VM live migration is followed by a quick migration. The
key point is that VM quick migrations use a single port set that result in downtime during the disk
ownership transfer while VM live migrations use both port sets.
Failover cluster or stand-alone Hyper-V live migrations require both port set A and port set B in order
for the VM to remain online during the move. One port set binds to a physical HBA port of the source
host and the other port set binds to a physical HBA port of the destination host. There is a brief
moment during live migrations when both port sets are active during a failover transition or disk
ownership transfer that allows the SAN storage to remain online during the entire VM migration
process. Basically, Hyper-V first ensures that the storage is available to the destination host before the
live migration can complete. Interestingly, simultaneous active port sets during VM live migrations are
noticeable in the XIV GUI host connectivity view if the VMs have large resource configurations (that is,
large memory and state transfer; both of which impact live migration durations as well as the
underlying network speed). For small VM configurations, the failover cluster live migrations are
normally too fast to notice the port set transition state. In the latter case, the VM appears to quickly
switch from port set A to port set B or from port set B to port set A.
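To observe the two address sets described above, the Hyper-V PowerShell module exposes them on the virtual FC adapter object, as in the minimal hedged sketch below with a placeholder VM name.

    # Show both WWNN/WWPN address sets (A and B) for each virtual FC adapter of a VM.
    Get-VMFibreChannelHba -VMName 'SQLVM01' |
        Format-List SanName, WorldWideNodeNameSetA, WorldWidePortNameSetA, WorldWideNodeNameSetB, WorldWidePortNameSetB

Both address sets should be zoned on the SAN switches and defined on the IBM XIV host object so that live migrations do not fail.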
Note: IBM XIV Storage System Gen3 NPIV support requires SAN switches that also support the
virtualization technology. NPIV is not supported for point-to-point FC connections between the storage
array and host.
Since Hyper-V virtual FC is a newer feature, detailed configuration steps are shared to help
administrators understand the various VM migration behaviors. The first GUI-based configuration
steps are primarily intended for conceptual purposes to solidify administrative comprehension. They
are followed by command-line interface (CLI)-based configuration guidelines that expedite the virtual
FC implementation process. Administrators who already grasp the virtual FC port set concepts can
proceed with the faster Hyper-V virtual FC CLI-based configuration steps.
Before proceeding with the Hyper-V virtual FC GUI-based steps, make sure that the following prerequisites are met:
•	The Hyper-V role needs to be installed on a Windows Server 2012 or later host. The physical server requires processor hardware virtualization support.
•	The physical server requires one or more FC HBAs with updated firmware and drivers that support virtual FC. The virtual FC HBA ports must be set up in an FC topology that supports NPIV.
•	An NPIV-enabled SAN is required.
•	Virtual FC adapters are only supported with Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, and Windows Server 2012 R2 guests.
•	Virtual FC LUNs cannot be used as boot devices.
The following configuration steps apply to existing, online HA VMs that belong to a Microsoft failover
cluster. However, the same concepts and most of the steps apply to stand-alone Hyper-V
implementations.
1. Open the Hyper-V Manager for all failover cluster hosts and in the right pane, click Virtual SAN Manager.
2. Click Create.
3. Enter a name for the virtual SAN switch and make sure to standardize this name across all cluster hosts. Otherwise, HA VMs will not be able to move or migrate between cluster members.
4. Select the check box of the desired physical HBA port. The screen capture in Figure 2 was from a physical Hyper-V host with a dual-port FC HBA. Similar to physical hosts, the VM is configured with two virtual FC SANs for redundancy that each map to a single HBA port. Thus, if one virtual SAN switch or physical HBA port fails, the VMs can still access the storage.
Figure 2: Virtual SAN Manager – first switch mapping to a single FC HBA port
5. Click Apply and add as many new Fibre Channel SAN switches as necessary. Figure 3 illustrates the addition of a second, redundant virtual SAN. Click OK to save the virtual SAN configuration. (A PowerShell alternative for creating both virtual SANs is sketched after Figure 3.)
Figure 3: Virtual SAN Manager – second switch mapping to a single FC HBA port
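As an alternative to the Virtual SAN Manager dialogs in steps 1 through 5, the two redundant virtual SANs can be created with Windows PowerShell. The following sketch assumes the host exposes exactly two Fibre Channel initiator ports and uses illustrative SAN names (vSAN-A and vSAN-B); run it on every cluster host with identical names.
# Find the physical FC initiator ports on the parent host
$hbaPorts = Get-InitiatorPort | Where-Object { $_.ConnectionType -eq "Fibre Channel" }
# Create one virtual SAN per physical HBA port for redundancy
New-VMSan -Name "vSAN-A" -HostBusAdapter $hbaPorts[0]
New-VMSan -Name "vSAN-B" -HostBusAdapter $hbaPorts[1]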
6. Next, from the Failover Cluster Manager on the host with the active VM, shut down the VM to add virtual Fibre Channel adapters. It must be powered off to add the virtual devices.
7. In the center pane of the Failover Cluster Manager, make sure that the VM is highlighted and click Settings in the lower-right pane.
8. In the VM settings, make sure that the Add Hardware option in the left pane and Fibre Channel Adapter in the upper-right pane are highlighted and click Add.
9. In the upper-right pane, select the previously created virtual SAN and click Apply. Figure 4 illustrates the VM settings for the first virtual Fibre Channel adapter.
Figure 4: Windows Failover Cluster Manager – VM settings for the first Fibre Channel adapter
10. Apply the settings for the second Fibre Channel adapter. Figure 5 illustrates the VM settings for the second virtual Fibre Channel adapter. (A PowerShell alternative for adding both adapters is sketched after Figure 5.)
Figure 5: Windows Failover Cluster Manager VM settings for second Fibre Channel adapter
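Steps 6 through 10 can also be scripted. The following Windows PowerShell sketch uses the illustrative VM name vm01 and the virtual SAN names from the previous sketch; the VM must be powered off before the adapters are added.
Stop-VM -Name "vm01"
# Add one virtual FC adapter per virtual SAN for redundancy
Add-VMFibreChannelAdapter -VMName "vm01" -SanName "vSAN-A"
Add-VMFibreChannelAdapter -VMName "vm01" -SanName "vSAN-B"
Start-VM -Name "vm01"
# Confirm the generated port address sets
Get-VMFibreChannelAdapter -VMName "vm01" |
    Select-Object SanName, WorldWidePortNameSetA, WorldWidePortNameSetB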
11. After the virtual Fibre Channel adapters are added and the VM is online, use the parent HBA
software to confirm the presence of the virtual ports (refer to Figure 6).
Figure 6: Parent Brocade Host Connectivity Manager view of online VM virtual Fibre Channel adapter port
12. From the SAN switch management GUI, add the SAN zones with the newly discovered NPIV virtual ports so that the VM can communicate with the IBM XIV Storage System (note the physical port numbers so that you can easily return to them during the later port swap steps). Refer to Figure 7, where one SAN switch was used for testing rather than the two suggested for production solutions.
Figure 7: Brocade Zone Administration view of NPIV ports of an online VM
13. Additionally, administrators can confirm the presence of the Microsoft Hyper-V Fibre Channel
HBAs within the Device Manager of the VM (refer to Figure 8).
Figure 8: Guest Device Manager view of Microsoft Hyper-V Fibre Channel HBAs
14. From the VM guest OS, run the IBM XIV Host Attachment Kit to discover the storage system and
to add the host or VM and its port set A virtual ports. The first part of the installation is a basic GUI
wizard while the second part uses the command line. During the CLI portion, the wizard checks for
and enables the Windows built-in MPIO feature if it is not present. Restart the VM as instructed
(refer to Figure 9).
Figure 9: First portion of the IBM XIV Host Attachment Kit initial command-line installation steps
15. After the VM restarts, re-launch the IBM XIV HAK command-line wizard to complete the storage
system discovery. The wizard also automatically adds the host / VM and the corresponding port
set A virtual Fibre Channel adapter ports to the IBM XIV Storage System (refer to Figure 10).
Figure 10: Second portion of the IBM XIV HAK initial command-line installation steps
16. From the parent host, use the Failover Cluster Manager or PowerShell to confirm that quick
migrations work properly by moving the VM to another cluster member.
Reminder: Do not attempt VM live migrations at this stage. They fail because only the port set A WWPNs have been zoned and defined on the IBM XIV system, so the destination host virtual port cannot yet access the VM storage.
17. From the parent host, use the Failover Cluster Manager or PowerShell to move the VM back to the original host by performing a quick migration. (A PowerShell sketch of these migration checks follows this step.)
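The quick migration checks in steps 16 and 17 can be performed with the following Windows PowerShell sketch; the clustered role name vm01 and the node names hv-node1 and hv-node2 are placeholders for your own environment. After the port swap and zoning in steps 19 through 22, the same cmdlet with -MigrationType Live verifies live migrations (step 23).
# Quick migration to another cluster member and back (a brief outage is expected)
Move-ClusterVirtualMachineRole -Name "vm01" -Node "hv-node2" -MigrationType Quick
Move-ClusterVirtualMachineRole -Name "vm01" -Node "hv-node1" -MigrationType Quick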
18. Shut down the VM.
19. In the Failover Cluster Manager VM settings, swap the port set worldwide port names (WWPNs) for all virtual Fibre Channel adapters, which in this test configuration is just a matter of exchanging the last character of each WWPN. The original port set B address becomes the active port this time because it is now defined as the port set A WWPN. (A PowerShell alternative for the swap is sketched after this step.)
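The WWPN swap in step 19 can also be scripted. The following Windows PowerShell sketch simply exchanges the set A and set B WWPNs of every virtual FC adapter on the powered-off VM; vm01 is again an illustrative name.
foreach ($fc in Get-VMFibreChannelAdapter -VMName "vm01") {
    # Exchange the port address sets so the former set B WWPN becomes the active (set A) address
    $fc | Set-VMFibreChannelAdapter `
        -WorldWidePortNameSetA $fc.WorldWidePortNameSetB `
        -WorldWidePortNameSetB $fc.WorldWidePortNameSetA
}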
20. Start the VM.
21. From the SAN switch management GUI, add SAN zones for the newly discovered NPIV virtual
ports so the VM can communicate with the IBM XIV Storage System. It is easier to rename
previous aliases and zones to port set B and create new ones for port set A.
22. From the VM guest OS, run the IBM XIV HAK to discover the storage system and to add the host /
VM and its virtual ports associated with the swapped port set WWPNs. The CLI wizard notes that
the host name already exists and asks if you still want to use it. The default value is “no” but type
“yes” to discover the new port set A WWPNs and add them to the host / VM (refer to Figure 11).
Note: In the test scenario, there are two virtual Fibre Channel adapters that require the addition of
four host / VM ports on the IBM XIV system. Use the parent HBA management utilities or the
XIVGUI to confirm the virtual machine storage connectivity.
Figure 11: Third portion of the IBM XIV Host Attachment Kit command-line installation after virtual FC port set swap
23. From the parent host, use the Failover Cluster Manager or PowerShell to confirm that both quick
and live migrations work properly by moving a VM back and forth between the cluster members
multiple times.
24. For large memory VMs, the live migration port set transition states can be seen in the IBM XIVGUI
host connectivity in three stages (refer to Figure 12, Figure 13, and Figure 14).
Figure 12: XIVGUI host connectivity stage one of HA VM live migration
Figure 13: XIVGUI host connectivity stage two of HA VM live migration
Figure 14: XIVGUI host connectivity stage three of HA VM live migration
For more information about Microsoft Hyper-V virtual FC, visit the following website:
http://technet.microsoft.com/en-us/library/hh831413.aspx
Hyper-V virtual FC CLI-based configuration
For administrators seeking the quickest and easiest method to incorporate VM FC adapters in their
Hyper-V environment, the key Brocade SAN switch zoning steps are provided. Apply the same logic to
any other vendor SAN switch zoning configurations. This eliminates the additional zoning steps
required in the aforementioned section. It is important to realize that it does not matter whether the virtual FC ports have ever been online or whether the VM is currently online; this applies at the host, SAN switch, and IBM XIV levels. Rather than repeating many of the previous GUI-based steps, only the pertinent SAN switch steps are shared.
1. After using the Failover Cluster Manager VM settings to add the virtual FC adapters, copy the WWPN information for port set A and B for each adapter to a text file.
2. Use the information to manually create the SAN objects including zones. The following SAN switch telnet excerpts are self-explanatory and detail the key lab testing configuration details.
VM port set aliases:
IBM_2498_B24:admin> alicreate "vm01_vfca1_port_address_set_a", "C0:03:FF:92:2E:45:00:04"
IBM_2498_B24:admin> alicreate "vm01_vfca1_port_address_set_b", "C0:03:FF:92:2E:45:00:05"
IBM_2498_B24:admin> alicreate "vm01_vfca2_port_address_set_a", "C0:03:FF:92:2E:45:00:06"
IBM_2498_B24:admin> alicreate "vm01_vfca2_port_address_set_b", "C0:03:FF:92:2E:45:00:07"
VM Fibre Channel adapter SAN switch zone creation:
IBM_2498_B24:admin> zonecreate "vm01_vfca1_pas_a_zone",
"vm01_vfca1_port_address_set_a ; xiv_mod5_p1 ; xiv_mod7_p1 ; xiv_mod9_p1"
IBM_2498_B24:admin> zonecreate "vm01_vfca1_pas_b_zone",
"vm01_vfca1_port_address_set_b ; xiv_mod5_p1 ; xiv_mod7_p1 ; xiv_mod9_p1"
IBM_2498_B24:admin> zonecreate "vm01_vfca2_pas_a_zone",
"vm01_vfca2_port_address_set_a ; xiv_mod4_p1 ; xiv_mod6_p1 ; xiv_mod8_p1"
IBM_2498_B24:admin> zonecreate "vm01_vfca2_pas_b_zone",
"vm01_vfca2_port_address_set_b ; xiv_mod4_p1 ; xiv_mod6_p1 ; xiv_mod8_p1"
SAN switch configuration addition of zones:
cfgadd "superlabr23_xiv_zone_cfg0", "vm01_vfca1_pas_a_zone ;
vm01_vfca2_pas_a_zone ; vm01_vfca2_pas_b_zone ; vm01_vfca1_pas_b_zone"
SAN switch configuration save:
IBM_2498_B24:admin> cfgsave
You are about to save the Defined zoning configuration. This action will only
save the changes on Defined configuration.
Any changes made on the Effective configuration will not take effect until it
is re-enabled.
Do you want to save Defined zoning configuration only? (yes, y, no, n): [no]
yes
Updating flash ...
SAN switch configuration enable:
IBM_2498_B24:admin> cfgenable "superlabr23_xiv_zone_cfg0"
You are about to enable a new zoning configuration. This action will replace
the old zoning configuration with the current configuration selected. If the
update includes changes to one or more traffic isolation zones, the update may
result in localized disruption to traffic on ports associated with the traffic
isolation zone changes
Do you want to enable 'superlabr23_xiv_zone_cfg0' configuration (yes, y, no, n): [no] yes
zone config "superlabr23_xiv_zone_cfg0" is in effect
Updating flash ...
SAN switch zone change confirmation:
IBM_2498_B24:admin> zoneshow
Defined configuration:
cfg: superlabr23_xiv_zone_cfg0
vm01_vfca1_pas_a_zone; vm01_vfca2_pas_a_zone;
vm01_vfca2_pas_b_zone; vm01_vfca1_pas_b_zone
zone: vm01_vfca1_pas_a_zone
vm01_vfca1_port_address_set_a; xiv_mod5_p1; xiv_mod7_p1;
xiv_mod9_p1
zone: vm01_vfca1_pas_b_zone
vm01_vfca1_port_address_set_b; xiv_mod5_p1; xiv_mod7_p1;
xiv_mod9_p1
zone: vm01_vfca2_pas_a_zone
vm01_vfca2_port_address_set_a; xiv_mod4_p1; xiv_mod6_p1;
xiv_mod8_p1
zone: vm01_vfca2_pas_b_zone
vm01_vfca2_port_address_set_b; xiv_mod4_p1; xiv_mod6_p1;
xiv_mod8_p1
alias: vm01_vfca1_port_address_set_a
c0:03:ff:92:2e:45:00:04
alias: vm01_vfca1_port_address_set_b
c0:03:ff:92:2e:45:00:05
alias: vm01_vfca2_port_address_set_a
c0:03:ff:92:2e:45:00:06
alias: vm01_vfca2_port_address_set_b
c0:03:ff:92:2e:45:00:07
alias: xiv_mod4_p1
50:01:73:80:4e:60:01:40
alias: xiv_mod5_p1
50:01:73:80:4e:60:01:50
alias: xiv_mod6_p1
50:01:73:80:4e:60:01:60
alias: xiv_mod7_p1
50:01:73:80:4e:60:01:70
alias: xiv_mod8_p1
50:01:73:80:4e:60:01:80
alias: xiv_mod9_p1
50:01:73:80:4e:60:01:90
Effective configuration:
cfg: superlabr23_xiv_zone_cfg0
zone: vm01_vfca1_pas_a_zone
c0:03:ff:92:2e:45:00:04
50:01:73:80:4e:60:01:50
50:01:73:80:4e:60:01:70
50:01:73:80:4e:60:01:90
zone: vm01_vfca1_pas_b_zone
c0:03:ff:92:2e:45:00:05
50:01:73:80:4e:60:01:50
50:01:73:80:4e:60:01:70
50:01:73:80:4e:60:01:90
zone: vm01_vfca2_pas_a_zone
c0:03:ff:92:2e:45:00:06
50:01:73:80:4e:60:01:40
50:01:73:80:4e:60:01:60
50:01:73:80:4e:60:01:80
zone: vm01_vfca2_pas_b_zone
c0:03:ff:92:2e:45:00:07
50:01:73:80:4e:60:01:40
50:01:73:80:4e:60:01:60
50:01:73:80:4e:60:01:80
This section shows the MPIO output after mapping an XIV volume to a VM with Fibre Channel adapters. A VM disk management rescan was performed before running mpclaim.exe -v mpclaim.txt from Windows PowerShell.
MPIO Storage Snapshot on Tuesday, 18 March 2014, at 23:28:33.381

Registered DSMs: 1
================
+--------------------------------|-------------------|----|----|----|---|----+
|DSM Name                        |      Version       |PRP | RC | RI |PVP| PVE|
|--------------------------------|-------------------|----|----|----|---|----|
|Microsoft DSM                   |006.0003.09600.16384|0020|0003|0001|030|False|
+--------------------------------|-------------------|----|----|----|---|----+

Microsoft DSM
=============
MPIO Disk0: 06 Paths, Round Robin, Symmetric Access
    SN: 0173804E600E
    Supported Load Balance Policies: FOO RR RRWS LQD WP LB

    Path ID          State              SCSI Address     Weight
    ---------------------------------------------------------------------------
    0000000077030002 Active/Optimized   003|000|002|001  0
        * TPG_State: Active/Optimized , TPG_Id: 0, TP_Id: 1792
        Adapter: Microsoft Hyper-V Fibre Channel HBA...  (B|D|F:000|000|000)
        Controller: 303137333830344536303030 (State: Active)

    0000000077030001 Active/Optimized   003|000|001|001  0
        * TPG_State: Active/Optimized , TPG_Id: 0, TP_Id: 2304
        Adapter: Microsoft Hyper-V Fibre Channel HBA...  (B|D|F:000|000|000)
        Controller: 303137333830344536303030 (State: Active)

    0000000077030000 Active/Optimized   003|000|000|001  0
        * TPG_State: Active/Optimized , TPG_Id: 0, TP_Id: 1280
        Adapter: Microsoft Hyper-V Fibre Channel HBA...  (B|D|F:000|000|000)
        Controller: 303137333830344536303030 (State: Active)

    0000000077040002 Active/Optimized   004|000|002|001  0
        * TPG_State: Active/Optimized , TPG_Id: 0, TP_Id: 1536
        Adapter: Microsoft Hyper-V Fibre Channel HBA...  (B|D|F:000|000|000)
        Controller: 303137333830344536303030 (State: Active)

    0000000077040001 Active/Optimized   004|000|001|001  0
        * TPG_State: Active/Optimized , TPG_Id: 0, TP_Id: 2048
        Adapter: Microsoft Hyper-V Fibre Channel HBA...  (B|D|F:000|000|000)
        Controller: 303137333830344536303030 (State: Active)

    0000000077040000 Active/Optimized   004|000|000|001  0
        * TPG_State: Active/Optimized , TPG_Id: 0, TP_Id: 1024
        Adapter: Microsoft Hyper-V Fibre Channel HBA...  (B|D|F:000|000|000)
        Controller: 303137333830344536303030 (State: Active)

MSDSM-wide default load balance policy: N\A

No target-level default load balance policies have been set.
After adding VM FC adapters, the Windows event log provides Hyper-V virtual FC-specific logging that can be helpful for troubleshooting. The log is located under Applications and Services Logs > Microsoft > Windows > Hyper-V SynthFC, as shown in Figure 15.
Figure 15: Hyper-V virtual Fibre Channel event log
For more information about Hyper-V virtual FC troubleshooting, visit the following website:
http://social.technet.microsoft.com/wiki/contents/articles/18698.hyper-v-virtual-fibre-channel-troubleshooting-guide.aspx
iSCSI
IBM XIV Storage System Gen3 supports 1Gb and optional 10Gb iSCSI connections. As a result, the IBM
XIV Storage System model 214, with the 10Gb iSCSI option, for machine types 2810 and 2812 offers
significantly higher iSCSI host throughput for Microsoft cloud solutions in comparison to previous models.
There are up to twenty-two 1Gb iSCSI ports or twelve 10Gb iSCSI ports that are spread evenly across the
IBM XIV interface modules. The 10Gb iSCSI option is preferred over the older 1Gb offering because it delivers far better storage performance.
When configuring IBM XIV host connectivity for iSCSI, the same process mostly applies to Hyper-V hosts
and guests. Hardware iSCSI HBAs are not currently supported so administrators must use the native
Microsoft Windows software iSCSI initiator. Make sure to use multiple iSCSI networks that are dedicated
exclusively to storage traffic. For the iSCSI jumbo frame maximum transmission unit (MTU) settings, select
9000 bytes, which is the largest supported value for the IBM XIV iSCSI ports. When enabling this, it must
be done at all network junctions – physical and virtual network interface cards (NICs), storage, and
physical switches. No further configuration is required on the Hyper-V virtual switch as it automatically
senses the MTU.
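The following Windows PowerShell sketch illustrates the host-side portion of this setup. The adapter name iSCSI-A and the portal address 192.168.50.10 are placeholders, and the jumbo frame registry value format can vary by NIC vendor, so confirm the exact keyword and value with Get-NetAdapterAdvancedProperty before applying it.
# Enable jumbo frames on a dedicated iSCSI NIC (9014 includes the Ethernet header on many adapters)
Set-NetAdapterAdvancedProperty -Name "iSCSI-A" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
# Make sure the Microsoft software iSCSI initiator service is running and starts automatically
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
# Register an IBM XIV iSCSI interface module port and connect with multipathing enabled
New-IscsiTargetPortal -TargetPortalAddress "192.168.50.10"
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true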
Similar to FC connectivity, the latest IBM XIV HAK for Windows needs to be installed on the Hyper-V hosts
and guests. The most up-to-date information for configuring IBM XIV iSCSI host connectivity is located at
the following websites:
 http://pic.dhe.ibm.com/infocenter/strhosts/ic/index.jsp?topic=%2Fcom.ibm.help.strghosts.doc%2Fhsg_hak_2.2.0.html
 ibm.com/redbooks/abstracts/sg247904.html
FCoE
Fibre Channel over Ethernet (FCoE) uses the IBM XIV FC ports by connecting to a converged networking
switch that either has dedicated FC ports or ones that can be defined as such. The latter depends on the
vendor switch hardware and software. On the host side, the physical servers must contain converged
networking adapters (CNA) that essentially present both an Ethernet port and a FC port to the OS.
For FCoE connections, the ports need to be zoned using the IBM XIV 8Gb FC ports on the converged
switch. Then, a FC host definition can be added using the IBM XIV HAK wizard. The host creation,
mapping of drives, and drive initialization is similar to any other FC-connected drive.
Also note that Hyper-V FCoE support is at the host-level only and there are no guest FCoE adapter
options available at this time. However, FCoE pass-through disks can be used for guests. Additionally,
both stand-alone and clustered Hyper-V implementations support FCoE.
For extensive IBM XIV FCoE configuration details for Hyper-V, review the IBM Reference Configuration for
Microsoft Private Cloud: Implementation Guide from IBM Redbooks at the following website:
ibm.com/redbooks/redpieces/abstracts/redp4829.html
FCoE configurations need to use supported CNAs, switches, software, and firmware levels. For a
complete list of supported hardware and software, review the compatibility support matrices at the
following website:
ibm.com/systems/support/storage/ssic/interoperability.wss
VHD and VHDX volumes
After the supported IBM XIV host connectivity methods have been determined, the type of guest volumes
should be considered. As explained earlier, the majority of customers place guest OS files on Hyper-V
host SAN volumes. When possible and applicable, Microsoft recommends using Hyper-V clusters with
CSVs to store the VM files. VHDs were used before Windows Server 2012 and are now considered legacy
devices since enhanced VHDX files are now available and designed to handle current and future
workloads.
As such, VHDX files have much larger storage capacity than their predecessor VHD files. VHDX files are
also less susceptible to data corruption during unexpected power outages and optimize structural
alignment of dynamic and differencing disks to prevent performance degradation on new, large-sector
physical disks.
Similar to physical machine volumes, VHD and VHDX files use either Hyper-V virtual IDE or SCSI
controllers. Before Windows Server 2012 R2, boot disks were required to use integrated device electronics
(IDE) for generation 1 VMs. The newer Windows Server 2012 R2 generation 2 VMs can now boot from
virtual SCSI controllers. SCSI-based volumes are more scalable and support multiple VHDs or VHDXs per
controller. Each VM supports four SCSI controllers with a maximum of 64 devices per controller and only
in extreme cases are multiple SCSI controllers required. Due to their expanded support capabilities, it is
recommended to use VM SCSI controllers when adding further VHDX storage. Naturally, the SCSI
connected device can be a VHDX, VHD, or a physical drive.
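For example, the following Windows PowerShell sketch attaches an additional VHDX file through a virtual SCSI controller; the VM name and file path are illustrative only.
# Add a SCSI controller only if the VM does not already have one with free locations
Add-VMScsiController -VMName "vm01"
# Attach the VHDX file to the virtual SCSI controller
Add-VMHardDiskDrive -VMName "vm01" -ControllerType SCSI -Path "C:\VMStorage\vm01-data01.vhdx"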
Additionally, similar to VHD files, VHDX files can also be either fixed or dynamically expanding. It is
recommended to use fixed VHDX files if performance is of a higher priority than space efficiency.
Conversely, use dynamic VHDX files if saving space is more important but because the disk continues to
dynamically grow, fragmentation can eventually impact performance. As with IBM XIV thin provisioning,
when using dynamic VHDX files, it is crucial to monitor the storage array used capacity. Administrators
can encounter all sorts of problems, including VMs going offline, if storage free space is exhausted with
VMs that use thin provisioning or dynamic VHDX files. So, it is recommended to take the necessary
precautions and actively monitor the storage capacity levels and trends.
Finally, as virtual machines are migrated to Windows Server 2012, consider converting VHD to VHDX
format to take advantage of the improved performance and capabilities. This includes both traditional and
guest failover cluster Hyper-V VMs.
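The following Windows PowerShell sketch shows both choices described above; the paths and sizes are illustrative, and a disk must be detached from any running VM before it is converted.
# Fixed VHDX when performance matters more than space efficiency
New-VHD -Path "C:\VMStorage\vm01-logs.vhdx" -SizeBytes 200GB -Fixed
# Dynamically expanding VHDX when space efficiency matters more (monitor array capacity)
New-VHD -Path "C:\VMStorage\vm01-archive.vhdx" -SizeBytes 500GB -Dynamic
# Convert a legacy VHD to the VHDX format after migrating to Windows Server 2012 or later
Convert-VHD -Path "C:\VMStorage\vm01-data.vhd" -DestinationPath "C:\VMStorage\vm01-data.vhdx"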
Shared VHDX feature
Beginning with Microsoft Windows Server 2012 R2, guest cluster VMs are much less constrained by storage
topology or protocol limitations such as iSCSI quorums or application data and log volumes. Microsoft
introduced a new shared VHDX feature that allows multiple VMs to access the same volume or VHDX file
much in the same manner that multiple traditional cluster hosts access the same physical volumes or
LUNs.
This is an evolutionary feature that expands CSV capabilities specifically targeting HA VMs. As such, the
VM VHDX files must reside on parent host CSVs in order to take advantage of shared VHDX
enhancements. As a result, guest clusters can use a shared VHDX file that resides on an IBM XIV FC
LUN configured as a CSV. In turn, the VM benefits not only from system high availability but also
application high availability, which is the primary appeal to guest clustering. So, both physical server and
application resource failures can trigger failovers to other healthy cluster nodes thus significantly reducing
business-critical VM outages.
Using shared VHDX files is typically recommended for the following configurations:
 Guest cluster quorum
 VM application data and log file volumes
 Guest file server services
Before deploying shared VHDX files in production cloud environments, keep the following support
considerations in mind:
 VM data drives must use the .vhdx file format, but the VM operating system can reside on either the .vhd or the .vhdx file format
 Decide whether to use generation 1 or generation 2 VMs
 VHDX files are only supported for Windows 8, Windows Server 2012 R2, and Windows Server 2012 VMs with the integration services installed
Even though the shared VHDX feature has two deployment models, CSVs on block storage and scale-out
file server with server message block (SMB) 3.0 on file-based storage, only the first model is applicable to
the IBM XIV Storage System Gen3 at this time. SMB 3.0 or greater support is planned for a future IBM XIV
release and thus not currently supported. Nevertheless, storage administrators can perform the following
steps to configure this feature for guest cluster VMs.
1. Using the Failover Cluster Manager on the active host, right-click the guest cluster VM and click Settings.
2. Select the SCSI Controller in the left settings pane and Hard Drive in the right pane. Then, click Add (refer to Figure 16).
Figure 16: Adding a hard drive through the guest cluster VM settings
3. With the Virtual hard disk option selected, click New (refer to Figure 17).
Figure 17: Adding a new virtual hard disk through the guest cluster VM settings
4. Review the initial page in the new virtual hard disk wizard and click Next (refer to Figure 18).
Figure 18: New virtual hard disk wizard
5. Select the preferred disk type and click Next (refer to Figure 19).
Figure 19: New virtual hard disk wizard disk type selection
6. Specify the VHDX file name and location and click Next (refer to Figure 20).
Figure 20: Specifying the name and location of the new hard disk file
7. Select the option to create a new blank virtual hard disk, enter its size, and click Next (refer to Figure 21).
Figure 21: New virtual hard disk wizard configure disk
8. Confirm the new virtual hard disk configuration summary and click Finish (refer to Figure 22).
Figure 22: New virtual hard disk wizard completion
9. In the VM settings left pane, expand the new hard drive and click Advanced Features. In the settings pane on the right side, select the Enable virtual hard disk sharing check box, click OK to apply the settings, and close the window (refer to Figure 23).
Figure 23: First guest cluster VM settings hard drive advanced features
10. Right-click another guest cluster VM and click Settings.
11. Select SCSI Controller in the left settings pane and Hard Drive in the right pane. Click Add (refer
to Figure 24).
Figure 24: Adding a hard drive through an additional guest cluster VM’s settings
12. With the Virtual hard disk option selected, click Browse (refer to Figure 25).
Figure 25: Browsing for an existing shared VHDX file through an additional guest cluster VM’s settings
13. Select or enter the previously created shared VHDX file (refer to Figure 26).
Figure 26: Adding an existing shared VHDX file through an additional guest cluster VM’s settings
14. In the VM settings in the left pane, expand the new hard drive and click Advanced Features. In
the right pane, select the Enable virtual hard disk sharing check box, click OK to apply the
settings, and close the window (refer to Figure 27).
Figure 27: Second guest cluster VM settings hard drive advanced features
15. Use Windows Disk Management for each VM to confirm the presence of the newly added shared
VHDX volume and add it to the guest cluster. At this stage, this process is identical to adding disks
to a physical or a traditional Microsoft failover cluster.
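For reference, an equivalent Windows PowerShell sketch for the preceding steps is shown below. The CSV path and guest cluster VM names are placeholders, and the commands must be run on the Hyper-V host (or hosts) that currently own the VMs.
# Create the shared data disk on a cluster shared volume
New-VHD -Path "C:\ClusterStorage\Volume1\guestcluster-data.vhdx" -SizeBytes 100GB -Dynamic
# Attach the same VHDX to each guest cluster VM with virtual hard disk sharing enabled
foreach ($vm in "guestclu-vm1", "guestclu-vm2") {
    Add-VMHardDiskDrive -VMName $vm -ControllerType SCSI `
        -Path "C:\ClusterStorage\Volume1\guestcluster-data.vhdx" `
        -SupportPersistentReservations
}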
For further information about the Microsoft Windows Server 2012 R2 shared VHDX feature, visit the following
website:
http://blogs.technet.com/b/storageserver/archive/2013/11/25/shared-vhdx-files-my-favorite-new-feature-in-windows-server-2012-r2.aspx
Pass-through disks
Even though many consider pass-through disks more of a legacy option of the past, they are still viable
and offer some of the better VM storage performance, similar to iSCSI or virtual FC. However, for Hyper-V
environments with IBM XIV storage, virtual FC is the preferred and best performing direct-access option.
Thus, when migrating VMs with pass-through disks to Windows Server 2012, administrators might need to consider switching those VMs to virtual FC.
Pass-through disks are IBM XIV volumes mapped directly to a Hyper-V host where they are initialized but
taken offline to the host while exclusive connectivity is passed to the VM. The VM requires a virtual SCSI
controller to access the pass-through disk and up to 64 disks can be connected per virtual SCSI controller.
Each VM supports up to four SCSI controllers providing similar high performance and scalability in
comparison to physical machines. In fact, after the VM connects to a pass-through disk, it is configured in
the same manner as a physical machine disk.
Microsoft supports bootable pass-through disks for the VM OS as well. It is just a matter of initializing the
host disk and taking it offline before the VM OS installation. With legacy generation 1 VMs that use a
Hyper-V basic input/output system (BIOS), an IDE controller must be used for the bootable pass-through
disk. As a reminder, Windows Server 2012 R2 now supports the use of SCSI controller bootable disks with the
introduction of a generation 2 VM that uses a Hyper-V Unified Extensible Firmware Interface (UEFI).
IBM XIV online volume migration using Hyper-Scale Mobility
Very similar in nature to the Hyper-V storage live migration process but from a storage array perspective,
IBM XIV Storage System Gen3 offers online volume migrations using Hyper-Scale Mobility. This allows
storage administrators to perform online volume migrations that do not disrupt physical or virtual business-critical application services. Basically, using a single IBM XIV management interface such as the XIV GUI,
customers can move Hyper-V host, VM system, and application data and log LUNs between storage
arrays with negligible impact to the production Hyper-V hosts or VMs.
This IBM XIV technology is made possible by enhancing existing data replication features to offer a new
type of data mobility feature designed specifically for online volume migrations where source volume
storage is to be decommissioned, rebalanced, or repurposed. However, unlike IBM XIV data replication,
IBM Hyper-Scale Mobility is not intended for disaster recovery and only supports synchronous Fibre
Channel connectivity between storage arrays located at shorter distances within the same site.
To summarize, IBM XIV Hyper-Scale Mobility provides the following key benefits to Hyper-V cloud
environments:
 Graceful retirement of aging storage systems that allows painless upgrades to the latest IBM XIV technology.
 Uncomplicated data growth management with increased flexibility in storage capacity forecasting for both thick- and thin-provisioned environments.
 Appreciably reduced performance administration by being able to fluidly balance similar physical or virtual workload profiles across systems or move volumes from over-utilized to under-utilized storage arrays, inevitably helping IT departments decrease their storage TCO.
For further information about IBM XIV Storage System Gen3 online volume migrations using Hyper-Scale
Mobility, visit the following website:
ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102397
Hyper-V CSV Cache
Beginning with Windows Server 2012 R2, CSV Cache is enabled by default; it improves storage performance by using system memory (RAM) as a write-through cache. CSV Cache is best suited for HA VM workload profiles
that consist of heavy read requests and fewer writes. Microsoft cites pooled VDI VMs and VM boot storm
reduction as good use cases. Due to this inherent design, only read request performance is boosted
without caching write requests. Furthermore, up to 20% of RAM can be allocated for Windows Server
2012, and up to 80% with Windows Server 2012 R2. CSV Cache is built into the Failover Clustering
feature, which manages and balances the performance across all nodes in the cluster. For Windows
Server 2012 R2 Hyper-V CSVs, the default value is set to zero but a minimum value of 512 MB is
recommended. However, additional testing with larger CSV Cache values can improve individual Hyper-V
environment performance. The Performance Monitor Cluster CSV Volume Cache counters can be used to
determine the optimal CSV Cache value.
The following elevated Windows PowerShell command syntax is provided to modify the CSV Cache on a
Windows Server 2012 R2 cluster node.
Get existing CSV Cache value:
(Get-Cluster).BlockCacheSize
Set CSV Cache value:
(Get-Cluster).BlockCacheSize = 512
Confirm CSV Cache new value:
(Get-Cluster).BlockCacheSize
For further information about CSV Cache, visit the following website:
http://blogs.msdn.com/b/clustering/archive/2013/07/19/10286676.aspx
Resource metering
Of course, with all of the new Windows Server 2012 virtualization features, it helps that Microsoft also
included extensive Hyper-V Resource Metering. Hyper-V Resource Metering allows hosting providers and
organizations to collect and track physical processor, memory, network, and storage usage metrics at the
VM level. That way, cloud administrators can closely monitor VM resource utilization and can detect
performance bottlenecks and prevent outages.
The data collection can also be used for tracking capacity, tracking business unit resource usage for
charge-backs, and analyzing various workload costs. Fortunately, common HA VM migrations do not
negatively impact the data collection process and results. Resource metering is available through
PowerShell cmdlets and new APIs in the virtualization WMI provider.
VM resource usage measures the following data:
 Average processor usage, in megahertz (MHz)
 Average physical memory usage, in megabytes (MB)
 Minimum memory usage (lowest physical memory)
 Maximum memory usage (highest physical memory)
 Maximum amount of disk space used by each VM
 Total inbound network traffic, in MB, by virtual network adapter
 Total outbound network traffic, in MB, by virtual network adapter
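The following Windows PowerShell sketch shows the basic metering workflow for a single VM; vm01 is an illustrative name, and the cmdlets can also be run against all VMs on a host by piping Get-VM.
# Start collecting resource metering data for the VM
Enable-VMResourceMetering -VMName "vm01"
# ... after the VM has run its workload for a while, report the collected metrics
Measure-VM -VMName "vm01" | Format-List *
# Reset the counters when beginning a new measurement interval
Reset-VMResourceMetering -VMName "vm01"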
Microsoft System Center Virtual Machine Manager
SCVMM 2012 is Microsoft’s equivalent of VMware vCenter Server that allows administrators to manage
multiple virtual infrastructure hosts, including both Hyper-V and VMware, from a single administrative
interface. Using the SCVMM 2012 console, various support groups, including virtualization, database,
systems, and storage, are able to perform a full range of core storage management tasks. Administrators
can discover, classify, allocate, provision, map, assign, and decommission storage associated with
clustered and stand-alone Hyper-V hosts. Previously, many of these basic tasks required additional
storage workflow processes and proprietary applications but are now streamlined using a combination of
Microsoft SCVMM 2012 and IBM XIV Gen3 storage automation features.
For detailed information about the benefits of using Microsoft SCVMM 2012 with IBM XIV, visit the
following website:
ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102071
VM data protection
While there are numerous methods to protect Hyper-V VMs, in order to take advantage of storage array
snapshot technology and avoid potential Ethernet network latencies during backups and restores, virtual
solutions, including their corresponding applications, must support the Microsoft Volume Shadow Copy
Service (VSS) framework. Note that snapshot and shadow copy terminology is often interchanged in
Microsoft environments and refers to the same concept. Fundamentally, hardware or storage-based VM
backups require specific VSS framework components that are described in the following list and illustrated
in Figure 28.
 VSS service: The VSS service is part of the host or guest Windows operating system and acts as the communication hub, coordinator, or interpreter for all of the framework components.
 VSS requestor: The VSS requestor is the backup application, such as the IBM Tivoli Storage data protection software.
 VSS writer: The VSS writer is responsible for ensuring consistent backup data sets and is normally installed by default on the Hyper-V host or during guest application installations.
 VSS provider: The VSS provider consists of the hardware / software combination that generates the actual snapshot volume. In this case, the snapshots are generated by the IBM XIV Storage System VSS provider and the storage system itself.
Figure 28: Microsoft VSS framework
Basically, storage array snapshot technology enables a point-in-time, block-level data copy with minimal impact to production systems or applications, and it is ordinarily viewed from two perspectives. From the physical or virtual machine operating system perspective, snapshots are used to create crash-consistent backups.
Virtual machine crash-consistent backups are typically host-initiated using the Hyper-V writer and do not
account for application transaction states. From the application perspective, snapshots can be used to
create application-consistent backups. In order for this to work correctly, applications (that provide VSS
writers) are briefly quiesced to allow the disk subsystem to concurrently create snapped copies of
application data and log volumes. This helps to ensure that there is no pending I/O or uncommitted
application transactions (in memory or transaction logs) during the snapshot process which could be lost
during crash-consistent backups. Thus, all application I/O is flushed to disk during the quiesce process.
Application-consistent backups are especially important for database applications, such as Microsoft SQL
Server, and are host- or guest-initiated depending on the solution implementation and individual VSS
software support. Additionally, do not forget that guest-initiated application-consistent backups are like traditional physical backups because a VM using iSCSI, virtual FC adapters, or pass-through disks can use application data and log SAN volumes or LUNs similar to a physical machine. Comparing guest-initiated and host-initiated snapshot backup methods reveals that both data protection practices have their own unique and compelling benefits, and choosing between them largely comes down to preference.
In any case, in order to fully protect VMs, many organizations often end up using a combination of host-based crash-consistent and guest-based application-consistent backup methods to provide the highest level of data integrity and recovery. Similarly, many also employ a combination of VSS hardware-based backups with granular file-level restores using local shadow copies created by the source SAN snapshot. The restore process is slower due to the file-level copy process, but this allows them to restore individual files without
overwriting the entire volume. This type of backup and restore flexibility is available with backup software
such as IBM Tivoli Storage Manager.
Tivoli FlashCopy Manager
For the most part, IBM Tivoli Storage FlashCopy Manager facilitates guest-based application-consistent backups using the Microsoft VSS framework in conjunction with IBM XIV Storage System Gen3
advanced snapshot technology. As a result, backup administrators can apply familiar, comprehensive
physical server backup mechanisms to their Hyper-V guests. Ultimately, FlashCopy Manager simplifies
Hyper-V VM application-consistent data protection by providing support for hardware-assisted snapshots.
As an added benefit of using VM direct-access application data and log volumes, administrators can
remap the volumes to other virtual or physical machines in a pinch during the worst-case emergencies.
Figure 29 illustrates a sample FlashCopy Manager backup configuration for Microsoft SQL Server VMs
running on Hyper-V cluster nodes.
Figure 29: FlashCopy Manager Hyper-V configuration
IBM Tivoli FlashCopy Manager provides a user-friendly graphical interface as well as full command-line
capabilities for popular physical and virtual Microsoft SQL Server and Exchange Server deployments. An
example of the FlashCopy Manager console is provided in Figure 30.
Figure 30: IBM Tivoli Storage FlashCopy Manager guest interface
For further information about backing up Hyper-V VMs using FlashCopy Manager, visit the following
website:
http://pic.dhe.ibm.com/infocenter/tsminfo/v6r3/index.jsp?topic=%2Fcom.ibm.itsm.client.doc%2Fc_bac_hyperv.html
Conclusion
The condensed Microsoft Hyper-V configuration guidelines and best practices for IBM XIV Storage System
Gen3 provided in this white paper can help IT administrators with their virtualization consolidation and
streamlining efforts using a variety of user-friendly management interfaces that include Microsoft SCVMM
2012. Additionally, with all of the new robust Hyper-V storage options available for virtual data centers, all
included with favorable Microsoft licensing and intuitive usability, building or expanding cloud
infrastructures with small footprints and a reduced TCO has never been easier, especially with an IBM XIV foundation. Furthermore, as demonstrated, this end-to-end partner solution includes all of the expected
enterprise-class features that enhance physical and virtual resource agility, performance, high availability,
and scalability to help customers meet their virtualization goals. For additional Microsoft Hyper-V and IBM
XIV configuration guidelines, refer to the “Resources” section.
Resources
The following websites provide useful references to supplement the information contained in this paper:
 IBM Systems on PartnerWorld
ibm.com/partnerworld/systems
 IBM XIV Host Attachment Kit for Windows
pic.dhe.ibm.com/infocenter/strhosts/ic/index.jsp?topic=%2Fcom.ibm.help.strghosts.doc%2Fhak-homepage.html
 IBM XIV Storage System Gen3 Architecture, Implementation, and Usage
ibm.com/redbooks/redbooks/pdfs/sg247659.pdf
 IBM disk storage systems
ibm.com/systems/storage/disk/?lnk=mprST-dsys-usen
 IBM solutions from independent software vendors, partners and solution providers
ibm.com/systems/storage/solutions/isv/
 IBM XIV Storage System
ibm.com/systems/storage/disk/xiv/index.html
 IBM XIV Storage System: IBM Hyper-Scale Mobility Overview and Usage
ibm.com/redbooks/abstracts/redp5007.html
 IBM System x servers
ibm.com/systems/x/index.html
 Microsoft TechNet – Hyper-V architecture and features overview
technet.microsoft.com/en-us/library/hh831531.aspx
 Microsoft TechNet – General SCVMM overview and support
technet.microsoft.com/en-us/library/gg610610.aspx
 What's New in Hyper-V for Windows Server 2012 R2
technet.microsoft.com/en-us/library/dn282278.aspx
 What's New in Failover Clustering in Windows Server 2012 R2
technet.microsoft.com/en-us/library/dn265972.aspx
 What's New in Windows Server 2012 R2
technet.microsoft.com/en-us/library/dn250019.aspx
Trademarks and special notices
© Copyright IBM Corporation 2014.
References in this document to IBM products or services do not imply that IBM intends to make them
available in every country.
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked
terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these
symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information
was published. Such trademarks may also be registered or common law trademarks in other countries. A
current list of IBM trademarks is available on the Web at "Copyright and trademark information" at
www.ibm.com/legal/copytrade.shtml.
Microsoft, Windows, SQL Server, Windows NT, and the Windows logo are trademarks of Microsoft
Corporation in the United States, other countries, or both.
Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States,
other countries, or both.
INFINIBAND, InfiniBand Trade Association and the INFINIBAND design marks are trademarks and/or
service marks of the INFINIBAND Trade Association.
Other company, product, or service names may be trademarks or service marks of others.
Information is provided "AS IS" without warranty of any kind.
All customer examples described are presented as illustrations of how those customers have used IBM
products and the results they may have achieved. Actual environmental costs and performance
characteristics may vary by customer.
Information concerning non-IBM products was obtained from a supplier of these products, published
announcement material, or other publicly available sources and does not constitute an endorsement of
such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly
available information, including vendor announcements and vendor worldwide homepages. IBM has not
tested these products and cannot confirm the accuracy of performance, capability, or any other claims
related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the
supplier of those products.
All statements regarding IBM future direction and intent are subject to change or withdrawal without notice,
and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the
full text of the specific Statement of Direction.
Some information addresses anticipated future capabilities. Such information is not intended as a definitive
statement of a commitment to specific levels of performance, function or delivery schedules with respect to
any future products. Such commitments are only made in IBM product announcements. The information is
presented here to communicate IBM's current investment and development activities as a good faith effort
to help with our customers' future planning.
Performance is based on measurements and projections using standard IBM benchmarks in a controlled
environment. The actual throughput or performance that any user will experience will vary depending upon
considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the
storage configuration, and the workload processed. Therefore, no assurance can be given that an
individual user will achieve throughput or performance improvements equivalent to the ratios stated here.
Photographs shown are of engineering prototypes. Changes may be incorporated in production models.
Any references in this information to non-IBM websites are provided for convenience only and do not in
any manner serve as an endorsement of those websites. The materials at those websites are not part of
the materials for this IBM product and use of those websites is at your own risk.